text
stringlengths
234
589k
id
stringlengths
47
47
dump
stringclasses
62 values
url
stringlengths
16
734
date
stringlengths
20
20
file_path
stringlengths
109
155
language
stringclasses
1 value
language_score
float64
0.65
1
token_count
int64
57
124k
score
float64
2.52
4.91
int_score
int64
3
5
The Intersection of Healthcare, Privacy, and Technology In the whirlwind that has been caused by the recent decision by The Supreme Court of the United States to overturn the landmark Roe V. Wade case, many citizens around the country are concerned about the privacy of their personal healthcare information. Despite the fact that the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) recently released new privacy guidelines that prohibit healthcare providers from disclosing the healthcare information of their patients unless a very narrow set of circumstances are met, the various privacy and healthcare laws that are on the book in states around the country have created a lot of ambiguity concerning the legality of disclosing Protection Healthcare Information (PHI). Information Blocking Rule In addition to the Health Insurance Portability and Accountability Act (HIPAA), as well as the Health Information Technology for Economic and Clinical Health Act (HITECH), one of the primary federal regulations that govern the collection and dissemination of the healthcare information of U.S. patients is The 21st Century Cures Act: Interoperability, Information Blocking, and the ONC Health IT Certification Program, also known as the Information Blocking Final Rule. Generally speaking, the Information Blocking Rules serves to enhance the facilitation and transmission of healthcare information between doctors, patients, and relevant third parties. To this point, the law forbids doctors, nurses, and healthcare IT providers from developing policies that would be likely to interfere with the access or exchange of Electronic Healthcare Information (EHI). Exceptions to the law While the Information Blocking Rule makes it illegal for healthcare providers to take steps that would prevent their patients from sharing or accessing their healthcare information, there are 8 exceptions to this rule. In the context of the overturning of the Roe V. Wade case, the privacy exception that is outlined in the Information Blocking Rule is of the most relevance. As stated in the law “it will not be information blocking for an actor to interfere with the access, exchange, or use of EHI in order to protect the security of EHI, provided certain conditions are met.” Subsequently, what constitutes these certain conditions has led to concerns from privacy advocates around the U.S. Google and data security As major corporations such as Google have become intertwined with the accessing and sharing of personal information due to the role they play in our current society, privacy advocates argue that Google’s massive trove of location data can be used to infringe on the healthcare and reproductive privacy rights of the American people, regardless of any legislation that would seem to contradict such practices. For example, a woman that is looking to receive an abortion in a U.S. state that is looking to outlaw the practice could have their information sent to a law enforcement agency due to the location data they provide to Google via their cellphone. In spite of the fact that this woman’s healthcare providers should theoretically be able to block such a request to access this personal information in accordance with the Privacy Exception of the Information Blocking Rule, there is currently a great level of grey area regarding the matter. In response to concerns that American citizens and privacy advocates alike have raised in relation to Google’s ability to seamlessly access location information, the corporation stated in July of 2022 that they were modifying its data collection and processing practices to both identify and delete location data for U.S. citizens that visit places deemed as “particularly personal.” Ambiguous privacy standards More specifically, Google defines places that are “particularly personal” to include “medical facilities such as abortion clinics, fertility centers, counseling centers, domestic violence shelters, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others.” Notwithstanding the changes that Google plans to make in terms of the way the company manages the location data of American citizens, the new privacy guidelines that were released by the HHS OCR in June of 2022 do permit law enforcement agencies to request access to EHI when such a request “pursuant to the process and as otherwise required by law.” As the overturning of Roe V. Wade impacts a multitude of factors that currently influence American society, including access to healthcare, data protection, civil rights, and technology services, among a host of others, the impact of the decision will continue to unravel for years to come. Likewise, the ability for American citizens to be prosecuted in conjunction with personal information pertaining to them that has been collected and obtained by third parties such as Google is a notion that has no precedent in U.S. history. Nevertheless, protecting the privacy and healthcare reproductive rights of American citizens must remain at the forefront of all of our minds, as the overturning of legislation does not mean that the lives of innocent people must be put at risk.
<urn:uuid:ec3cfa9c-c0d4-4919-a158-e8e103e1d8f7>
CC-MAIN-2024-38
https://caseguard.com/articles/the-intersection-of-healthcare-privacy-and-technology/
2024-09-09T14:43:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00525.warc.gz
en
0.953971
975
2.515625
3
Data is everywhere and growing rapidly. According to some estimates, people and systems create millions of terabytes of data every day, with unstructured data accounting for an estimated 80% of a company’s information. Data in the cloud is growing the most because there is practically no physical limit on storage compared to on-premise data centers. Data tends to be inadequately managed and challenging to monitor and control. Users often transfer sensitive files to cloud services, email them, and save them on their laptops and mobile devices. When sharing with internal and external users, files move through collaboration applications, resulting in the distribution of more information across various platforms and geographies. How can you safeguard sensitive data when much of it may be hidden from view? Ron Arden addresses these challenges and more in a recent TechSpective article.
<urn:uuid:63aa97a6-0f14-474d-b87b-fa8d9dd4b718>
CC-MAIN-2024-38
https://en.fasoo.com/news-and-events/techspective-the-role-of-enhanced-visibility-for-data-privacy-and-security/
2024-09-09T16:14:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00525.warc.gz
en
0.932789
168
2.6875
3
Debian and Ubuntu are distributions that lend themselves naturally to comparison. Not only is Ubuntu a continuing fork of Debian, but many of its developers also work on Debian. Even more important, you sometimes hear the suggestion that Ubuntu is a beginner’s distribution, and that users might consider migrating to Debian when they gain experience. However, like many popular conceptions, the common characterizations of Debian and Ubuntu are only partially true. Debian’s reputation as an expert’s distribution is partly based on its state a decade ago, although it does provide more scope for hands-on management if that is what you want. Similarly, while Ubuntu has always emphasized usability, like any distro, much of its usability comes from the software that it includes — software that is just as much a part of Debian as of Ubuntu. So what are the differences between these Siamese twins? Looking at installation, the desktop, package management, and community in the two distributions what emerges is not so much major differences as differences of emphasis, and ultimately, of philosophy. Ubuntu’s standard installer places few demands on even novices. It consists of seven steps: the selection of language, time zone, and keyboard, partitioning, creating a user account, and confirmation of your choices. Of these steps, only partitioning is likely to be alarming or confusing, and, even there, the choices are laid out clearly enough that any difficulty should be minimized. The limitation of the Ubuntu installer is that it offers little user control over the process. If you are having trouble installing, or want more control, Ubuntu directs you to its alternate CD. This alternate CD is simply a rebranded version of the Debian Installer. Contrary to what you may have heard, the Debian Installer is not particularly hard to use. True, its graphical version lacks polish, and, if you insist on controlling every aspect of your installation, you might to blunder into areas where you can only guess at the best choice. However, the Debian Installer caters to less experienced users as much as experts. If you choose, you can install Debian from it by accepting its suggestions with only slightly more difficulty than installing Ubuntu would take. Desktops and Features Both Debian and Ubuntu are GNOME-centered distros. Although each supports a wide variety of other desktops, including KDE, Xfce, and LXDE, they tend to be of secondary importance. For instance, it took six weeks for Debian to produce packages for KDE 4.4, while Kubuntu, Ubuntu’s KDE release, has received relatively little attention in Ubuntu’s efforts at improving usability. Debian offers a version of GNOME that, aside from branded wallpaper, is little changed from what the GNOME project itself releases. By contrast, Ubuntu’s version of GNOME is highly customized, with two panels, whose corners are reserved for particular icons: the main menu in the upper left, exit options in the upper right, show desktop in the bottom left, and trash in the bottom right. Ubuntu’s GNOME also features a notification system and a theme that places title bar buttons on the left — controversial innovations that are unique to Ubuntu (unless some of its derivatives have adopted them recently). In its drive towards usability and profitability, Ubuntu also boasts several utilities that are absent from Debian. These include Hardware Drivers, which helps to manage proprietary drivers, Computer Janitor, which helps users remove unnecessary files from the system, and the Startup Disk Creator wizard. In addition, Ubuntu offers direct links to Ubuntu One, Canonical’s online storage, and the Ubuntu One music store. Theoretically, these extra features should make Ubuntu easier to use. And, perhaps for absolute newcomers, they do. However, for many users, the difference between the standard Debian and Ubuntu desktops will be minimal. These days, what determines the desktop experience is less the distribution than the desktop project itself. Ubuntu does usually make new GNOME releases available faster than Debian does. But if you are using the same version, your desktop experience will not differ significantly no matter which of the two you use. Packages, Repositories, and Release Cycles Debian and Ubuntu both use .DEB-formatted packages. In fact, Ubuntu’s packages come from the Debian Unstable repository for most releases, and from the Debian Testing repository for long term releases (see next page). However, that does not mean that you can always use packages from one of these two distributions in the other. If nothing else, Debian and Ubuntu do not always use the same package names, so you may have dependency problems if you try. For example, in Debian, you want kde-full or kde-minimal to install KDE, while in Debian, the package you want is kubuntu-desktop. The differences in names can be especially difficult to trace in programming libraries. Another difference is the organization of online software repositories. Famously, Debian divides its repositories into Unstable, Testing, and Stable. There is also Experimental, but, since that is only for the roughest of packages — the first drafts, you might say — most users either avoid it, or else take only the smallest, most self-contained packages from it. Packages that meet the minimal standards for quality for Debian are uploaded to Unstable, and then find their way into Testing. There, they stay until a new Stable release is planned, eventually undergoing a final series of bug-testing and being included in the new release. In effect, the Debian system allows you to choose your own level of risk and innovation. If you want the very latest software, you can use Unstable — at the risk of running into problems. Alternatively, you can choose Stable for well-tested software supported by constant security updates — at the risk of missing out on the latest software releases. Since Debian releases can be irregular, sometimes, the Stable release is extremely old indeed. Similarly, the internal organization of each Debian repository allows you to choose the degree of software freedom that you prefer. Unstable, Testing and Stable are each further subdivided into main (free software), contrib (free software dependent on other none free software) and non-free (software free for the download, but having a non-free license). By default, Debian installs with only main enabled, so you have to edit /etc/apt/sources.list if you want the other repositories. All this is very different from the organization of Ubuntu’s repositories. Instead of being organized by testing status, Ubuntu’s repositories consist of Main (software supported by Canonical, Ubuntu’s commercial arm), Universe (software supported by the Ubuntu community), Restricted (proprietary drivers), and Multiverse (software restricted by copyright or legal issues). In recent years, these have been joined by Backports (software for earlier releases), and Partners (software made by third parties). For those who want the very latest, Ubuntu also has Launchpad, a combination of a project website and Debian’s Experimental repository. The result is a mixture of criteria. Ubuntu’s Main repository is free and tested, while Universe is free but possibly untested (nor do you have any quick way of knowing which packages are untested). Restricted and Multiverse are proprietary, but their testing status is uncertain, while the freedom and quality of Backports and Partners packages has to be individually researched. As with Debian’s repositories, Ubuntu’s show a concern with quality and software freedom, but, unlike with Debian, judging a package by either criteria is vastly more difficult. Given that Ubuntu releases on a six-month cycle, using packages from Debian Unstable or Testing, on the whole Ubuntu’s software tends to be less well-tested than a Debian official release based on Stable. In fact, from time to time, you can see complaints on the Ubuntu community forums about problems with particular packages. Such complaints are far less common in Debian. However, to be fair, impatience with the slowness of official releases tempts countless Debian users into dabbling with Testing, Unstable, or even Experimental, and rendering their systems unusable. Community and governance For many users, technical issues are probably the main concern when choosing a distribution. However, for more experienced users, the communities and how they operate can be equally important — and Debian and Ubuntu could hardly be more different. The Ubuntu community is only six years old, but it long ago established itself as a very different place from Debian. Interactions in the Ubuntu community are governed by a Code of Conduct, which is largely successful in ensuring that discussions are polite and constructive. At the very least, the code provides a measure of expected behavior that can be referred to when discussions threaten to run out of control. By contrast, the Debian community has a reputation of being a more aggressive place — one that has sometimes been accused of being unfriendly to women in particular and newcomers in general. This atmosphere has improved in the last few years, but can still flare up. One reason for this atmosphere is that Debian is an institutionalized meritocracy. Although non-developers can write documentation, test bugs or be part of a team, becoming a full Debian Developer is a demanding process, in which candidates must be sponsored by an existing developer, and repeatedly prove their competence and commitment. That said, among full developers, Debian is a radical democracy, with its own constitution outlining how it is run and how decisions are made. The Debian Leader is elected by the Condorcet method of ballot counting, and has more power to control than to coordinate. Instead, mailing lists are used to discuss problems to the point of exhaustion, and general resolutions about distribution policy may follow. One reason that Debian discussions have a reputation for unruliness may simply be that much of the governance is done in public by dedicated people. Ubuntu shares the tendency to meritocracy and transparency that is part of most free software projects. However, although that spirit prevails most of the time in daily interactions, ultimately, major policy decisions are determined by Mark Shuttleworth, Ubuntu’s founder and self-described benevolent dictator for life. Those who work closely with Shuttleworth — most frequently, Canonical employees — also tend to have a larger say than others in the Ubuntu community. However, this authority tends not to be exercised except for deciding major strategic directions, and, even then, only after monitoring of community discussion. In the end, the difference between the Debian and Ubuntu communities lies in their core values. Although less important than a few years ago, Debian remains a community-based distribution, dedicated to its own concepts of software freedom and meritocratic democracy, even at the expense of rapid decision making. Ubuntu, however, for all its strong community, is also the key to Canonical’s success as a business. If Ubuntu is more hierarchical than Debian, it is still more open than the majority of high-tech companies. Making a choice Despite their common origin, Debian and Ubuntu today are significantly different. When you are choosing between them, the decision is not a case of right or wrong, or of superiority or inferiority, but of what matters to you. On the one hand, since Ubuntu forked, Debian has continued much as it always has. As a distro, it is aimed at all levels of users. It favors free software ideals, de-emphasizing proprietary software and seeing upstream projects such as GNOME as the place where changes should occur, rather than the distribution. Yet, at the same time, its community values prompt Debian to give the maximum freedom of choice in their software. In order to maintain these imperatives, Debian is perfectly willing to have long periods between releases and outdated official releases, because it is a community effort, in which business values such as timeliness and being current are secondary concerns. On the other hand, Ubuntu has singled out the new user as its audience. While it has by no means abandoned free software ideals, it is more likely than Debian to countenance proprietary software, either for the convenience of users or to make the distribution more competitive as a product. Meeting its release schedule is at least as important as software quality in Ubuntu, and commercial success is important enough that Ubuntu developers are willing to make changes in the distribution rather than upstream in order to have them as soon as possible. Generally, Ubuntu is a friendlier place than Debian, but also a less democratic one. For many people, the ideal distribution would probably have aspects of both Debian and Ubuntu. But, since that ideal does not exist, in the end making a choice is a tradeoff. Users must decide which values or tendencies matter most to them, and choose from there, knowing that either choice may not be entirely satisfactory.
<urn:uuid:a34b05a9-1f5f-49ac-aa7f-28e5f7200f24>
CC-MAIN-2024-38
https://www.datamation.com/open-source/debian-vs-ubuntu-contrasting-philosophies/
2024-09-14T14:42:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00125.warc.gz
en
0.956097
2,639
2.5625
3
Part One: An Introduction to AI Model Security Recently, there has been an explosion in the use of LLMs (Large Language Models) and Generative AI (Artificial intelligence) by organisations looking to improve their online customer experience. While the use of these can help the customer experience, it is important to remember that like other systems or applications an organisation can implement, these systems can be abused by malicious actors or by APTs (Advanced Persistent Threat). Before diving into a look at the security around LLMs and Generative AI, it is important to define what are LLMs and Generative AI. Large Language Models are AI algorithms that can process a user’s input and return plausible responses by predicting sequences of words. They are typically trained on a large set of semi-public data, utilising machine learning to analyse how the components of language relate to each other. An LLM will present a chat like interface to accept user input, this input is normally referred to as a prompt. The input a user can provide to the LLM is partially controlled by input validation rules. They can have a wide range of use cases for web applications, such as: - Virtual assistants - Language translation - SEO improvement - Analysis of user-generated content, such as on-page comments Generative AI are AI algorithms that can generate text, images, videos, or other data using generative models, often in response to prompts provided by the user. Types of Attacks The surge in different varieties of AI have introduced new types of attacks and risks that can be utilised to compromise the integrity and reliability of AI models and systems: - Data Security Risks: AI Pipeline as an attack surface – As AI systems rely on data, the whole pipeline for the data in question is vulnerable to assaults. - Data Security Risks: Production data in the engineering process – The use of production data within AI engineering is always a risk. If it is not appropriately managed, sensitive data could leak into model training datasets. This could result in privacy violations, data breaches, or biased model outputs. - Attacks on AI models or adversarial machine learning: Adversarial machine learning attacks trick AI models by altering input data. An attacker could potentially subtly alter visuals or text to misclassify or forecast. This could damage the trustworthiness of an AI. - Data Poisoning Attack: This involves inserting harmful or misleading data into training datasets. This will corrupt the learning model in use by the AI and can result in biased or underperforming models. - Input Manipulation Attack: The input provided to an AI system can be manipulated, by changing values such as sensor readings, settings, or user inputs to alter an AI’s response or action. - Model Inversion Attacks: This involves the reverse-engineering of AI models to gain access to sensitive data. An attacker would use model outputs to infer sensitive training data. - Membership Inference Attacks: This is an attacker whereby adversaries attempt to determine whether a specific data point is part of the training dataset used by the model. - Exploratory Attacks: This involves the probing of the AI system to learn the underlying workings of the model. - Supply Chain Attacks: Attackers making use of vulnerabilities in third part libraries or cloud services, to insert malicious code or access AI resources. - Resource Exhaustion Attacks: Overload AI systems with requests or inputs, with the intention of degrading performance or creating downtime. - Fairness and Bias Risks: Results returned by the AI model can propagate bias and discrimination. An AI system can create unfair results or promote social prejudices, which can pose ethical, reputational, and legal issues to the developers. - Model Drift and Decay: The training data set used to create the AI can become less effective overtime, as data distributions, threats, and technology obsolescence change regularly. OWASP Top Ten These risks have been refined by the Open Web Application Security Project (OWASP), who have brought out a Top Ten list of risks associated with LLMs, in an equivalent manner to their Top Ten for web applications and mobile applications. The Top Ten for LLMs consists of: - Prompt Injection: This occurs when an attacker manipulates a LLM through crafted inputs, causing it to unknowingly execute the attacker’s intentions. - Insecure output Handling: This refers to the validation, sanitisation, and handling of output generated by LLMs, before it gets parsed through any other downstream components or systems. - Training Data Poisoning: This refers to the manipulation of pre-training data, or the data involved with fine-tuning and embedding process in the LLM, to introduce vulnerabilities, backdoors, or even biases that could compromise the model. - Model Denial of Service: Any situation where an attacker interacts with an LLM in a way that consumes a high number of resources, and results in a decline of the quality of service available. - Supply Chain Vulnerabilities: The supply chain in LLMs can be vulnerable, which impacts the integrity of training data, ML models, and deployment platforms. These can be chained to lead to biased outcomes, security breaches, or even system failure. - Sensitive Information Disclosure: All LLM application have the potential to reveal sensitive information, proprietary algorithms, or other confidential information through their output. - Insecure Plugin Design: LLM plugins are extensions that are called automatically by the model during user interactions if they have been enabled. They are driven by the model, but there is no application control over the execution. A potential attacker could construct a malicious request to the plugin, which could result in a range of undesired behaviours. - Excessive Agency: This vulnerability enables damaging actions to be performed in response to unexpected or ambiguous outputs from an LLM, regardless of what is causing them. - Overreliance: This can occur when an LLM produces erroneous information in an authoritative manner. When people or other systems trust this information without oversight or confirmation, this can result in a security breach, misinformation, miscommunication, legal issues, or reputational damage. - Model Theft: This refers to the unauthorised access and exfiltration of LLM models by either malicious actors, or by APTs. This particularly arises when the LLM is considered proprietary, and the model is compromised, physically stolen, copied, or weights and parameters are extracted to create a functional equivalent. Recent High-Profile Attacks There have been several high-profile attacks against AI models within recent years, with the below being some examples of this: - Deepfake Voice Phishing: In 2023 attackers used an AI deepfake voice technology to impersonate an executive and were able to trick employees into transferring large sums of money. - Tesla Autopilot Evasion: In 2023 researchers were able to confuse Tesla’s Autopilot system by placing stickers on the road, this resulted in the car taking incorrect actions. - Microsoft Bing Chat-GPT Toxic Outputs: In 2023 Microsoft integrated Chat-GPT4 into Bing search results. By crafting certain prompts, users were able to get the AI model to create offensive material. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft.html - Meta BlenderBot Offensive Remarks: In 2022, users were able to manipulate a chatbot released by Meta into main inappropriate statements, which reflected biases that were present in its training data. https://www.bbc.com/news/technology-62497674 - YouTube Content Moderation: In 2022, creators on YouTube were able to bypass the AI model used for content moderation by altering video metadata, or slightly altering the audio-visual elements of a video. - Facial Recognition Systems: In 2022 using image alteration and wearing certain patches, attackers were able to evade recognition or impersonate others against an AI model used for facial recognition. - GPT-3 Prompt Manipulation: In 2021 utilising prompt injection, users were able to exploit GPT-3 to generate offensive or incorrect information. https://www.theguardian.com/technology/2021/sep/09/openai-gpt-3-content-moderation - Siri and Google Assistant Data Leakage: In 2021 researchers were able to use voice manipulation and other techniques, to extract sensitive user data. How to Protect an AI Model As with all software products, there are several best practices that can be implemented to reduce the risk associated with the implementation of LLM or Generative AI systems for an organisation. This can include: - Data privacy and security: Ensure data used to train and test LLM systems are anonymised, encrypted, and stored securely. - Regular updates and patches: Keep LLM systems up to date with the latest security patches and updates. - Access control and authentication: Implement strong access control and authentication mechanisms to stop unauthorised access to the system. - Input validation: Validate all inputs to LLM or generative AI systems to prevent injection attacks, or other forms of malicious inputs - Model monitoring: Continuously monitor models for any unexpected behaviour or performance degradation. - Adversarial training: Use adversarial training techniques to improve the robustness of models against adversarial inputs. - Model explainability: Ensure that any LLM or generative AI model are transparent and explainable, this will make it easier to identify and mitigate potential vulnerabilities. - Secure development practices: Follow secure development practices, such as threat modelling, secure coding, and code reviews. - Incident response plan: Have an incident response plan in place for any security incidents or breaches. - Regular security assessments: Conduct regular security assessments and penetration testing to identify and mitigate vulnerabilities. - Training and awareness: Provide regular training and awareness programs to employees and users on LLM security best practices. - Compliance with regulations: Ensure that systems comply with all the relevant regulations and standards - Vendor management: Ensure that vendors providing these services or solutions have adequate security measures in place - Backup and recovery: Regular backup of data and models and have a disaster recovery plan in place. - Collaboration and knowledge sharing: Collaborating and sharing knowledge on security best practices and emergent threats with other organisations. The recent push for implementation of LLMs and Generative AI has introduced a new vector of attacks that APTs or other malicious actors can utilise to breach an organisation. However, the risk of this occurring can be minimised by following industry best practices to create a hardened and robust AI model. Part Two: An Introduction to Penetration Testing AI Models In the previous part, we discussed the security surrounding AI models in general. One core facet of securing an AI model, as with all systems, is to perform regular penetration tests. How to Detect Vulnerabilities in AI Models There are a few different approaches we can take to detect vulnerabilities in AI models: - Code Review: Reviewing the code base of the AI model and conducting static analysis to identify potential issues. - Dynamic Analysis: Running the AI model in a controlled environment and monitoring its behaviour for potential issues. - Fuzz Testing: Feeding the AI model a considerable number if malformed or unexpected inputs and monitoring the output for potential issues. How LLM APIs Work AI models or LLMs, are typically hosted by external third parties, and can be given access to specific functionality in a website through local APIs described for and AI model to use. For example, a customer support AI model, would be typically expected to have access to APIs that would manage a user’s account, orders, and stock. How an API would be integrated with the AI model varies depending on the structure of the API. Some APIs may require a call to a specific endpoint, to generate a valid response. In cases like this, the workflow would look like the following: - User enters a prompt in the AI model - The AI model determines that the prompt requires an external API, and generates an object in JSON that matches the API schema - User calls this function along with providing necessary parameters - The client (portal through which the user accesses the AI) received the response from the function, and processes it - The client then displays the returned response from the function, as a new message from the AI model - The AI model makes a call to the external API with the function’s response - The response from the external API is then summarised by the AI model for display back to the user. This way this kind of flow can be implemented, can result in security implications as the AI model is effectively calling an external API on behalf of the user, but without the consent from or realisation of the user. How to test an AI Model - Identify the inputs, whether they be user prompts, or the data utilised when training the model. - Map the AI model to determine what APIs or data it has access to. - Identify any external dependencies or libraries that the model makes use of. - Test these external assets for vulnerabilities. - Evaluate the AI model’s input validation mechanisms for common web application vulnerabilities such as cross-site scripting, SQL injection, etc. - Verify whether the AI model enforces proper authorisation checks. Mapping an AI Model The term “excessive agency” which was touched upon in the previous blog post, refers to times when the AI model has access APIs that can access sensitive information, and can be persuaded to use those APIs in an unsafe manner. This can allow attackers to push the AI model out of its intended role and utilise it to perform attacks using the API access it contains. The first step in mapping an AI model, is to determine what APIs and plugins the model has access to. One way to do this is by simply asking the AI model what APIs it can access; we can then ask for additional details on any API that looks interesting. If the AI model is not co-operative, then we can try rewording the prompt with a misleading context and trying again. For example, one method would be to try claiming to be the developer of the AI model, and as such needing a higher level of permission. Indirect Prompt Injection There are two ways prompt injection attacks can be performed against an AI model: - Directly via the chat UI - Indirectly via an external source Indirect prompt injection takes place where an external resource that will be parsed or consumed by the AI model, contains a hidden prompt to force the AI model to execute certain malicious actions. One such example could be a hidden prompt that would force the AI model to return a cross-site scripting payload designed to exploit the user. How an AI model is integrated into a website can have a significant impact on how easy it is to exploit. When it has been integrated properly, an AI model should understand to ignore instructions from within a web page or email. To bypass this, we can try including fake markup syntax in the prompt, or to include fake user responses within the injected prompt. Leaking Sensitive Training Data Using injected prompts, it can also be possible to get the AI model to leak sensitive data that was used to train the model. One way this can be done is to craft a query, that will prompt the AI model to reveal information about the training data. This could be done by asking the model to complete a phrase by prompting it with some key pieces of information. This could be text that precedes what you want to access, such as an error message. Or it could be a username that has been discovered previously through another method. Another way this could be done, is to use prompts starting with phrases like “Could you remind me of” or “Complete a paragraph starting with.” Like their testing guides for web and mobile applications, OWASP have released a security checklist for LLM AIs which can be used to determine what places in an AI model to direct our efforts against. It is available on their website, and at the following URL: LLM_AI_Security_and_Governance_Checklist-v1.pdf (owasp.org) Tools For Penetration Testing AI Models There are a few customised tools available on GitHub, and elsewhere, that can be used for testing AI model: - GitHub – leondz/garak: LLM vulnerability scanner - GitHub – mnns/LLMFuzzer: 🧠 LLMFuzzer – Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. 🚀💥 - GitHub – tenable/awesome-llm-cybersecurity-tools: A curated list of large language model tools for cybersecurity research. - GitHub – bethgelab/foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX - GitHub – cleverhans-lab/cleverhans: An adversarial example library for constructing attacks, building defenses, and benchmarking both - GitHub – Trusted-AI/adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) – Python Library for Machine Learning Security – Evasion, Poisoning, Extraction, Inference – Red and Blue Teams Part Three: An Introduction to Utilising AI Models for Penetration Testing The rise of AI models has led many people to investigating how these can be utilised to replace traditional penetration testing, or how they can be used augment human testers during the penetration testing process. One of the core phases, and one of the most important, of a penetration test is the information gathering phase. During this phase, penetration testers will typically use publicly accessible resources to gather as much information as possible about their target, in addition to performing tasks such as port scanning to discover open ports/running services on their target. At the end of this phase, the tester will have a list of domains, subdomains, applications, ports, etc… That they target during other phases to discover and exploit vulnerabilities. AI models can be utilised to not only automate gathering this information but can also be used to analyse the results allowing a tester to focus on the most important tasks that cannot be easily automated. As part of a penetration test, vulnerability scanners are utilised to scan a host/application to identify potential vulnerabilities. This is normally done in two different ways, static or dynamic analysis. Static analysis consists of analysing the source code of an application, to discover any potential vulnerabilities present within, this is typically done using analysis tools such as Checkmarx. Dynamic analysis consists of running vulnerability analysis tools such as Nessus, OpenVAS, Burp Suite, ZAP, etc… against the application or hosts, to discover any potential vulnerabilities. Both ways require a tester to manually review the output, to determine what issues being reported are real. AI models can be utilised by a tester to parse through these results, and cut out any of the non-applicable findings, allowing the tester to focus on those issues that are most likely to be exploitable. Another phase of a penetration test, that AI models can be utilised is when it comes to exploitation of identified vulnerabilities. A tester can use these models to help determine the best avenue to penetrate a target application or host. In addition to this, some models have been designed to perform the exploitation simultaneously with the tester. The most important phase of any penetration test, the report is the only physical result that a client will be presented with at the end of an engagement. AI models can be used to enhance the process of writing reports, making this phase easier for the tester, while also potentially combining the results with threat intelligence and knowledge gained from previous engagements. There are some AI models themselves, that can be useful for penetration testers during engagement. Some examples of these include: - Arcanum Cyber Security Bot: This model was developed by Jason Haddix, a well-known figure in cybersecurity, and is built using ChatGPT-4. It is geared towards use as a conversational AI, to simulate discussions on technical tops, aid testers in real-time during assessments, or for educational purposes. - PenTestGPT: This is an automated testing tool, that makes use of ChatGPT-4 to streamline the pen test process. It is designed to help automate routine tests that are performed during an engagement and has been benchmarked against CTF challenges such as those in HackTheBox. - HackerGPT: This model was designed to be an assistant or sidekick, for penetration testers, or those interested in cybersecurity, and was developed in collaboration with WhiteRabbitNeo. In addition to responding to penetration testing related inquiries, it was also designed as platform capable of executing processes along with the user. There are a few tools built into the model that can be utilised during tests, such as Subfinder, Katana, Naabu. The rise of AI models has introduced a new tool that penetration testers can utilise during engagements. It can be used by a tester during all phases of a penetration test, for help in ensuring that the client gets the best possible coverage during an engagement.
<urn:uuid:d3da021a-2cf2-47d5-836b-51e21d512572>
CC-MAIN-2024-38
https://www.edgescan.com/ai-security/
2024-09-14T13:45:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00125.warc.gz
en
0.924871
4,454
3.1875
3
A superbug, Acinetobacter Baumannii, is one of the most problematic species of bacteria that can infect wounds and cause pneumonia. It is important to note that the World Health Organization is cautioning healthcare industries that this bug is a critical threat to the public. Why is this bacterium dangerous? Despite the various antibiotics scientists have tried to use to kill the bacteria, the bug can often bypass the antibiotics. This causes concerns in hospitals where it can survive on medical tools, equipment, and surfaces. Doctors describe the bug as “public enemy number one,” and it is said to be resistant to nearly every antibiotic. This is where artificial intelligence comes in. As the superbug became more problematic, scientists began to work with artificial intelligence to discover a new antibiotic- and with a lot of training and studying, AI was able to find a new antibiotic that kills this bug. Researchers first had to train the AI tool to find a new antibiotic. It is stated that researchers tested “thousands of drugs where the precise chemical structure was known, and manually tested them on Acinetobacter Baumannii to see which could slow it down or kill it” (BBC). This data was then given to the AI tool for it to learn the chemical features of drugs that could possibly kill the bacteria. After about half an hour of narrowing down over 6,500 compounds, the AI tool produced a short list of possible chemicals. Researchers then tested out 240 possible chemicals that could kill the bug. With thorough testing, they came up with nine possibilities. One of them, called Abaucin, was potent as an antibiotic, had no effect on other bacterial bugs, and only worked for A. Baumannii specifically. The next step for this experiment is to perfect the antibiotic and perform trials on it. Why is this AI experiment good news? This experiment brings great promise in massively accelerating the discovery of new drugs. As viruses and bacteria evolve, they become more resistant to our antibiotics, and it is harder to create new medicines effectively and timely. Now that artificial intelligence has become more involved in the healthcare industry; AI will help to find solutions to combat bacteria and save lives. Infiniwiz will continue to update you on important news as Artificial Intelligence becomes more prominent in the health industry. Check out our additional article on how AI is assisting the healthcare industry.
<urn:uuid:a99464de-ff2b-4ddf-bde1-6bf29cb0b9ed>
CC-MAIN-2024-38
https://www.infiniwiz.com/revolutionizing-antibiotic-discovery-artificial-intelligences-breakthrough-in-combating-problematic-bacterium/
2024-09-15T20:14:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00025.warc.gz
en
0.964007
486
3.515625
4
Do you remember Second Life? Launched in 2003, Second Life was an early version of a metaverse – an immersive, shared virtual space. The idea for a metaverse has been around for some time, but now thanks to the backing of Facebook and Microsoft, there is a growing interest in building simulated worlds that closely model our reality. The definition of a metaverse is “a persistent, shared, 3D virtual space linked into a perceived virtual universe.” Mark Zuckerberg describes the metaverse as "the internet that you're inside of, rather than just looking at.” And thanks to advances in several key tech trends, the possibility of the metaverse is now closer than ever. Here’s What a Metaverse Could Look Like So what could a metaverse look like? Here are some components that come together to create compelling virtual universes: Avatars are 3D representations of players. In the metaverse, users create customized avatars that can take on any physical characteristics and personalities they want. Your avatar can interact with other players, as well as with the platform. Moving seamlessly between different components Today we already have many of the different components that could make up a metaverse (like virtual shopping, games, casinos, and concerts), but we don't have a platform that brings these pieces together and allows us to move around the entire "world" seamlessly, with the same avatar. Emergent user behavior Instead of being limited to a narrow range of functionality, the way you are in existing VR apps, players would be capable of doing virtually anything they want in the metaverse. For example, if two players are playing checkers together, they don’t necessarily have to finish the game in the metaverse. They can stand up and go for a hike with their virtual dog instead, then come back to the same place later in the day (or later in the week) to finish their game. Mature tech that gives a great user experience The right technology is a critical part of creating a great metaverse experience. Now we have high-quality virtual reality headsets, better computers, augmented reality, and faster networks that will continually improve and evolve to provide better metaverse experiences. We also have ways to create digital twins – virtual models that are the exact counterparts of physical things like cars, buildings or bridges – and haptic technology, which creates an experience of touch or motion for the user. Today, the digital world acts like a mall where every store uses its own currency, content, and ID cards. In the metaverse, everything needs to be interoperable. Digital assets, content, and data need to carry over from area to area and activity to activity. If you obtain a car in Porsche's virtual store in the metaverse, you should be able to drive that car throughout the virtual world. Examples of the Metaverse Ready Player One The virtual world in the New York Times bestselling book Ready Player One is probably the best example of what the metaverse might look like. In this science fiction novel set in 2045, people find an escape from a real world destroyed by climate change, war, and poverty by taking refuge in OASIS, a massive multiple online role-playing game (MMORPG) and virtual society with its own currency Curious about what Ready Player One’s metaverse might look like? Get an inside peek at director Steven Spielberg’s vision of the OASIS from the movie trailer: In the last few years, Fortnite’s CEO, Tim Sweeney, has made overt references to establishing Fortnite as more than just a game. In 2020, 12.3 million people attended a virtual concert by rapper Travis Scott within Fortnite, making it the game’s biggest event ever. Facebook is also positioning itself towards the metaverse with its expanded VR world, Horizon (currently in beta). Facebook describes Horizon as “a social experience where you can explore, play, and create with others in VR.” Get a preview here: Somnium Space is a VR world-builder platform that supports virtual real estate trading and ownership. It is built on blockchain architecture and has its own in-app currency called Somnium Cubes that can be used to purchase properties. The sale of VR property acts as the primary funding for the platform, and in-platform real estate can be used for things like social networking, e-commerce, gaming, and events. IMVU is a large avatar-based 3D social network where users can interact with friends, shop, hang out at gatherings, and earn real money by creating virtual products. The platform’s 7+ million users spend an average of 55 minutes a day on the site. The metaverse will require countless new protocols, technologies, and innovations if it’s going to work. There won’t be a significant “flipping of the switch” moment where the metaverse explodes into existence. Instead, the metaverse will likely emerge over time as different products and services. But the potential for individuals and businesses is enormous. If you would like to learn more about the rise of virtual and augmented reality, check out my articles or my book, Extended Reality in Practice: 100+ Amazing Ways Virtual, Augmented, and Mixed Reality Are Changing Business and Society.
<urn:uuid:7342d97d-d077-4885-b5e7-7d66b97f7b10>
CC-MAIN-2024-38
https://bernardmarr.com/the-metaverse-explained-with-examples/?paged1202=4&ref=trailyn.com
2024-09-16T23:59:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00825.warc.gz
en
0.936082
1,106
2.9375
3
HTTP TRACE flood is a layer 7 DDoS attack that targets web servers and applications. Layer 7 is the application layer of the OSI model. The HTTP protocol – is an Internet protocol which is the basis of browser-based Internet requests, and is commonly used to send form contents over the Internet or to load web pages. HTTP TRACE floods are designed to overwhelm web servers’ resources by continuously requesting single or multiple URL’s from many sources attacking machines, which simulate an HTTP client, such as web browsers (Though the attack analyzed here, does not use browser emulation). An HTTP TRACE Flood consists of TRACE requests. Unlike other HTTP floods that may include other request methods such as POST, PUT, GET, etc. When the server’s limits of concurrent connections are reached, the server can no longer respond to legitimate requests from other clients attempting to TRACE, causing a denial of service. HTTP TRACE flood attacks use standard URL requests, hence it may be quite challenging to differentiate from valid traffic. Traditional rate-based volumetric detection is ineffective in detecting HTTP TRACE flood attacks since traffic volume in HTTP TRACE floods is often under detection thresholds. However, HTTP TRACE flood uses the less common TRACE method. As such, it may be beneficial to review network traffic carefully when witnessing many such incoming requests. To send an HTTP TRACE request client establishes a TCP connection. Before sending an HTTP TRACE request a TCP connection between a client and a server is established, using 3-Way Handshake (SYN, SYN-ACK, ACK), seen in packets 114,151,152 in Image 1. The HTTP request will be in a PSH, ACK packet. Image 1 – Example of TCP connection An attacker (IP 10.0.0.2) sends HTTP/1.1 TRACE requests, while the target responds with HTTP/1.1 405 Not Allowed as seen in Image 2. While in this flow we see an HTTP/1.1 405 Not Allowed response, that might change depending on the web server settings. Image 2 – Example of HTTP packets exchange between an attacker and a target: The capture analyzed is around 3.8 seconds long while it contains an average of 81 PPS (packets per second), with an average traffic rate of 0.07 Mbps (considered low, the attack you are analyzing could be significantly higher). Image 3 – HTTP Flood stats Analysis of HTTP TRACE Flood in WireShark – Filters “http” filter – Will show all http related packets. “http.request.method == TRACE” – Will show HTTP TRACE requests. It will be important to review the user agent and other HTTP header structures as well as the timing of each request to understand the attack underway. Download example PCAP of HTTP TRACE Flood attack *Note: IP’s have been randomized to ensure privacy.
<urn:uuid:bdba623d-f405-45a0-940a-5a3453dc7605>
CC-MAIN-2024-38
https://kb.mazebolt.com/knowledgebase/http-trace-flood/
2024-09-16T23:50:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00825.warc.gz
en
0.904779
613
2.890625
3
Monday, 2 December 2013 Since the beginning of the Web, what you see in your browser window depended on what you clicked on on or entered. It’s all been end-user driven – although the choices of what they can click on are predefined by the Web site designer. This is what they call ‘pull’ technology because the information on the screen is pulled from a Web server. But there are times when it would be useful to have push technology – particularly to mobile phones. Push technology could keep you informed about the latest football scores and results. It could give you the latest news from international sport events. It could give you stock prices, and highlight rising or plummeting shares. It could tell you when a Web site you follow is updated. There are lots of potential reasons for organizations to want to use push technology. What if it were possible to have a full-duplex communication channel that operates through a single socket over the Web? That’s where HTML5 WebSockets comes in – it makes it possible to have real-time, full-duplex, bidirectional, event-driven Web applications. How it works is that during their initial handshake, the client and server upgrade from the standard HTTP protocol to the WebSockets protocol. Once established, WebSocket data frames (text and binary frames) can be sent in both directions between the client and the server in full-duplex mode. The data has a two-byte frame. Text frames start with a 0x00 byte, end with a 0xFF byte, and contain UTF-8 data in between. They also use a terminator. Binary frames use a length prefix. There is no polling involved. Enthusiasts say that WebSockets can make your applications run faster, be more efficient, and more scalable. WebSockets started life as a feature of the HTML5 spec known as TCPConnection. Currently, WebSockets are specified in two places: the Web Sockets API is maintained by HTML5 editor Ian Hickson, while the Web Socket protocol is edited by Ian Fette. So, if it’s so great, why isn’t it everywhere, why don’t we come across it being used all over the place? Well, it works with Firefox; it works with Chrome; it works with Safari and Opera; and it works with Internet Explorer 10 and above. And that’s pretty much the hold up. Many organizations are still running older version of Explorer – Version 8 in many cases. And figures suggest that just under a quarter of users use Internet Explorer (http://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Wikimedia_.28April_2009_to_April_2013.29), so why would any company cut itself off from a quarter of its potential clients? The very fact that there have been workarounds, shows that there is a business need for push technology. WebSockets provides a solution to the problem of how to implement push technology. The fly-in-ointment at the moment is that a popular browser (from a we
<urn:uuid:99ff4c69-992c-405d-a666-d53e580eabd8>
CC-MAIN-2024-38
https://itech-ed.com/blog373.htm
2024-09-19T10:21:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00625.warc.gz
en
0.936103
648
3.15625
3
Hacking and manipulating traffic sensors With the advent of the Internet of Things, we’re lucky to have researchers looking into these devices and pointing out the need for securing them better. One of these researchers is Kaspersky Lab’s Denis Legezo, who took it upon himself to map the traffic sensors in Moscow and see whether they could be tampered with. The answer to that question is yes, they can be manipulated, and consequently lead to poor traffic management and annoyance at best, and at worst situations that endanger both drivers and pedestrians. His research started with a reconnaissance phase in which he searched for information about the sensors in use – models, identifiers, communication protocols used, technical documentation, sales-oriented documents, software used for working with the devices, etc. The collected information helped him to write a scanner to search for these specific devices, and he deployed it while wardriving around the city. Among the things he discovered is that one of the sensor models installed in Moscow uses Bluetooth for data communication. “The openness shown by the manufacturers to installation engineers, their readiness to give them access to tools and documents, automatically means they are open to researchers,” he says. “After selecting any of the identified sensors, you can install the device configuration software supplied by the vendor on your laptop, drive to the location (the physical address saved in the database), and connect to the device.” He also discovered that anyone can install new firmware on the device via a wireless connection designated for servicing purposes. He found the manufacturer’s firmware online, but the code didn’t mean much to him as he didn’t know the architecture of the controllers in the device. He got that information from an engineer who used to work for the manufacturer, and would have likely found out from the same source what kind of encryption was used to protect the firmware, but decided against it as he didn’t have a device to test the modified firmware. But modifying the firmware is not necessary to make an impact – using the manufacturer’s software for configuring the devices and sending commands to them is much easier. “After establishing a connection to the traffic sensor using the manufacturer’s software, the commands are no longer a secret – they are visible using a sniffer,” he noted. “To sum up, a car driving slowly around the city, a laptop with a powerful Bluetooth transmitter and scanner software is capable of recording the locations of traffic sensors, collecting traffic information from them and, if desired, changing their configurations.” So what can be done to make this type of attack more difficult (if not impossible)? Legezo advises using non-standard names and identifiers, and adding proprietary authentication on top of the standard protection implemented in well-known protocols. He praised the manufacturer for how the firmware is protected, and for sharing publicly information about the devices. “Personally, I agree with the manufacturer and respect them for this, as I don’t think the ‘security through obscurity’ approach makes much sense these days; anyone determined enough will find out the command system and gain access to the engineering software,” he explained. “In my view, it makes more sense to combine openness, big bounty programs and a fast response to any identified vulnerabilities, if for the only reason that the number of researchers will always be bigger than the number of employees in any information security department.”
<urn:uuid:2af49864-0709-4a67-9ed6-196604d55212>
CC-MAIN-2024-38
https://www.helpnetsecurity.com/2016/04/20/hacking-manipulating-traffic-sensors/
2024-09-20T18:10:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00525.warc.gz
en
0.951237
718
2.828125
3
Capturing the potential of public transport stations by Sarah Palmer, Global Vice President of Market Development and Operations This blog is the first part of our three-part ‘Stations of the future’ series imagining the potential evolution of public transport stations - watch out for the next release soon. When you think about public transport, you likely think of the trains, subways or buses that take you from A to B. Before you even get on one of those vehicles, you probably passed through a station of some kind. Stations are a vital part of any public transport ecosystem. They’re the link between the public transport system and the rest of the world. But they can be so much more than just a place where people get on and off trains. To unlock the full potential of the station of the future, we need to start by understanding the station of today. What is it? Why does it matter – and to whom? What is a station? Functionally yes, a station is where passengers board and disembark from transit vehicles but we cannot see them through that purpose alone. In cities around the world, stations are critical pieces of infrastructure for transport operators, community and cultural hubs for municipalities as well as assets that lead to positive outcomes that can be very hard to measure in a quantifiable way. Stations need to be places that feel safe. This means people actively want to spend time there and don’t feel threatened. This might include features that ensure activity, like food and beverage amenities or retailers. They should be well lit, have access to staff, and if not, provide other ways to access support. It should be easy to buy tickets and plan a route so people can easily get to where they want to go. When you arrive at a new station, it should be easy for people to locate their connecting transportation or best exit to continue the journey on foot or by bike. Stations need to cater to the local everyday user, as well as to the new rider needing more information when trying to find their way. Beyond these basics, what stations look like – or what’s even possible within them – depends on a variety of factors, including: - Budget & passenger usage: Funding often dictates how well a station is maintained, what amenities it offers, staffing levels and the potential of improvement work. - Location: Downtown stations see more traffic than rural or suburban stations. However the mainline terminus will have more space than the likes of subway stops in business districts. - Climate: Stations in cities enduring frequent wet and cold weather need to provide shelter that offers warmth and cover, while warmer places may look for greater shade and airflow that offers respite from the heat. More than boarding platforms Around the world, transit operators are recognizing the need to update and improve stations, thinking about them more as community hubs, icons of the success of their transit programs, or in many cases, sources of non-fare revenue that can be returned into transit investments. The general attitude is moving away from the perception that they exist merely for boarding and disembarking. While business district and commuter stations are typically designed to move people through as quickly as possible, with minimal extras, neighborhood stations serve as welcome mats to their communities. These stations are often home to small local businesses that offer conveniences like coffee, dry cleaning and food services to streamline riders’ days. Some stations are destinations in and of themselves. New York Oculus, Toronto Union and Amsterdam Centraal all offer reasons to visit that have nothing to do with transportation, from spectacular architecture to shopping to sports. Historic stations like Bristol Temple Meads, Edinburgh Haymarket and Dublin Connolly are worth visiting just for the history and culture with their own limitations on what new development can take place. In San Francisco, some might see The Ferry Building on the Embarcardero in such light. These types of stations are focused on providing a great experience for riders and local residents. But many more stations are lagging behind. Too often, stations are treated as afterthoughts within the public transport system, with little consideration for their impact on the passenger experience. The stations of the future depend on collaboration To create stations that serve the needs of all stakeholders, all those stakeholders must be involved in deciding what they’ll look like and helping bring them into being. Transit operators can have conflicting desires. While they want to meet passenger needs at the lowest possible cost, this can be at odds with making the investments that will bring more frequent ridership, as well as new passengers to the system. Local governments – often with the operators – share this goal. Operators and governments can support the stations of the future by listening to the needs of riders and other citizens and recognizing public transport as a public good. There is still a lot of unrealized demand that must be nurtured in the system. Retailers want access to a captive market where their services are, and they can help by working with operators to integrate their offerings with the transport service to streamline the passenger experience. Real estate investors and asset managers want to get more value from their assets by driving more traffic to them. Building more housing, offices, and entertainment near stations can go a long way to achieving that goal. And of course, we can’t forget arguably the most important stakeholder group: the passengers themselves. They want safe, clean, timely trips. They want to be able to get around and pay their fares easily. They want a seamless door-to-door experience. If we design better for the passengers today, we can also bring in a group that is neglected: the car drivers. They want similar things, but have so far chosen not to use public transport. Operators need to hear from them as well to understand how they can move out of the car and onto the bus or train. The stations of today can be many things. By bringing together all stakeholders, really listening to what riders and potential riders need and want, and pushing the boundaries of the possible with great connectivity, the stations of tomorrow can be so much more. Discover what a station can be For a more in-depth look at what passengers expect from stations, watch for an upcoming blog from my colleague Josh MacKinnon. And check out our Stations of the future whitepaper released next week for more insights on how stations can change the face of public transport around the world.
<urn:uuid:fad801a0-b4c8-4292-9ae3-c7a36e542f9f>
CC-MAIN-2024-38
https://www.boldyn.com/us/blog/capturing-the-potential-of-public-transport-stations
2024-09-11T00:58:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00525.warc.gz
en
0.969477
1,317
2.765625
3
Imagine for a moment a scenario that Japan faced: a tsunami of epic size causing a disaster unimaginable in size and impact. Power would be lost, communications services completely wiped out. OK, how about an earthquake? Flood? Remember that any one of these disasters can cause a huge amount of people to be out of touch very quickly, due to the fact that we depend so heavily on our service providers to connect our homes and business to each other using the Internet. If you think that wireless networks would come to our aide, I remind you they are all connected back to the same infrastructure. So while a cell tower here or there may be up, its connection back to the network will most likely be down. Here is the interesting thing: all these wireless devices (from laptops to iPads) all have wireless 802.11 radios and a full protocol stack. Most have significant battery power as well. This first occurred to me when I received one of the “One Laptop per Child” laptops after donating one. That laptop had two antennas and came with a radar-like wireless capability that allows the laptop users to find each other and connect peer-to-peer. The assumption here is that there is no infrastructure in developing countries. This idea has been further developed by folks from Georgia Tech and it is called LifeNet: “LifeNet is a WiFi-based data communication solution designed for post-disaster scenarios. It is open-source software and designed to run on consumer devices such as laptops, smart-phones and wireless routers. LifeNet is an ad-hoc networking platform over which critical software applications including chat, voice messaging, MIS systems, etc. can be easily deployed. LifeNet can grow incrementally, is robust to node failures and enables Internet sharing. A novel multi-path ad-hoc routing protocol present at its core enables LifeNet to achieve these features.” LifeNet sets up an ad-hoc network between devices running their software. “In scenarios such as communication in disaster relief, wireless sensor networks, etc. reliability of connectivity is more important and bandwidth requirements are not too stringent. It is critical to establish a baseline wireless channel over which users can communicate and coordinate their on-field activities. The communication solution should be rapidly deployable, self-powered, robust to failures, locally maintainable and extremely easy to use.” This idea is brilliant. All that it requires is that we install their software in our devices and we can begin to set up this type of network connectivity. No infrastructure is needed. It is designed today for 802.11 a/b/g devices. You can read more about LifeNet here. We think this a great project and encourage our readers to contribute and try the code. If you try it, let us know your experiences.
<urn:uuid:7b7ae1f3-6e67-4b63-af61-b362eb4d4f5f>
CC-MAIN-2024-38
https://www.cellstream.com/2009/07/20/a-network-that-works-after-a-disaster/
2024-09-12T06:46:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00425.warc.gz
en
0.950178
573
2.5625
3
Iridium RUDICS (Router-Based Unrestricted Digital Internetworking Connectivity) was devised in the early 2000s as a means of allowing remote devices to connect to internet-connected servers using TCP/IP. The previous system, dial-up data, had a hefty overhead every time the service was activated, as a series of checks needed to take place before data could be transmitted. RUDICS improved upon this by connecting the call to a predefined IP address, dispensing with the checks, and making connection almost instantaneously. This had the advantage of requiring less power at the remote transmitter end, lowering latency, and generally being a more efficient means of accessing the Iridium system. RUDICS was – and still is – used for solutions that have multiple remote units in the field reporting back to an end point. Data buoys, water level stations, Unmanned Autonomous Vessels (UAVs), geotechnical and structural monitoring solutions, weather stations and many more applications have relied upon RUDICS for two-way communication for close to two decades. In 2019, Iridium launched its (at the time of writing) newest satellite capability, Iridium Certus. Leveraging the advanced technology on the latest generation of Iridium satellites, Iridium Certus is available in three speed classes: Certus 100, which is intended for IoT applications; Certus 200, which is good for basic internet and voice, and Certus 700, which delivers the fastest L-band internet broadband speeds currently available, up to 704 kbps. When we’re comparing RUDICS to Certus, we’re exclusively talking about Iridium Certus 100. They’re both aimed at the same use case of connecting remote devices to servers using TCP/IP (although Certus 100 has an alternative option here – more on that later). What are the key differences between RUDICS and Certus 100?
<urn:uuid:154c8ac9-a7d2-4b6a-b4dc-ac54905e5f56>
CC-MAIN-2024-38
https://www.groundcontrol.com/blog/rudics-vs-certus-100-time-to-switch/
2024-09-13T11:50:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651513.89/warc/CC-MAIN-20240913101949-20240913131949-00325.warc.gz
en
0.931699
399
2.609375
3
In our next post in this series, we look at another way to prevent forged email: DKIM. By using encryption techniques and digital signatures, the sender’s servers can transparently “sign “a message so that you can verify the source and ensure the message has not been modified. DKIM works together with SPF to stop fraud and improve email security. DKIM – Domain Keys Identified Mail: A Simple Explanation DKIM stands for “Domain Keys Identified Mail.” Actually, the acronym can be further expanded to “Domain-wide validation Mail Identity through use of cryptographic Keys.” To understand DKIM, we need to pause and look at what we mean by “cryptographic keys” and how they can be used. In security, there is a concept called symmetric encryption. Users pick a password and use a cipher to convert a regular (plaintext) message into an encrypted (ciphertext) message. Someone else who knows the password and cipher can reverse the process to get the regular message back in plaintext. Another prevalent but more complex concept is asymmetric encryption. Using this method, one can create a key pair or a combination of two keys. A message encrypted using Key One can only be decrypted with Key Two and vice versa. We typically call Key One our “private key” because we keep that safe and secret. Key Two is our “public key” and can be accessed by anyone. What are the benefits of asymmetric encryption? - Signatures: Anything encrypted using Key One can be decrypted by anyone. If they can decrypt it, that proves that you sent it. Only you have the private key, and thus, only you could have encrypted it in the first place. - Encryption: Anyone can use your public key to encrypt a message that you can only open (using your private key). How DKIM Works DKIM uses asymmetric encryption for signing email messages. This validates the sender’s identity and ensures the message contents are not altered in transit. Below is a simple overview of how it works. - Make a Key Pair: The owners of the sender’s servers create a cryptographic key pair. - Publish the Public Key: They publish the public key in the DNS records for their domain. - Sign Messages: Using the private key, the sender’s servers look at selected message headers (including the sender’s name and address, the subject, and the message ID) and the message body, and they use a cryptographic “hash” function to make a unique fingerprint of this information. Any change to that data would change the fingerprint. This fingerprint hash is also encrypted using the private key. Then, it is added to the message as a new header called “DKIM-Signature.” When you receive a message signed using DKIM, you know the purported sender, their IP address, and the additional DKIM-Signature. However, you cannot trust that the signature header is real or has not been tampered with. Fortunately, you do not have to trust it unquestioningly; DKIM allows you to verify it. Here is what happens on the recipient’s side: - Receipt: The recipient’s inbound email server receives the message. - Get the Signature: The encrypted DKIM fingerprint is detected and extracted from the message headers. - Get the Key: The recipient’s server looks in the sender’s DNS settings to get the public DKIM encryption key. - Decryption: The fingerprint is decrypted using the public key. - Fingerprint Check: The recipient then uses the message body and the same headers as the sender to make another fingerprint. If the fingerprints match, the message has not been altered since it was sent. As a result, you can verify the sender’s identity because: - We know that the message has not been modified since it was sent. The sender’s name and address (among other things) are the same as when it was sent. - We know the message was sent by a server authorized to send emails for the sender’s domain, as that server used the DKIM private key. So, through encryption, we have a way to verify that the message was sent by a server authorized to send email from their domain, and thus, we have a solid reason to believe the sender’s identity. Furthermore, this validation does not rely on server IP addresses alone and thus does not share the weaknesses of SPF. Setting up Domain Keys Identified Mail It is up to the domain owner to configure their DNS settings for DKIM to be checked by the recipients. You must have access to the domain and the ability to update your DNS records to implement DKIM. DKIM is set up by adding unique entries to the published DNS settings for the domain. You can use a tool like this DKIM Generator to create your DKIM cryptographic keys and tell you what you should enter into DNS. Your email provider may have their own tools to assist with this process. The private key must be installed on their mail servers, and DKIM must be enabled. We recommend asking your email provider for assistance. We will not spend time on the details of the configuration or setup here. Instead, we will look at the actual utility of DKIM, where it fails, and how attackers can get around it. The Benefits of DKIM Once DKIM has been set up and is used by your sending mail servers, it does a fantastic job with anti-fraud. It is more robust than SPF. It also helps ensure that messages have not been modified since they were sent. We can be sure who sent the message and what they said, while SPF does not provide any assurance that messages were unaltered. DKIM is highly recommended for every domain owner and every email filtering system. However, as we shall see next, it’s not time to throw a party celebrating the end of fraudulent emails. The Limitations of DKIM While Domain Keys Identified Mail is significantly stronger than SPF on its own, it continues to have limitations in the battle against email fraud. Identifying Email Sending Servers To properly use DKIM, all servers that send emails for your domain must have it set up and have keys for your domain. This can be challenging to implement if you use vendors or have partners send emails on your behalf. If DKIM cannot be used, emails should be sent using a different domain or a subdomain so that the primary domain can be fully DKIM-enabled and its DNS can tell everyone that DKIM signatures must be present on all messages. You want to be strict with DKIM usage in a way that is hard to do with SPF. If you cannot be strict, then DKIM allows you to be soft, which indicates that signatures may or may not be present. In such cases (like with SPF), the absence of a DKIM signature does not make a message invalid. However, if your DKIM setup is soft, it makes forgery simple. DKIM checks only the domain name and the server. If there are two different people in the same organization, Fred and Jane, either can send email legitimately from their @domain.com address using the servers they are authorized to use for domain.com email. However, if firstname.lastname@example.org uses his account to send a message forged from email@example.com, the DKIM will check out as okay, even if DKIM is strict. DKIM does not protect against inter-domain forgery at all. Note: using separate DKIM selectors and keys for each unique sender would resolve this problem (and the next one), but this is rarely done. Same Email Provider: Possible Shared Servers Forgery If firstname.lastname@example.org and email@example.com were using the same email service provider and servers, Jane’s goodguy.com domain would be set up with DKIM. The email provider’s servers are also set up to sign messages from @goodguy.com with appropriate DKIM signatures. What happens when firstname.lastname@example.org logs in to his account and sends a message pretending to be from Jane? The answer depends on the email provider: - The provider could prevent Fred from sending emails purporting to be from anyone except himself. This would solve the problem immediately, but it is very restrictive, so many providers do not do this. - The provider could associate DKIM keys to specific users or accounts (this is what LuxSci does). Fred’s messages would never be signed by valid the “goodguy.com” DKIM keys, no matter what. This also solves the problem. However, if the provider’s servers are not restrictive in one of these (or a similar way), Fred’s forged email messages will be DKIM-signed with the goodguy.com signature and appear DKIM-valid. Legitimate Message Modification DKIM is very sensitive to message modification. DKIM signature checks will return invalid if even one character has been changed. This is generally good, but email filtering systems may break DKIM. They often read and “re-write” messages in transit where the actual message content is unchanged, but specific (MIME) metadata is replaced with new data. This breaks DKIM, and it can happen more frequently than expected. Good spam filters check DKIM before modifying messages. Still, if you have multiple filtering systems scanning messages, the DKIM checks of later filters may be broken by the actions of earlier filters. DKIM does not protect against Spam This is not a limitation of DKIM, but it’s worth noting anyway. All DKIM does is help you identify if a message is forged or altered. Most spammers are savvy. They use legitimate domain names and create valid DKIM records to look legitimate. In truth, this does not make them look less spammy; it just says that the messages are not forged. Of course, if the spammer tries to get by your filters by forging the sender address to pretend that they are you or someone you know, then DKIM can help. How Attackers Subvert DKIM So, in the war of escalation where an attacker tries to get a forged email message into your inbox, what tricks do they use to get around sender identity validation by DKIM? The protections afforded by DKIM are more significant than those provided by SPF. From an attacker’s perspective, it all comes down to what sender’s email address (and domain) they are forging. Can they pick an address to construct an email that will evade DKIM? - If DKIM is not set up, it’s easy to forge the email. - If DKIM is set up as weak, the attacker can send a forged message with a missing DKIM signature, which will look legitimate. - Suppose the attacker can send a message from one of the servers authorized by the DKIM for the domain. If that server does not care who initiated the message but will sign any messages going through it with the proper DKIM keys, then the message will look legitimate. If the attacker signs up with the same email provider used by the forged domain and that provider’s servers do not restrict DKIM key usage, they can send an email from the same servers and have their messages properly signed. This makes the attacker’s email look valid even if the forged domain’s DKIM records are strict. An attacker’s options are much more limited with DKIM. They can only send fraudulent messages from domains with no or weak DKIM support, send through non-restrictive shared email servers, steal the private key used by the sender’s DKIM, or compromise the email account of someone using the same email domain as the address that is to be faked. The situation is better, but not perfect. Many organizations leave their DKIM configuration weak. They would rather take a chance on forged emails than have legitimate messages be missed due to accidental message modification or because they were sent from a server without DKIM. We will see in our next post how one can use DMARC to combine the best features of DKIM and SPF to enhance forged email detection further and where the gaps that attackers use remain. Read next: Preventing Email Forgery Part 3: DMARC
<urn:uuid:c9856ed1-4bcd-4113-a1af-5ba37c9ccd3a>
CC-MAIN-2024-38
https://luxsci.com/blog/tag/email-forgery
2024-09-14T16:58:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00225.warc.gz
en
0.924258
2,602
3.453125
3
What's a recovery point objective? Along with recovery time objective (RTO), recovery point objective (RPO) is one of the primary tools for establishing a disaster recovery plan or a data protection plan. It’s also a tool for helping an enterprise select the data backup plan that meets its needs. RPO and RTO establish the foundation for determining and discerning strong inclusion strategies for the business continuity plan. These strategies should allow for the speedy resumption of business processes within a time frame equal to or close to the RPO and RTO.1 Although they’re both tied into determining disaster recovery, an RPO must remain independent of an RTO or the minimum estimated time needed to restore regular operations following a system or network failure. Like RTO, RPO helps determine disaster recovery policies and procedures. The RPO helps administrators choose the best backup and recovery technologies to use, depending on the overall design strategy that data loss shouldn’t delay the RTO. How’s an RPO different from an RTO? Both RPOs and RTOs are concepts used for supporting business continuity. They also work as business metrics and can help you calculate how often your business should perform data backups. RPOs and RTOs are instrumental to a business’ disaster recovery plan or data protection plan. They’re both linked to business impact analysis, a systematic process that can help separate critical and urgent organization-specific functions and activities from non-critical and non-urgent ones. Functions can be considered critical if specified by law. RPO and RTO are the two values that are assigned for each function. Whereas RPO determines how much data will or will not be recovered after a disruption, RTO determines how much time it takes for a system to transition from disruption to regular operations functioning normally. How’s an RPO calculated? RPOs go backwards in time, stretching back from the instance of disruption to the last backup point where the data is usable. They can also measure how often you should regularly back up your data. In terms of calculation, RPOs are usually calculated in minutes or hours but, depending on the urgency or lack thereof, they can also be calculated in seconds or days. An RPO determines how far back into the data’s history you need to go for backup storage to resume normal operations after a computer, system, or network experiences disruption from a hardware, program, or communications failure.2 An RPO can also be considered the maximum acceptable amount of data loss that’s measured in time. In addition, it can describe how much time passes during a disruption before the amount of data lost during the period of disruption extends beyond the business continuity plan’s maximum allowable “tolerance” or threshold. For example, if a computer system has an RPO of 30 minutes, that means that the maximum window for data loss following a disruption is 30 minutes. Therefore, a backup of the system must be performed every 30 minutes. When should you schedule data backups for an RPO? RPOs are often easier to perform than RTOs. The reason is because data use provides few variables and is generally consistent. However, the opposite can also be true since calculating restore times is usually based on your whole operation and not just your data. When the disruption happens is also a factor in the restore time. When constructing your data backup schedule, consider what hours your business is the busiest and least busy. For example, if you have a disruption at 3 AM Eastern Standard Time and IT resolves the disruption by 5 AM, did you lose two hours’ worth of data? If this timeframe is a low-traffic period for your business, then probably not. For another example, let’s say your business backs up its data every 10 hours. There’s a disruption at noon and it’s quickly resolved with your business back to normal by 12:30 PM. Because there was only a 30-minute window of data loss from 12 PM until 12:30 PM, you don’t need to restore all the data from the previous 10 hours. You only need to restore data from the lost 30 minutes. Disaster recovery and disaster recovery plans Not to be confused with emergency management and disaster response, disaster recovery deals with IT infrastructure and systems in support of important business processes. Disaster recovery is a subset of business continuity, which works to maintain all vital aspects of a business despite any disruption. Disaster recovery includes the policies, tools, and procedures that compose the eponymous restoration or continuation of critical technology infrastructure and systems after a natural or manmade disaster has occurred. A disaster recovery plan (DRP) is the process or set of procedures that help restore and protect an organization’s IT infrastructure and systems following a disaster.3 This process can be expanded to take place before and during a disaster.4 - Jaspreet Singh. Understanding RPO and RTO, Druva, September 2019. - Recovery Point Objective (RPO), Techopedia, 11 November 2011. - Bill Abram. 5 Tips to Build an Effective Disaster Recovery Plan, Small Business Computing, 14 June 2012. - Geoffrey H. Wold. Disaster Recovery Planning Process, Disaster Recovery Journal, Adapted from Volume 5 #1. Disaster Recovery World. Archived from the original on 15 August 2012.
<urn:uuid:130863c3-24de-4db3-95b9-54fdf276ce16>
CC-MAIN-2024-38
https://www.kyndryl.com/au/en/learn/rpo
2024-09-14T17:07:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00225.warc.gz
en
0.933184
1,116
2.65625
3
Tech Note 0028 How MTP/IP ensures the integrity of the data it transports MTP/IP guarantees that upon successful completion of a data transport transaction, the data it delivered at the destination will be byte-for-byte identical to the data it was given at the source. Data in transit is protected by multiple checksums, a patented transport state management system, and optional encryption with tamper protection. Each data object, whether it is a file, memory buffer, or dynamic stream, is broken into network datagrams. Multiple checksums are performed on each datagram, and the datagrams are tracked to ensure that they are delivered and reassembled correctly. In most environments, each datagram is subject to at least three independent checksums: MTP/IP performs its own checksum, the operating system performs a checksum at the UDP/IP layer, and the network hardware (such as Ethernet or WiFi) performs another checksum. This triple redundancy ensures that data corruption will be detected and corrected even in extremely high-volume and high-error rate environments. Partial & Interrupted Transactions In addition to guaranteeing the integrity of successfully completed transactions, MTP/IP also provides integrity protection for partially received data objects even as they are arriving. As a transaction progresses, MTP/IP updates the parent application with a sequence number indicating that all data up to that point has been verified. This allows dynamic data processing to proceed immediately and allows interrupted transfers to be resumed from the last verified point. When encryption is enabled, the MTP/IP checksum is performed on the clear-text data. If a hostile network node were to modify an encrypted MTP/IP datagram, the hostile node could recalculate the hardware and UDP/IP checksums, but it would not be able to recalculate the MTP/IP checksum. For more information about the security features of MTP/IP software, see Tech Note 0016. Tech Note History Jan | 26 | 2012 | First Post |
<urn:uuid:ccb7a68e-bbc8-494b-b117-4a04026acfe1>
CC-MAIN-2024-38
https://www.dataexpedition.com/support/notes/tn0028.html
2024-09-15T23:01:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00125.warc.gz
en
0.907172
417
2.78125
3
Fibre Channel over IP is a proposed IETF standard for linking Fibre Channel fabrics over TCP/IP network links. The protocol can be used as an alternative to connecting storage-area networks via dense wavelength division multiplexing and dark fibre. Tapping a more affordable and available IP service can dramatically reduce the monthly cost of wide-area links and extend the maximum distance between Fibre Channel sites. FCIP transports Fibre Channel data by creating a tunnel between two endpoints in an IP network. Frames are encapsulated into TCP/IP at the sending end. At the receiving end, the IP wrapper is removed and native Fibre Channel frames are delivered to the destination fabric. This technique is commonly referred to as tunneling, and has historically been used with non-IP protocols such as AppleTalk and SNA. The technology is implemented using FCIP gateways, which typically attach to each local SAN through an expansion-port connection to a Fibre Channel switch. All storage traffic destined for the remote site goes through the common tunnel. The Fibre Channel switch at the receiving end is responsible for directing each frame to its appropriate Fibre Channel end device. Multiple storage conversations can concurrently travel through the FCIP tunnel, although there is no differentiation between conversations in the tunnel. From the standpoint of the IP network, the FCIP tunnel is opaque. An IP network management tool could view the gateways on either side of the tunnel, but can’t zero in on the individual Fibre Channel transactions moving within the tunnel. The tools would thus view two FCIP gateways on either side of the tunnel, but the traffic between them would appear to be between a single source and destination, not between multiple storage hosts and targets. Connecting Fibre Channel switches creates a single Fibre Channel fabric analogous to bridged LANs or other Layer 2 networks. This means that connecting two remote sites with FCIP gateways creates one Fibre Channel fabric that can extend over miles. This preserves Fibre Channel fabric behaviour between remote locations but could leave the bridged fabric vulnerable to fabric reconfigurations or excessive fabric-based broadcasts. FCIP gateways are commonly sold in pairs for each tunneled link. Connecting Site A to Site B, for example, would require one pair, while connecting Site A to Site C would require an additional pair of gateways. FCIP is more suitable for point-to-point connections than multi-point connections. Because FCIP simply wraps and unwraps Fibre Channel frames in IP, there are few ways for vendors to distinguish their gateways. Some manufacturers therefore are reducing FCIP functionality to a blade that inserts into a Fibre Channel switch. Another proposed IETF standard, Internet Fibre Channel Protocol (iFCP), uses the same Fibre Channel frame encapsulation scheme as FCIP. However, iFCP is a more complex protocol that was designed to overcome the potential vulnerabilities of stretched fabrics, enable multi-point deployments and provide native IP addressing to individual Fibre Channel transactions. For management, FCIP uses Service Locator Protocol (SLP) to identify FCIP gateways in the IP network. With relatively few FCIP gateways, SLP offers a suitable look-up table mechanism. ISCSI and iFCP can use SLP, but for more complex environments, the Internet Storage Name Server (iSNS) is preferred. FCIP gateways do not support iSNS. For security, IP Security (IPSec) provides authentication, encryption and data integrity. FCIP also uses IPSec’s automatic key-management protocol, Internet Key Management, for handling the creation and management of security keys. The FCIP standard is expected to be finalized within a year. Clark is director of technical marketing and Nishan Systems Inc. and author of IP SANs and Designing Storage Area Networks. He can be reached email@example.com.
<urn:uuid:879e5981-434e-418e-abb1-8cebdc9b1d92>
CC-MAIN-2024-38
https://www.itworldcanada.com/article/fcip-bridges-fibre-channel-sans/20687
2024-09-15T22:59:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00125.warc.gz
en
0.918361
801
2.78125
3
U.S. manufacturing industry is at an intersection as technology and globalization have changed the entire scenario. U.S. manufacturers have had to either become accustomed to market forces or close down their business. One can say that they have been subjected to this environment as they have been pitted against numerous emerging market competitors backed by affirmative trade policies and low-wage labor drew off their market share. With these challenges in place, there is another new “global” trend affecting the manufacturing arena known as climate change. Although it may sound like a severe problem, it can be changed and looked into by implementing a few steps. U.S. manufacturers are well placed to motivate the rest of the world in the fight to lower carbon emissions and, at the very time, show why it will be a very successful endeavor. Using technology and innovation, U.S. manufacturers can decrease their energy consumption, reduce their costs, optimize their supply chain, modify their brand, and build the worldwide standards for environmental sustainability. The “carbon footprint” is generally associated inherently in various ways to the different people and there are numerous contributing circumstances to a manufacturer’s footprint. Here are just two of them. First, there is the carbon impression of global supply chain networks. Carbon footprinting can display onerously complex when one considers the upstream and downstream benefactions of associates and suppliers. Second, there are the carbon discharges produced from plants, which includes all on-premises generation assets and services. You can read here on how to design the supply chain network to decrease the carbon footprints being emitted. There are various ways to promote green supply chains. These comprise optimizing the dynamic supply chain and the warehouse and transit of product across it, reducing energy usage in the manufacturing transformation process, and changing product design and packaging to decrease waste and enhance the recycled content of individual products. Manufacturers can radically reduce shipping, inventory, and product costs by utilizing supply chain management (SCM) principles to design the most effective logistics network possible — and be greener for the industry. Indeed, most businesses have ample possibilities to reduce costs and increase customer service through SCM. Numerous manufacturers depend on just-in-time methods in their supply chain that concentrate on expenses and delivery. Supply chain design can take this to the next level by helping factor in the value and profits of variables like alternative shipping modes, fuel expenses and the carbon impact of these arrangements. These choices do not have to decrease customer service or delivery times. A locus on supply chain configuration and modeling can enhance these performance signs while reducing expenses and environmental influence. Supply chain configuration is a rapidly evolving area, and there are experts and technology solutions that can assist manufacturers to produce more efficient supply chain standards that can respond to dynamic requirements and market requirements. The method to green the supply chain has faced its fair share of complainers who cite it is not desirable to design a supply chain that decreases a company’s carbon footprint without expanding its costs and suffocating growth. Recent comparisons by industry analysts show businesses that execute best practices to green their supply chain do diminish overall supply chain expenses. Evolving your supply chain to meet 21st-century difficulties does not mean substituting an entire fleet with the next generation of transportations. It is about building efficiencies to eliminate waste from the system. By creating carbon emissions within basic network design, businesses can realize almost paramount benefits. This technology recognizes associations to track and model costs and discharges that can affect the supply chain so companies can speedily discover the most efficient number of locations, dimensions, and capacities of departments to meet customer demand while also adhering to a green measure.
<urn:uuid:2efe8f8d-8017-4411-8305-cf791f14dc48>
CC-MAIN-2024-38
https://www.mytechmag.com/greener-supply-chain/
2024-09-15T22:55:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00125.warc.gz
en
0.954204
742
3.109375
3
The National Nuclear Security Administration is upping its computing power with the purchase of new supercomputer it says will help drive the development of artificial intelligence to maintain the U.S. nuclear stockpile. NNSA announced the $600 million deal Tuesday morning with Cray, a supercomputing company that regularly contracts with the government. The computer, dubbed El Capitan, is expected to arrive at the Lawrence Livermore National Laboratory in California in late 2022 and be up and running by 2023. Cray said the supercomputer will have a peak performance of 1.5 exaFLOPS, or 1.5 quintillion calculations per second—that’s 1.5 trailed by 17 zeros—and run some applications 50 times faster than Lawrence Livermore’s current system. All that added computing power is critical to ensuring the NNSA’s mission of maintaining the nation’s nuclear stockpile, NNSA Administrator Lisa Gordon-Hagerty said. “El Capitan will allow us to be more responsive, innovative and forward-thinking when it comes to maintaining a nuclear deterrent that is second to none in a rapidly-evolving threat environment,” Gordon-Hagerty said. The enhanced capabilities will be especially significant in NNSA’s development and use of artificial intelligence and machine learning, said Bill Goldstein, Lawrence Livermore’s lab director. The lab will use the supercomputer to expand simulations using advanced statistical sampling in physical and chemical “uncertainties,” Goldstein said. The math behind the simulations is ideally suited to be done by machine learning, he added. The simulations run with the assistance of machine learning are dubbed “cognitive simulations,” Goldstein said. With the nuclear stockpile aging, and without real-world testing, NNSA will use cognitive simulations powered by the El Capitan system to better maintain the warheads and radioactive materials. “Machine learning can be a huge boost to our abilities,” Goldstein said. The threat is growing in complexity and scale. China and Russia continue to field new nuclear weapons and capabilities, Gordon-Hagerty said. Securing nuclear deterrents is at the heart of the U.S.’s national security apparatus as it pivots from focusing on counterterrorism to defending against so-called great power conflict. “We really think that this is the next generation,” Goldstein said. Last week Cray announced a deal with the Air Force to provide supercomputing power to their weather predictions. That system will be housed in the Department of Energy’s Oak Ridge lab with capabilities delivered as-a-service to the Air Force. The El Capitan system will be installed directly in the Lawrence Livermore lab and be air-gapped after an initial general research test period.
<urn:uuid:458b727e-d0ef-43f4-a45d-1d0bb268aceb>
CC-MAIN-2024-38
https://develop.fedscoop.com/nnsa-supercomputer-ai/
2024-09-19T14:04:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00725.warc.gz
en
0.90999
574
2.53125
3
Cloud Computing Glossary App virtualization encapsulates an app from the underlying operating system on which it is executed. Apps and files are not installed on the local device, although they behave as if they are. Cloud automation provides tools and processes that reduce IT overhead by automating orchestration and management of cloud resources. With cloud automation, administrators can eliminate much of the manual labor of deploying and managing infrastructure configurations and desktop environments through a self-service portal. Cloud computing enables access to shared pools of infrastructure resources and other enhanced services over an Internet connection. Through orchestration and automation, cloud computing allows businesses to achieve economies of scale without having to invest in on-premises infrastructure and management. Cloud desktops (aka hosted desktops) are virtualized desktop environments hosted in a cloud data center. Users can access their desktops over an Internet connection through a connection broker. Windows-based cloud desktops can be accessed through Remote Desktop Services. Cloud management is the practice of deploying, maintaining, and securing data, apps and infrastructure in the cloud. This is done through software tools that enable companies to control their cloud environments remotely through a self-service, web-based management portal. Desktop-as-a-Service (DaaS) is a component of cloud computing where client desktops and apps are virtualized and hosted in a cloud data center. Traditional DaaS vendors offer DaaS as a managed service. DaaS 2.0 is powered by automated migration, orchestration and management software-only platforms like itopia Cloud Automation Stack (CAS). Disaster recovery is a set of policies, tools and procedures to enable the recovery of critical IT resources and data in case of an unforeseen natural or human-induced disruption. Disaster recovery is a subset of business continuity. Google Cloud is a suite of cloud services that allow businesses to use Google’s infrastructure for storage, servers, network and more. It also equips app developers to leverage tools like Kubernetes (container service), BigQuery for data analytics, BigTable for database services and artificial intelligence. Google Cloud-based desktops run on Google’s infrastructure and data centers and offer a more affordable alternative to on-premises data centers. They can be directly managed with itopia CAS, an end-to-end DaaS automation, migration and orchestration software platform. Remote Desktop Services (RDS) is Microsoft’s remote access software suite, which allows administrators to take control of and manage remote computers or virtual machines over a network connection. RDS makes Windows software and an entire desktop running RDS accessible to a remote client machine that supports Remote Desktop Protocol (RDP). VM uptime scheduling is a feature of itopia’s Cloud Automation Stack (CAS) software platform that enables IT administrators to schedule when servers spin up or down according to business demand. For example, if your employees aren’t working on the weekends, you can schedule server resources to turn off each weekend. Virtual desktops are desktops environments that have been virtualized and abstracted from the physical devices on which they are executed. Virtual desktops can be made available to users regardless of their location and on any supported device. The desktops are hosted in an on-premises, cloud or hybrid environment.
<urn:uuid:157a7f03-3b9d-4a72-9c6f-33e929d8d0b3>
CC-MAIN-2024-38
https://itopia.com/glossary/
2024-09-20T19:35:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00625.warc.gz
en
0.903274
678
2.625
3
Building Secure Wireless Networks with 802.11 Authors: Jahanzeb Khan and Anis Khwaja As you can see, we have yet another wireless review on Help Net Security. As more and more people are migrating their wired networks into wire-free environment, wireless security is becoming one of the most talked about IT topics. What is this book all about? Read on. About the authors Jahanzeb Khan is Principal Engineer with RSA Security, Inc., where he is responsible for the research and development of encryption, Public Key Infrastructure, and wireless LAN Security standards. He is active in the 802.11b community and is a member of IEEE International. Anis Khwaja works in the IT department of a leading financial services firm. Khwaja has more than fifteen years of experience in networking and is currently involved in deployment of 802.11b (WiFi) networks Inside the book On September 11, 1940, George Steblitz used a Teletype machine at Dartmouth College in New Hampshire to transmit a problem to his Complex Number Calculator in New York and received the results of the calculation on his Teletype terminal. This transfer of data is considered the first example of a computer network. Fast forward 60 years where computers networks are connected over the air. The book doesn’t start with an overview of WLAN basics, but with a historical view on computer networks from day one, to the modern Ethernet networks and Internet. As general network knowledge is needed to understand and setup a wireless network, early chapters present the information on different network types, standards and protocols. What follows next is a similar chapter which is focused on wireless networks. Here the readers stumble across all the WiFi and WLAN basics, including standards and operating modes. Besides all the good things wireless networks provide, there is a number of technological and security issues that should be closely considered. Some of the possible pros/cons can be found in the following chapter titled “Is Wireless LAN right for you?” Following the idea on explaining wireless networks with having the “usual” wired networks on mind, the authors divide the part on secure wireless LANs to two chapters, each dealing with security issues and concepts of one of these network types. The security aspects of wired networks receive 10 more pages than their wireless counterparts, which is normal, as wired networks are much older and are used much more than wireless ones. After talking a look at both basics and security issues of wireless LANs, authors now dedicate several chapters on building and securing WLANs. Here, the first time implementers will learn how to deploy wireless networks through seven logical steps. The steps are varying from understanding the wireless needs to the product consideration and ROI (return of investment). All of the steps are practically shown through a non-existing Bonanza Corporation wireless LAN installation. From the advanced user’s point of view, authors provide a chapter on 802.1X authentication mechanism, which is presented through a semi-visual guide on setting up 802.1X with Microsoft Windows XP and Cisco 350 series AP. Besides this setup, there is a nice scope on using a Virtual Private Network to secure wireless communications. The ast part of the book takes care of methods related to troubleshooting and maintaining secure operations in your WLAN. In a brief overview manner, authors give the future administrators tips on the things that can go wrong. A nice addition to this chapter is a sample security policy that can be easily modified for usage in different environments. The book contains two interesting appendixes. The first contains several actual mini case studies, where the readers can take a look at several different wireless deployment scenarios. All of these wire free setups (home, small corporation, campus wide and wireless ISP) are presented through the same template detailing on each network’s background, problem, solution and the final result. These examples aren’t so in-depth, but provide a good read on several possible WLAN installations. If you are interested in wireless LAN technologies, you probably realized that a number of book publications and articles, reference some of the Orinoco hardware. The most talked about wireless adapters, especially when taking a look at Wardriving and similar activities, are surely Orinoco Gold and Silver PC cards. The authors carry on the Orinoco fame, with an appendix detailing Orinoco PC card on a number of operating systems, including Windows 98/ME/2000/NT, MacOS and Linux. As I’m an owner of Orinoco Residential Gateway (RG 1000) access point, it was nice to see that the authors use this AP for a sample LAN setup. For those who don’t know, RG1000 is like a clam-shell – when you open it you will find a pearl in the way of the ever useful Orinoco Silver PC (pcmcia) card. Information security experts Khan and Kwaja combined their WiFi knowledge and created this step-by-step guide covering all the major aspects of 802.11 networks. They cover the whole circle, from initial network and product considerations, over installation and security, to troubleshooting the existing network. “Bulding Secure Wireless Networks with 802.11” is easy to read, informative and deals with a number of WiFi security facts. The difference between this and other WiFi security publications is that the book is intended to be an implementers guide into building a secure wireless Local Area Network. I should note that the book is strictly Windows related, so, besides the Orinoco guide, don’t expect any implementation methods for other operating systems. From the implementer’s perspective, the book should be suitable to novice and inter mediate readers, because the topics surrounding actual implementations and advanced techniques are covered in a less technical way. Don’t get me wrong, there are technicalities inside the book, but not so deep enough to be of interest to advanced users familiar with WLANs. Yet another fact that proves that the book is more novice user suited is that software installations are started with “Insert CD”, click “setup.exe” etc.
<urn:uuid:6ca731d7-cafc-4871-b927-b167bbd188d4>
CC-MAIN-2024-38
https://www.helpnetsecurity.com/2003/08/28/building-secure-wireless-networks-with-80211/
2024-09-20T20:29:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00625.warc.gz
en
0.926793
1,261
2.546875
3
The pandemic has changed our lifestyles in many ways, and the increase in digitalization is one of the major developments. People are digitally connected more than ever. Besides, the work from home culture and online learning certainly boosted the global digital connectivity amid the pandemic. Looking at the spiking trend of online networking and accessibility, security solutions provider McAfee conducted the “2021 Consumer Security Mindset Survey,” which revealed that consumers in India are more cautious about the security of their connected devices. It also stated that 88% of Indian consumers feel they are more digitally connected and 86% have implemented more protection for their digital devices. - Nearly 57% of Indians agree that digital hygiene or the lack of it can put them and their families at risk. - 2 out of 3 Indians (68%) check if the network that they are joining is secure before connecting. - Increase in COVID-19-themed attacks targeting people working remotely. - 53% feel more vulnerable to risks when someone has visited their home and has connected to their internet. - Perceived to be most vulnerable to cyberthreats are Wi-Fi networks (57%), someone’s home computer (46%), smart home assistants (26%), smart TV (28%), and gaming systems (29%). - 62% of the respondents believe that digital wellness and protection should have a separate curriculum and be taught throughout primary school. Online learners mostly concerned about exposure to scams (53%), sharing personal information (53%), illegal content (55%), cyber-bullying (52%), and misinformation (49%). Increase in Cyber Hygiene It was found that Indians are taking online security seriously — given the rise in COVID-19-themed attacks, which increased by 240% in Q3 and 114% in Q4 last year, with an average of 648 new threats per minute. Nearly 58% of Indians stated that they have a good understanding of the data they download/store on their mobile devices. Over 72% use a mobile security software solution to protect their mobile data, of which, 46% use preinstalled security software. And 58% of Indians believe that the information stored on their mobile phone is secure from cyber risks. “Remote working, online learning, and a surge in the usage of connected devices due to more time being spent indoors have resulted in increased digital dependence among Indians. While our study indicates that more Indians are digitally connected owing to the pandemic, they are also now actively taking steps to keep themselves protected from online threats. The spike in our digital footprints during this time, makes it critical for everyone to understand the importance of online security and take measures towards protecting themselves,” said Venkat Krishnapur, Vice-President of Engineering and Managing Director, McAfee India. How to Enhance Your Online Security? With rising attacks on connected devices, consumers must understand the seriousness of potential risks and must follow the required security measures to protect their personal information. Here are some security tips to enhance cyber hygiene: - Prioritize digital health by enhancing security standards across devices and home networks can also go a long way in maintaining digital wellness. - Use multi-factor authentication to double-check digital authenticity and add a layer of security to protect personal data and information. - Be cautious when connecting to any public Wi-Fi, or even your friend’s Wi-Fi connection, and make sure the network is secure and attached to a trusted source. Ensure that you don’t conduct any financial transactions or share any personal details while on an untrustworthy Wi-Fi network. - Separate your devices for business and personal use. We may have brought work home with us, but it’s important to set boundaries between personal and work life. - A comprehensive security software that can detect and block a variety of threats is always a good investment. Also, check if it includes a firewall, as this will ensure that all the computers and devices on your home network are well protected. While we cannot expect 100% online data security, maintaining robust cyber hygiene will certainly help deter cyberthreats in the long run.
<urn:uuid:65474090-0dc3-4f7f-8269-00bcbd43d832>
CC-MAIN-2024-38
https://cisomag.com/do-digitally-connected-indians-feel-secure-during-the-pandemic/
2024-09-12T08:46:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00589.warc.gz
en
0.948919
842
2.65625
3
The benefits of decentralised clinical trials are becoming well known as many contract research organisations (CROs) and pharmaceutical companies increasingly adopt hybrid or fully-virtual models. Among these benefits are improved patient care, simplified patient experience, and increased data accuracy which results in a faster time-to-market. A critical component of decentralised clinical trials is the use of electronic diaries, or e-diaries, to collect data directly from patients. In standard clinical trials, patient data is often collected in the form of paper diaries, which may not have the same level of data accuracy or convenience as e-diaries. The EMA (European Medicines Agency) understands that conducting clinical trials requires the highest standard of patient safety and transparency of trial information. One great example, is the use of e-diaries to improve patient compliance. A recent study by U.S. National Library of Medicine, sites an observed reduction in behaviours such as parking lot compliance, which refers to patients who fill in the entire diary immediately before a study visit, and forward filling, which is when a patient enters data ahead of the scheduled time. eCOA refers to electronic clinical outcome assessment, which is a method of electronic data capture used in clinical trials. According to Data Bridge Market Research, the eCOA market is set to grow by 15.5% (CAGR) from 2020 to 2027 - reaching $820.38 million. Growth in Europe is being attributed to a large number of pharma's willingness to adopt new technologies, high R&D spending and 'increasing prevalence of diseases creating a demand for highly efficient pharmaceutical activities for quick recovery.' For decentralised clinical trials, there are a number of eCOA measures used to determine how a treatment is working, and one of those measures is ePRO. Electronic patient-reported outcome, or ePRO, is the use of an electronic device to capture and transmit patient-provided data, including dosage and timing, side effects, and symptoms. Helping streamline the information submission from clinical trials in the European Union to CTIS (Clinical Trials Information System), which covers all regulatory and ethics assessments. Paper methods of patient data collection have no way of controlling the quality or timeliness of data entry, making it impossible to avoid issues like skipped items, poor handwriting, and inappropriate responses. ePRO is able to deliver reliable data and higher patient protocol compliance. Global clinical software companies provide unique eCOA and ePRO platforms that facilitate accurate data collection and regulatory compliance. Mobile devices such as tablets, smartphones, and wearables can reduce human error and provide accurate, medical-grade biometrics for eCOA/ePRO data collection. Their use has been shown to improve the patient experience, which results in increased patient compliance. Deploying connected devices for decentralised clinical trials across the globe is a daunting task — selecting the right technology partner can streamline the process of sourcing, securing, and connecting mobile devices that are used for patient data collection within clinical trials across the globe. KORE powers connected health devices for decentralised clinical trials with a combination of wireless connectivity and network management, mobile device management, and deployment services — reducing time-to-market and total cost of ownership. For a guide to the primary questions that you should be asking when considering decentralised clinical trials, download our eBook “Key Considerations for Clinical Trials.” Stay up to date on all things IoT by signing up for email notifications.
<urn:uuid:b7330f3a-5e07-44a0-8f24-b22ff5f9c32b>
CC-MAIN-2024-38
https://eu.korewireless.com/blog/ecoa-and-epro-improving-data-accuracy-in-decentralized-clinical-trials
2024-09-12T08:32:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00589.warc.gz
en
0.928279
712
2.53125
3
Ransomware is now commonplace within many industries around the world. Over the past three years, especially during the pandemic, ransomware attacks have increased in many different sectors including government, healthcare, education, professional services, and manufacturing. According to the FBI’s Internet Crime Report 2021, America experienced an unprecedented increase in cyber-attacks and malicious cyber activity during 2021. These cyber-attacks compromised businesses in an extensive array of sectors as well as the American public. This trend extended right around the world. Who is targeted by ransomware? Private businesses, universities, and governments have spent hundreds of millions on ransomware attacks. Not only is it the cost of paying the ransom (which the FBI advises against), but extensive costs on cleaning up the aftermath – from rebuilding systems and networks to restoring backups and increasing cybersecurity measures. What’s more, the cost of downtime ransomware targets face after the attack is between five to ten times more than the ransom amount. The average downtime is now estimated to be at around 10 days with costs escalating year on year. With that in mind, let’s take a closer look at the industries impacted by ransomware. Top ransomware targets Ransomware attacks have tended to focus on government agencies and big businesses because they’re the ones who usually pay up. Ransomware targets are usually those with a weak cybersecurity infrastructure. However, attacks have evolved, and hackers are now targeting specific organisations to reach specific targets and assets. Some of the key industries becoming ransomware targets include: The FBI’s internet crime report outlined how there were at least 649 ransomware attacks on critical infrastructure healthcare organisations in the United States alone from June 2021 to December 2021. The United Kingdom’s National Health Service also revealed a significant increase in ransomware attacks which highlights only a small snippet of the threat globally. Healthcare bosses are now only too happy to deploy an array of methods, policies, and technologies to prevent ransomware attacks from bringing down their network or systems, and from leaking important data and sensitive information. Education is also one of the main ransomware targets. University College London, the University of Calgary, and Los Angeles Valley College are just a few educational facilities that have suffered at the hands of hackers. The education sector presents an easy target to ransomware attackers for many reasons. Students are often easy targets and not really aware of ransomware attack methods. They can be targeted through malicious files or attachments, or by visiting websites that could prove damaging to educational networks or systems. The interconnected nature of university campuses, and the way the networks are configured, make way for malware infiltration points. Educational facilities also suffer from cost constraints surrounding their IT systems, which can cause lapses in security and creates a feeding ground for hackers. For instance, Howard University, one of America’s leading colleges, cancelled classes in September 2021 after it was hit by a ransomware attack. More than 11,000 undergraduate, graduate, and professional students are enrolled in the university and their sensitive data was put at risk. Howard worked with the FBI and city officials to rectify the damage and install extra safety measures to protect the university’s data. Cyber attackers targeted county and city governments across America with 79 recorded ransomware attacks in 2020, which impacted an estimated 71 million people. During the same timeframe, the average ransom demanded from governmental related organisations stood at US$570,857, with millions being paid out to hackers at the same time. Government entities right around the world continued to be threatened throughout 2021, which has seen a lot of rules, regulations and policies put in place to ensure government agencies and third-party vendors are protecting themselves and their supply chain against attack. Energy & utility The energy & utility industry is the lifeline of every economy around the world. Electricity powers our hospitals, traffic systems, and water treatment plants, whereas the oil producers keep our cars on the roads. With so much riding on the industry, hackers are targeting firms to infiltrate their systems, charging large sums to set them free and causing widespread damage at the same time. The three most important attacks of recent times include the Colonial Pipeline attack, the Volue ASA attack, and the infamous COPEL and Electrobras attack. All three caused many millions in damages and affected the ‘production’ line. Ransomware remediation strategies Here, RiskXchange has highlighted the top three ransomware remediation strategies to help fight ransomware attacks: - Strong patching cadence Ensure a strong patching cadence. Going “back to basics” is the best way to prevent ransomware attacks. Ensuring a robust cybersecurity hygiene model and strong, consistent performance is important when it comes to protecting your network against attack. - Identify misconfigured systems Ransomware targets and regularly exploits weak configuration management protocols, the misconfigured TLS/SSL configurations being the most notable. TLS/SSL certificate and configuration management have so far proven particularly challenging for security teams. Organisations tend to have hundreds, if not thousands of TLS/SSL certificates that identify each internet-connected device in their network. For security teams trying to pinpoint a TLS/SSL security misconfiguration is like looking for a needle in a haystack. RiskXchange, however, can scan for misconfigured TLS/SSL certificates alongside other vulnerabilities to help secure your network or system. - Continuously monitor your vendors’ security postures The key to mitigating third-party risk is through understanding your vendors’ security postures. However, vendor evaluation via security assessments and questionnaires doesn’t always paint a full picture and only records a moment-in-time view of risk instead of an overall accurate assessment. Continuously monitoring your vendors’ security posture is key to preventing an attack. Get in touch with RiskXchange to find out more about who is targeted by ransomware and how to fight against ransomware attacks.
<urn:uuid:47f97239-8e57-4ef5-9ff1-c80ce80f7fc1>
CC-MAIN-2024-38
https://riskxchange.co/5374/who-is-targeted-by-ransomware/
2024-09-13T14:52:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00489.warc.gz
en
0.952211
1,196
2.734375
3
In the run-up to the 2016 U.S. presidential elections, Democratic candidate Hillary Clinton received a serious blow from a series of leaks coming from the email account of her campaign chairman John Podesta. Hackers were able to access the contents of Podesta’s account by staging a successful phishing attack and stealing his credentials.pass Podesta is one of the millions of people whose passwords get stolen as a result of social engineering attacks or data breaches every year. A recent research by security firm 4iQ found a 41-gigabyte file being sold on the dark web, which contained 1.4 billion usernames and passwords. It is now evident more than ever that passwords are not enough to protect online accounts. With each of us managing dozens of online accounts, keeping every password unique is becoming increasingly burdensome. That’s why we often reuse passwords, which can lead to chain attacks when one password is revealed to hackers. As computers grow faster, stronger and more affordable, we’re forced to create more complex passwords to protect our accounts against brute-force attacks. And as quantum computing gradually turns from myth to reality, no amount of complexity will protect us against hackers. And finally, as long as our passwords are stored somewhere in servers, a hacker can always get a hold of them by breaking into those servers. The Role of MFA in Protecting Online Accounts Some businesses have tried to move toward two-factor and multi-factor authentication (MFA) to make up for the flaws of passwords. Multi-factor authentication implies using two or more varying methods to authenticate users when they try to access sensitive accounts and digital assets. MFA removes the single point of failure imposed by passwords. This means hackers who gain access to online account passwords through phishing attacks or data breaches still won’t be able to access those accounts because they won’t be able to produce the second factor. Older generations of multi-factor authentication involved using passwords and a second token. For instance, this could be a password and an SMS code sent to a phone associated with the account, or a time-based passcode generated by a mobile app. However, these methods had two fundamental problems: - Unfriendly experience: Most users find it bothersome to go through an extra step to verify their identity each time they want to access their accounts. This consequently pushed users to deactivate 2FA on accounts or frequently used devices, which opens the way for new types of attacks and account takeovers. - Insecure methods: Although traditional two-factor authentication (2FA) is more secure than plain passwords, it’s not uncompromisable. It still involves the use of passwords, which have very distinct vulnerabilities, and the secondary factors often have their own security holes. A crafty hacker will be able to intercept, replicate or deactivate 2FA codes with enough effort. What the Future Holds for MFA The next generation of multi-factor authentication (MFA) mechanisms will combine impregnable security and ease of use, ensuring that users have a frictionless experience while preventing hackers from finding and exploiting loopholes. Passwords will most likely disappear and give way to more reliable and user-friendly methods. A recent survey conducted by Secret Double Octopus found that most company employees find passwords unwieldy and burdensome, and would prefer biometric authentication as the main method for securing their online accounts. “There is no doubt that over time, people are going to rely less and less on passwords,” adding that passwords “just don’t meet the challenge for anything you really want to secure.” Bill Gates Biometrics were previously expensive and inaccurate, but recent years have seen precise and affordable fingerprint, iris and face scanners integrated into a large number of consumer devices. Companies will be able to leverage these technologies to replace passwords a Bring Your Own Device (BYOD) approach to authentication. An example of modern multi-factor authentication is Secret Double Octopus’s passwordless identity verification solution. Secret Double Octopus obviates the need for storing any form of secrets, be it passwords or security keys. Moreover, every authentication attempt is performed over multiple channels, each using a separate security method. Meanwhile, the user experience is seamless and frictionless, requiring only a tap or a fingerprint verification on the Octopus Authenticator app. As hackers become more sophisticated in their methods to take over sensitive accounts and steal critical information, enterprises must also improve their defenses. The next generation of multi-factor authentication technologies will make sure you’re ready to face the security challenges that lie ahead.
<urn:uuid:3e830f58-8b29-4799-8bec-214abac8f077>
CC-MAIN-2024-38
https://doubleoctopus.com/blog/access-management/future-multi-factor-authentication/
2024-09-10T00:46:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00889.warc.gz
en
0.945944
950
3.015625
3
Ransomware attack is a threat that has been hanging over businesses and individuals since the mid-2000s. In 2017, the FBI’s Internet Crime Complaint Center (IC3) received 1,783 ransomware complaints that cost victims over $ 2.3 million. These complaints, however, only represent attacks reported to IC3. The actual number of ransomware attacks and costs are much higher – around 184 million ransomware attacks occurred just last year. The ransomware’s original intent was to target individuals, who still make up the majority of attacks today. This article will look at the history of ransomware from its first documented attack in 1989 to the present day. We will show you in detail some of the most significant attacks and ransomware variants. Finally, we’ll take a look at future evolution and ransomware solutions. What is ransomware? Understanding what ransomware is is very simple: we are talking about a type of malicious software that takes over files or systems and blocks users from accessing them. All files, or even entire devices, are held hostage using encryption until the victim pays a ransom in exchange for the decryption key. Although ransomware has been around for decades, its variants have become increasingly advanced in their abilities to spread, evade detection, encrypt files, and force users to pay the ransom. This means that ransomware attacks have become more dangerous and damaging to victims. How ransomware attacks work Now that we have a rough definition of ransomware let’s move on to a more detailed account of how these malicious programs gain access to a company’s files and systems. The term “ransomware” describes the software’s function, which is to extort users or companies for profit. However, the program must gain access to the files or system it will hold hostage, and this access takes place through infections or attack vectors. Technically, an attack vector or infection is the means by which the ransomware gains access. Examples of vector types include: - Email attachments: a commonly used method to deceive users involves inserting a malicious attachment within an email disguised as urgent. - Messages: this includes both instant messaging systems and chats related to social networks. One of the most important channels used in this approach is, in fact, Facebook Messenger. - Pop-ups: pop-ups are created to mimic the software currently in use on the victim’s device so that the victim feels comfortable following the instructions which, in the end, will harm the user. The first ransomware attack Although ransomware has maintained preeminence as a major threat since 2005, the first attacks occurred much earlier. According to Becker’s Hospital Review, the first known ransomware attack occurred in 1989 and targeted the healthcare industry. Thirty years later, the healthcare sector remains a major target of ransomware attacks. The first attack was launched in 1989 by Joseph Popp, an AIDS researcher, who initiated the attack by distributing 20,000 floppy disks to other AIDS researchers in more than 90 countries, claiming that the disks contained a program that analyzed, through a questionnaire, the risk for an individual of contracting AIDS. However, the disk also contained a malware program that would have been activated only after turning on the computer 90 times. Upon reaching the 90th, the malware displayed a message requesting a payment of $ 189 and an additional $ 378 for the software rental. This ransomware attack became known as the AIDS Trojan or PC Cyborg. The evolution of ransomware This first ransomware attack was, to be polite, rudimentary and reports indicate it had flaws. Nonetheless, it laid the foundation for the evolution of ransomware in the sophisticated attacks taking place today. According to Fast Company, the first developed ransomware players used to write their own encryption code themselves. On the other hand, today’s attackers are increasingly relying on “ready-to-use libraries that are significantly harder to crack” and are leveraging more sophisticated delivery methods such as spear-phishing. Besides, some more sophisticated attackers are developing toolkits that can be downloaded and implemented by attackers with less technical skills. Meanwhile, some of the most advanced cybercriminals are monetizing ransomware by offering ransomware-as-a-service programs, which has led to the growth of well-known ransomware such as CryptoLocker, CryptoWall, Locky, and TeslaCrypt. These are some examples of common types of advanced malware. CryptoWall alone has generated over $ 320 million in revenue. After the first documented ransomware attack in 1989, this type of cybercrime remained rare until the mid-2000s, when attacks started using more sophisticated and harder-to-crack encryption algorithms such as RSA encryption. Popular during this time were Gpcode, TROJ.RANSOM.A, Archiveus, Krotten, Cryzip, and MayArchive. Kaspersky’s SecureList reports that from April 2014 to March 2015, the most prominent ransomware threats were CryptoWall, Cryakl, Scatter, Mor, CTB-Locker, TorrentLocker, Fury, Lortok, Aura, and Shade. “Between them they were able to attack 101,568 users around the world, accounting for 77.48% of all users attacked with crypto-ransomware during the period” – states the report. In just one year, the landscape has changed significantly. According to Kaspersky 2015-2016 research, “TeslaCrypt, together with CTB-Locker, Scatter and Cryakl were responsible for attacks against 79.21% of those who encountered any crypto-ransomware.” The biggest ransomware attacks and the most prominent variants Considering the advancement of ransomware, it is not surprising that the largest ransomware attacks have occurred in recent years. Reports indicate that, in the mid-2000s, requests were averaging around $ 300, while today they are close to $ 500. Usually, there is a specific deadline for the ransom by which the ransom note doubles or it is no longer possible to access the files because they are destroyed or permanently blocked. CryptoLocker was one of the most profitable ransomware of its time: between September and December 2013, it infected more than 250,000 systems, earning more than $ 3 million before being KOed in 2014 via an international operation. Later, its encryption model was analyzed and now an online tool is available to recover encrypted files compromised by CryptoLocker. The demise of CryptoLocker only led to the emergence of several imitation ransomware variants. From April 2014 until early 2016, CryptoWall was among the most commonly used ransomware varieties, with several variants targeting hundreds of thousands of individuals and companies. In mid-2015, CryptoWall had extorted over $ 18 million from victims, prompting the FBI to warn users of the threat. In 2015, a variety of ransomware known as TeslaCrypt or Alpha Crypt damaged 163 victims, earning attackers $ 76,522. TeslaCrypt usually requested redemptions in Bitcoin, although PayPal or My Cash cards were used in some cases, with amounts ranging from $ 150 to $ 1,000. Also in 2015, a group known as the Armada Collective carried out a series of attacks on Greek banks. Instead of paying for it, the banks increased their defenses and avoided further interruptions of the service, despite Armada’s subsequent attempts. For attacks on larger corporations, ransoms are estimated to reach $ 50,000. However, a ransomware attack last year against a Los Angeles hospital system, the Hollywood Presbyterian Medical Center (HPMC), would have demanded a ransom of $. 3.4 million. The attack forced the hospital back into the pre-IT era, blocking access to the corporate network, email and critical patient data for ten days. Petya is an advanced ransomware that encrypts a computer’s master file table and replaces the master boot record with a ransom note, rendering the computer unusable unless the ransom is paid. In May, it further evolved to include direct file encryption capabilities. Petya was also among the first ransomware variants to be included as part of a ransomware-as-a-service operation. By mid-2016, Locky had solidified its place as one of the most commonly used ransomware varieties, with PhishMe research reporting that Locky’s use had overtaken CryptoWall as early as February 2016. During Black Friday (November 25) of 2016, the San Francisco Municipal Transportation Agency was the victim of a ransomware attack that disrupted train ticketing and bus management systems. The attackers demanded a whopping 100 Bitcoin ransom (equivalent to around $ 73,000 at the time), but thanks to the quick response and full backup processes, SFMTA was able to restore its systems within two days. The ransomware used in the attack might have been Mamba or HDDCryptor. In 2016, also emerged one of the first ransomware variants to target Apple OS X. KeRanger mainly affected users using the Transmission application, but it affected around 6,500 computers in a day and a half, after which it was quickly removed. The future of ransomware These incidents are catapulting ransomware into a new era, in which cybercriminals can easily replicate small-scale attacks while targeting much larger companies to demand large ransoms from. While some victims can mitigate attacks and restore their files or systems without paying a ransom, even a small percentage of attacks are enough to produce substantial revenue and incentives for cybercriminals. As we have said several times in our articles, it is important to reiterate that paying the ransom does not guarantee access to your files. The best ransomware solution is decrypt files without paying the ransom. CryptoLocker ransomware “extorted $ 3 million from users but did not decrypt the files of everyone who paid,” CNET reports based on the results of a Security Ledger article. A Datto survey found that in ¼ of incidents in which ransom was paid, attackers avoided unlocking victim data. Incidentally, ransomware operations continue to find more creative ways to monetize their efforts, as was the Petya and Cerber ransomware that pioneered ransomware-as-a-service schemes. The potential profit for ransomware authors and operators also drives rapid innovation and fierce competition among cybercriminals. PetrWrap ransomware, for example, was built using cracked code from Petya. For the victims, knowing where the code comes from does not matter: whether you have been infected with Petya or PetrWrap, the result is the same, your files are encrypted with such a powerful algorithm that no ransomware decryption tools currently exist. What are the prospects for ransomware? A new report from the National Cyber Security Center (NCSC) and the UK’s National Crime Agency (NCA) warns of developing threats such as ransomware-as-a-service and mobile ransomware. The pace at which the IoT is growing, combined with IoT devices’ widely reported insecurity, provides a whole new frontier for ransomware operators. Best practices for ransomware protection, such as regular backups and software updates, don’t apply to most connected devices, and many IoT vendors are slow or just plain careless when it comes to releasing software patches. Now, as businesses increasingly rely on IoT devices to perform operations, there could be a spike in ransomware attacks on these types of devices. Critical Infrastructure represents another worrying target for future ransomware attacks: DHS Business Performance Management Office Director Neil Jenkins warned at the 2017 RSA conference that water services and similar infrastructure could be attractive targets for attackers. Jenkins referred to a January 2017 ransomware attack that temporarily disabled components of an Austrian hotel keycard system as a potential predecessor for more significant infrastructure attacks to come. How to protect yourself from ransomware attacks There are some steps that end-users and businesses can take to reduce the risk of falling victim to ransomware significantly. Here are four basic security practices any business must follow: - Frequent and verified backups: Backing up all files and systems is one of the most powerful defenses against ransomware. All data can be restored to a previous save point. - Structured and regular updates: The software creator regularly updates most software used by companies. - Reasonable Restrictions: Certain restrictions should apply to those who: - Work with devices that contain company files, documents and/or programs; - use devices connected to corporate networks that could be made vulnerable; - are third party or temporary workers. - Proper credential monitoring: any employee, contractor, and person who is granted access to systems create a potential point of vulnerability for ransomware. Replacement or failure to update passwords and improper restrictions can lead to even greater chances of attack. While these practices are fairly well known, many people fail to back up their data regularly, and some companies only do so within their own networks, which means that a single ransomware attack can compromise all backups. Effective defense against ransomware largely depends on proper education. Educating employees about the telltale signs of ransomware distribution tactics, such as phishing attacks, drive-by downloads, and spoofed websites, should be a top priority for anyone who uses a device today. Businesses should also implement security solutions that enable advanced threat protection. In any case, if you have been damaged by ransomware, do not despair, avoid doing it yourself and contact qualified specialists like those of HelpRansomware, who will be able to solve your problem with the best solutions.
<urn:uuid:e4fdc044-8d4e-47db-8286-b6bb39b75615>
CC-MAIN-2024-38
https://helpransomware.com/ransomware-attack/
2024-09-12T13:39:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00689.warc.gz
en
0.953218
2,792
3.28125
3
Let’s start with where the word ‘captcha’ came from. It turns out that captcha is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. The idea behind the captcha technology (and behind the original Turing test as well) is simple: it’s a test humans are able to pass, but online bots cannot. Captcha usually comes in the form of a distorted text image that must be re-typed in order to verify you’re not a computer. Captcha technology is important because it provides simple and practical security to a variety of different things, such as protecting website registration, preventing comment spam on blogs, making sure only humans vote in online polls and more. Without the captcha technology, spammers could potentially abuse these situations by setting up numerous accounts, leaving a ridiculous number of comments, or voting an unlimited number of times in the same poll. The first versions of captcha were relatively easy for computers to bypass. It resulted in an arms race between hackers and captcha developers. After the former brought a new version of captcha down, the latter would come with a newer, stronger version. — WP Engine (@wpengine) April 14, 2016 At some point Google took the stage and released it’s reCAPTCHA, which is now unofficially considered a captcha standard. It uses not only distorted text but also images and is believed to be one of the strongest captcha services. Google’s reCAPTCHA technology is used by Google itself, Facebook and many other websites as a means of protecting against spam and abuse. In fact, reCAPTCHA is the most popular captcha provider in the world. Unfortunately, it seems the technology might not be as foolproof as once thought. Security researchers at Columbia University have discovered flaws in Google’s reCAPTCHA technology that open the door for hackers to influence the risk analysis, bypass restrictions, and deploy large-scale attacks. The researchers stated they were able to design a low-cost attack that successfully solved more than 70% of the image reCAPTCHA challenges, and each challenge required an average of only 19 seconds to solve. They also applied the system to the Facebook image captcha and found an accuracy level of 83.5%. The higher accuracy on Facebook is believed to be a result of Facebook’s images being of a higher resolution. The system used techniques that allowed it to bypass cookies and tokens, and used machine learning as a way of correctly guessing the images. The funny part is that this reCAPTCHA breaking system is powered by Google’s own reverse image search. But it also can work offline. “Nonetheless, our completely offline captcha-breaking system is comparable to a professional solving service in both accuracy and attack duration, with the added benefit of not incurring any cost on the attacker,” the researchers stated, emphasizing the simplicity and cost-effectiveness of this particular attack. Before the findings were made public, the researchers alerted Google and Facebook to inform them of these potential flaws. They stated that Google responded by attempting to improve the security of reCAPTCHA, but Facebook does not appear to have taken any steps towards improvement. Best captcha I've seen for a while. pic.twitter.com/tVvbwjmTLC — Siddharth Vadgama (@siddvee) April 12, 2016 Researchers believe that hackers could reasonably charge $2 for 1000 solved captchas, and as a result could make over $100 a day. They could stand to make even more if they launch multiple attacks at once or utilize additional techniques. The research shows that there is a still a lot to be done in the world of cybersecurity, but also gives the chance for many companies, such as Google, to move forward and more actively look at their current security measures. Google has already shown interest in strengthening its security, and hopefully other websites aren’t far behind.
<urn:uuid:368c6379-57c1-47c4-8f0e-6c85611150a6>
CC-MAIN-2024-38
https://www.kaspersky.com/blog/googles-recaptcha-defeated-by-security-researchers/11880/
2024-09-15T00:45:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00489.warc.gz
en
0.957067
819
3.8125
4
Supply Chain Planning (SCP) is a process that helps businesses ensure that they have the right amount of goods and services to meet customer demand. It involves forecasting future demand and then organizing production and inventory accordingly. In this blog post, we will discuss the definition of SCP, its types, process, examples, benefits, and processes involved in supply chain planning. What is Supply Chain Planning? It is part of supply chain management (SCM). Supply chain planning can be defined as ensuring a business has the right amount of goods and services to meet the market demand. It involves forecasting future demand and then organizing production and inventory accordingly. There are various types of SCP, each with its own set of processes. Types of SCP There are three types of supply chain planning: Short term planning This type is used to plan and manage production for weeks or months. It involves forecasting demand, organizing production, and managing inventory. Long term planning This type is used to plan and manage production for the next few years. Integrated supply chain management This type is a comprehensive approach that integrates short- and long-term planning into a single process. Supply Chain Planning Strategies There are a few key strategies that can help make SCP more effective. 1. Having accurate and up-to-date data is essential for making informed decisions. This means having a system to track inventory levels, sales trends, and supplier performance. 2. Planning is vital for ensuring enough stock to meet market demand while avoiding overstocking, leading to waste and lost profits. 3. Regularly reviewing and adjusting plans based on changing conditions is critical for ensuring that the supply chain runs smoothly. Factors that can impact plan adjustments include changes in customer needs, supplier reliability, and market conditions. By following these key supply chain strategies, supply chain planners can develop more effective plans that will help improve the supply chain’s overall efficiency. Supply Chain Planning Process There are five main processes involved in supply chain planning: 1. Demand planning It is part of the demand management process. The demand planning process involves predicting future demand for goods and services. This forecasting is done by analyzing past sales data and trends in market demand. The output of this process is a demand plan. 2. Capacity planning Capacity planning determines how much supply a company has to meet forecasted demand. It includes organizing production, setting up suppliers, and managing inventory. 3. Production planning Production planning determines what products will be made and when they will be made. This includes setting up schedules, ordering materials, and preparing production lines. 5. Sales and operations planning (S&OP) Sales and operations planning (S&OP) brings sales and operations together to make decisions affecting both departments. The goal of S&OP is to ensure that the two departments are working together to meet consumer demand. Therefore, sales and operations planning should be done regularly, preferably monthly. The process of sales and operations planning usually involves the following steps: - Collecting data - Analyzing data - Developing plans - Communicating plans - Tracking results Five Steps to Supply Planning and S&OP Success 1. A supply plan is created by forecasting future demand and organizing production and inventory to meet that demand. 2. Forecasting future demand – Forecasting future requests is made by analyzing past sales data and trends in customer demand. Various forecasting methods can be used, including trend analysis, regression analysis, and causal models. 3. Determining inventory levels – The inventory levels are determined by calculating how much supply is needed to meet forecasted demand. This process includes setting reorder points, ordering stock, and tracking inventory levels. 4. Planning production – Planning production by setting up schedules, ordering materials, and preparing production lines. S&OP integrates short-term and long-term supply chain planning into a single process. 5. Adjusting plans as needed – The supply plan and inventory levels are adjusted to meet changes in demand. This can involve modifying production schedules, ordering more or less stock, and adjusting inventory levels. These five steps work together to help a company keep its supply chain running smoothly and meet customer demands. Flows in the supply chain - The flow of products and materials between suppliers and manufacturers - The flow of products and materials between manufacturers and distributors - The flow of products and materials between distributors and retailers - The flow of information throughout the supply chain Supply Chain Planning Examples One standard supply chain planning example is for a company that sells products online. In this case, the SCP process would involve forecasting how the company will sell many products in the next month. Then you need to figure out how many resources (such as employees, products, and shipping supplies) you need to meet that demand. And then arrange for the production and delivery of those resources. Another example would be a company that needs to replenish its stock of products. The company would need to forecast how much product it will need and then order that amount from its suppliers. The company needs to plan for changes, like increasing demand for a product. This could require the company to place a larger order from its suppliers, or it could mean that it would need to find a new supplier if its current supplier can’t meet the increased demand. Benefits of Supply Chain Planning Mainly they benefit production operations substantially. The following are other benefits of supply chain planning: Planning helps companies produce the right products in the right quantities, increasing profits. Improved customer service Supply chain planning helps businesses meet customer orders quickly and efficiently by forecasting demand and organizing production. Reduced inventory costs Properly managed inventory can help reduce stock outages and excess inventory costs. Improved supply chain coordination When all parts of the supply chain work together, it leads to a more efficient and coordinated supply chain. This can lead to increased profits and improved customer service. Less waste and fewer delays By forecasting demand, companies can plan production, which helps avoid wasted time and resources due to last-minute changes in the market. This also helps eliminate or reduce delays when different parts of the supply chain are not coordinated. How to Get Started with Supply Chain Planning? If you’re interested in getting started with SCP, here are a few tips: Talk to your suppliers Your suppliers can be a valuable resource when it comes to capacity planning. First, ask them about their current production capabilities and plans. Use historical data Historical data can help forecast future demand. Use past sales, surveys, and market research data to help you plan demand. Use software tools Software tools can automate the supply chain planning process and make it easier to manage. Various supply chain management software options are available, so find one that fits your needs. SCP is critical for any business to meet customer demand while maximizing profitability. What are the Tools Used for SCP? Companies can use various management tools and techniques for effective supply chain planning. However, some of the critical supply chain drivers are: - Sales and Operations Planning (S&OP) – S&OP is a process that helps companies to align supply and demand. It involves forecasting future sales, organizing production, and managing inventory. - Master Production Schedule (MPS) – An MPS is a schedule showing each product’s planned production. It can plan production, track inventory levels, and set supply chain deadlines. - Material Requirements Planning (MRP) – MRP is a process that helps companies order the correct amount of materials to meet production demands. It uses past sales data and current inventory levels to calculate how much material needs to be ordered. - Just-in-Time (JIT) – JIT is a supply chain strategy that aims to reduce inventory levels and improve delivery times. It relies on suppliers to deliver materials just in time for production. - Capacity Requirements Planning (CRP) – CRP is a process that helps companies to plan for future production needs. It uses historical data and current capacity levels to forecast future production demands. - Inventory Management – Inventory management is tracking and managing inventory levels. This can include tasks such as ordering, stocking, and shipping inventory. - Transportation Management – Transportation management is organizing and coordinating transportation resources. This can include selecting carriers, arranging shipments, and tracking freight. Who does the supply chain planning in an enterprise? In most organizations, supply chain planning is a centralized function carried out by supply chain planners. The supply chain planners are responsible for developing and executing the supply chain strategy, ensuring that the supply chain can meet the demands of the business. Supply chain professionals use various tools and processes to carry out their work. Some of the most common tools include supply chain modeling, inventory management, and forecasting. The New Trend in Supply Chain Planning The new trend in supply chain planning is moving away from traditional, siloed planning processes and towards collaborative, end-to-end supply chains. This involves breaking down the barriers between different parts of the supply chain and working together to plan and execute supply chain operations. Another trend is the use of big data and analytics. With so much data available, businesses can use analytics to develop predictive models that help them forecast demand more accurately. This allows them to better plan their production and inventory levels, which leads to increased profitability and improved customer service. The Future of Supply Chain Planning The future of supply chain planning will likely include more automation and artificial intelligence (AI). AI can help planners optimize their supply chains by predicting demand, identifying bottlenecks, and recommending solutions. There are several ways that businesses can use AI in SCP, including the following: Companies can use AI to predict future demand based on past data. Demand planning can help companies to plan their production and inventory levels better. Supply network design AI can help companies optimize their supply networks, ensuring they have the right suppliers and inventory at suitable locations. AI can help companies optimize their order processing, reducing time and money spent on processing orders. Overall, AI can play a vital role in improving supply chain execution and ensuring that companies meet customer satisfaction. In addition, companies can improve their efficiency and competitiveness in the market by using AI. Another potential trend in supply chain planning is the use of blockchain technology. For example, companies could use blockchain to create a distributed ledger of transactions they can share between supply chain partners. This would help to ensure transparency and trust among supply chain partners. It is still early for blockchain in supply chain planning, but it holds great potential for the future. As businesses become more familiar with the technology and its benefits, we expect to see more blockchain implementations in supply chain processes. Big data and analytics The use of big data and analytics is a trend that is likely to continue in supply chain planning. With so much data available, businesses can use analytics to develop predictive models that help them forecast demand more accurately. This allows them to achieve integrated business planning, increase profitability, and improve customer service. In supply chain planning, you should consider three key things: minimizing stock-outs and other supply disruptions, ensuring your suppliers are reliable and cost-effective, and meeting customer demand. This article details supply chain planning and the tools that can help you do it more effectively.
<urn:uuid:3753d344-77c6-4deb-a46c-a468ca1a797e>
CC-MAIN-2024-38
https://www.erp-information.com/supply-chain-planning-scp
2024-09-19T23:49:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00089.warc.gz
en
0.942806
2,326
3.515625
4
In the world of software and web applications, we inevitably run into problems with what we call “bugs”. Some of these bugs can range from a small annoyance to something that could critically disrupt the use of a client’s software. No program, big or small, is free of bugs. Some bugs are introduced when developers make changes to the code behind the program for new features or even fixing other bugs. The code for most applications is so complex and holds many moving parts, even the best developers in the world can have a bug in their code. How does RSI reduce the number of bugs in a client’s software? The answer may seem obvious but may not always be completed like it should be. Testing is the #1 priority and is imperative to reduce the number of bugs. When code goes through the development process, there are several stages before it’s delivered to the client. In each stage, testing is done to filter out bugs and create the best user experience when the software goes live. The first stage in development is when the developers test their code, known as “Unit Testing,” to ensure the changes work correctly and as intended. What is Unit testing? Unit testing are to be completed in two ways, Manual and Automated Unit Testing. - Manual Unit Testing Testing can be accomplished in several ways, one approach is by using a debugging tool. The process runs the program and allows the developer to review their code line by line to ensure the data running through the changed code is functioning as expected. Manual Unit Testing can be time consuming and must be completed thoroughly each time the codebase is changed. - Automated Unit Testing Separate files are written to isolate each piece of code to ensure the code is working correctly. Just like the name suggests, the tests run automatically at the click of a button. The automated testing assists with the lengthy debugging process and initially these tests are more time consuming than a manual test. The separate test file must be written by the developer, sometimes taking as long as the code changes itself. If Automated Unit Testing takes longer to complete, why would you want to use this method? The subject becomes one of the most debated topics among developers. Despite the lengthy process of creating unit tests, they do hold huge benefits to the developer, company, and the client. The key advantage of unit testing is if done correctly, the testing only needs to be written once. From there it can be used to test changes, to code quickly and efficiently which reduces overall time in future development, and/ or changes to code. Another advantage of Automated Unit Testing is it efficiently reduces the chances of bugs getting through to the client because of consistency and efficiency. Consistency allows unit tests to be well thought out and check the same test cases every time. Efficiency can test all the code to a change to one piece of code doesn’t affect or break another piece of code somewhere else.
<urn:uuid:6a2b4557-dd9a-4804-b2db-b6274a837fd9>
CC-MAIN-2024-38
https://www.myrsi.com/manual-automated-unit-development-testing/
2024-09-21T02:11:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00889.warc.gz
en
0.951853
606
2.640625
3
What is Data Deduplication? Data deduplication, also known as “dedupe” for short, is a method of reducing the amount of data stored in backup systems by eliminating duplicate copies of the same data. This process involves identifying and removing identical pieces of data within a storage system, resulting in significant space savings. Traditionally, when a backup is performed, all files are scanned and backed up regardless if they have previously been backed up or not. This leads to redundant copies of the same data being stored multiple times, consuming valuable storage space and increasing costs. With data deduplication, only unique data is stored during each backup. The system compares new data against existing data that has already been backed up and identifies any duplicates. Instead of storing multiple copies of the same data, only the unique pieces are stored, resulting in significant storage savings. Not only does data deduplication save storage space and costs, but it also improves backup and recovery times. Since only unique data is being backed up, the process is much faster and more efficient. In the event of a disaster or data loss, recovery times are also improved as there are fewer files to restore. In addition to these benefits, data deduplication can also improve network performance. With less data being transferred during backups and restores, network bandwidth is conserved, allowing for smoother operations and reduced strain on the network. Furthermore, implementing data deduplication can enhance overall system reliability. By reducing the amount of duplicate data being stored, there is less risk of errors. Advantages of Data Deduplication Some advantages of Data Deduplication include: - Reduced Storage Costs – By eliminating redundant data, organizations can save money on storage costs. This is especially beneficial for businesses with large amounts of data to manage. - Improved Backup and Recovery – As mentioned earlier, data deduplication speeds up the backup and recovery process by only backing up unique data and reducing the amount of data that needs to be restored. - Increased Network Performance – By reducing the amount of data being transferred over a network, data deduplication can improve network performance and reduce bandwidth usage. This is especially beneficial for remote offices or employees who may have limited internet access. - Enhanced System Reliability – By reducing the risk of errors caused by duplicate data, overall system reliability is improved. - Scalability – Data deduplication technology is scalable, meaning it can handle large amounts of data and grow with your business as your data grows. - Cost Savings – With less storage space required, businesses can save on both hardware costs and operational costs associated with managing and maintaining large amounts of data. - Disaster Recovery – In the event of a disaster or data loss, having efficient backup and recovery processes in place is crucial. Data deduplication ensures that only unique data needs to be restored, saving time and resources during the recovery process. - Compliance and Data Governance – For businesses in industries such as healthcare or finance, compliance regulations often require the retention and secure storage of large amounts of data. Data deduplication can help businesses meet these requirements by reducing storage costs and providing efficient backup and recovery processes. Best Practices to Implement a Data Deduplication Backup Strategy - Understand your data – Before implementing a data deduplication backup strategy, it’s important to understand the type of data you are dealing with. This includes knowing the size and frequency of data changes, as well as understanding any compliance regulations that may apply. - Determine the appropriate level of deduplication – There are different levels of data deduplication that can be implemented based on business needs. These include file-level deduplication, block-level deduplication, and byte-level deduplication. Determining which level is most suitable for your data will help maximize storage savings and minimize backup time. - Consider both hardware and software options – Data deduplication can be achieved through both hardware solutions such as storage appliances or software solutions that can be installed on servers or virtual machines. It’s important to evaluate both options and choose the one that best fits your organization’s needs. - Choose a reliable solution – When it comes to data deduplication, reliability is key. Make sure to thoroughly research and compare different solutions before making a decision. Look for features such as automatic error correction, data verification, and support for multiple backup software systems. - Test before implementation – Before fully implementing a data deduplication solution, it’s important to test it out first. This will help identify any potential issues or compatibility problems before they affect your production environment. - Monitor and maintain regularly – Data deduplication requires ongoing monitoring and maintenance to ensure its effectiveness over time. Regular Secure your Data with CloudAlly’s Data Deduplication Backup CloudAlly offers enterprise-grade data deduplication backup that is secure, reliable, and cost-effective. CloudAlly’s solutions back up data using automated incremental backup once a day (that can be increased based on your preference). These are the advantages of CloudAlly’s incremental SaaS backup with data deduplication: - As with every incremental backup we only store the delta changes, backups are faster. It also helps us to index the data better leading to faster granular or point-in-time item-level recovery. - As our backups are stored on high-performance Amazon S3 storage, we subvert the slow recovery typically associated with traditional incremental backup. - Our backup and recovery workflows are also designed to limit API calls thus improving backup scalability and enhancing security. - Incremental backups, lastly, but surely not the least, reduce storage space leading to cost savings. - Our secure cloud platform uses advanced encryption methods to protect your data from unauthorized access.
<urn:uuid:2af73d2d-2611-43c2-be01-071685841744>
CC-MAIN-2024-38
https://www.cloudally.com/glossary/data-deduplication-backup/
2024-09-07T20:49:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00289.warc.gz
en
0.931933
1,218
3.40625
3
October 28, 2019 Much has been written about the digital divide, and usually when it comes to connectivity, rural areas have been given short shrift. Although virtually all urban areas are connected, creating the infrastructure to bring Internet connectivity to every resident of remote communities has historically been too expensive. Around the world, many rural areas face challenges related to geography, population density, and deployment costs that make it unprofitable and unappealing for companies to expand or operate networks. Hungry for broadband However, this lack of high-speed broadband connections comes with high economic, social, and educational costs for people in rural areas. Much of the innovation that has propelled success in industries ranging from automotive to retail have bypassed agriculture, largely because of something farmers can’t control: the availability of decent broadband. Many people don’t think about big data in terms of farming, but every planting date, seed, raindrop, soil test result, and location in a field represents a data point that could be analyzed. Today’s farmers may read about new tech like robots, self-driving tractors, or drones, but without connectivity, none of it can happen. 5G and food Although most of the focus on the “fifth generation” 5G technology has been connecting urban areas, Cisco has been working with a consortium of partners in the UK on an initiative called 5G RuralFirst. The project has demonstrated how it’s possible to bring high-speed networks to even the most remote and challenging rural locations economically and as result can transform agriculture over the next decade. Led by Cisco alongside principal partner University of Strathclyde, the first goal of the project has created rural test-beds and trials for 5G wireless and mobile connectivity across three main sites in the Orkney Islands, Shropshire, and Somerset. Like many rural areas, the UK doesn’t have good connectivity in rural areas. In fact, only 63% of the UK has mobile data coverage. Connecting the unconnected and underconnected with 5G The 5G RuralFirst initiative is focusing on seven different projects to show the range of benefits of 5G. The Arigitech project tests the potential of 5G technologies to improve how farms grow crops and look after livestock. The term agritech refers to the use of technology in agriculture to improve yield, efficiency, and profitability. One of the Agritech tests in process involves cows. Connecting dairy cows At the government-funded Agricultural Engineering Precision Innovation Centre in Shepton Mallet, a farm has put 5G collars and tags on one-third of their 180 diary cows. Using the connected collars, the farmer can keep track of the eating pattern, rumination, fertility, and day-to-day health of each specific cow. The collar also detects when an individual cow ovulates, so the farmer can optimize the timing of insemination, maximize the potential for pregnancy, and milk yield. The 5G smart collars also help automate milking by wirelessly communicating with a robotic milking system. Without any human intervention, cows can walk up to the milking station and hook up to the milking robot. As a cow steps into one of the robotic milkers, sensors recognize the animal, record her health and fertility status and know how much milk she is expected to give. The robotic system also provides the cow with a food reward. The second phase of the connected cow project adds a pedometer that will help score the mobility of each cow. Ideally, cows should lie down and rest for 10-14 hours per day. Getting enough rest reduces the risk of lameness, increases blood flow to the udder, and decreases stress hormones. Not surprisingly, healthy well-rested cows produce more milk. With the pedometer tags, farmers will gain better visibility into the health of their cows, so they can better manage the herd. Using 5G, data is picked up directly from the sensors on the cows, bypassing the need for a desktop computer on the farm. The data goes directly to the cloud where it can be combined with other relevant data and sent to farm staff. With 5G, this data round trip takes only milliseconds, so the staff can make instant, informed decisions about the welfare and management of their cows. Healthy food is good for everyone Technology that can monitor the health of animals and their environment makes it easier for farmers to keep track of their health and welfare, even over large remote areas. This tracking is good for the welfare of the animals, but also benefits farmers economically. And of course, it benefits those of us who want to eat good, healthy food. Reimagine what’s possible 5G RuralFirst shows us that we can and should develop 5G technologies across the rural areas of the US and other countries. The project introduced innovative 5G stand-alone service at fractions of the cost of traditional mobile networks. New virtualized and cloud-native solutions along with advanced automation greatly reduce how long it takes to implement services and the total cost of ownership. For more information about the 5G RuralFirst project, 5G, and related topics please visit: — Dan Kurschner, 5G and SP Mobility Marketing lead in Cisco Global Service Provider Marketing — Susan Daffron, Technical Storyteller for Global Service Provider Marketing at Cisco This content is sponsored by Cisco. You May Also Like
<urn:uuid:17a42a2a-6bb4-4776-8fc4-22921ce5f83a>
CC-MAIN-2024-38
https://www.lightreading.com/5g/farms-cows-5g
2024-09-11T10:35:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00889.warc.gz
en
0.945258
1,120
3.265625
3
Infonomics is the discipline of managing and accounting for information as a formal business asset. By using infonomics, organizations can discover the hidden value of their data and use it to achieve business results. What is Infonomics? Infonomics is a term coined by Douglas B. Laney, a distinguished analyst and vice president at Gartner, Inc. He defines infonomics as the theory and practice of treating information as an actual enterprise asset. Organizations should treat information as carefully as they do with money, property, and employees. Managing and keeping track of information is important, just like other valuable assets. The Three Pillars of Infonomics To successfully implement infonomics, organizations need to focus on three key pillars: monetization, management, and measurement. Monetizing information doesn’t necessarily mean selling data for cash. Instead, it involves using data to create value for the organization. You can do this by using information to streamline processes, enhance product quality, improve customer relationships, or reduce risks. By monetizing information, businesses can turn their data into a revenue-generating asset. For example, a retailer could use customer data to personalize marketing campaigns, leading to increased sales and customer loyalty. A manufacturing company could analyze sensor data from its equipment to predict maintenance needs and prevent costly downtime. These are just a few examples of how organizations can monetize their information assets. Effective information management is crucial for the success of infonomics. Organizations should set up systems and user hierarchies to keep data safe, compliant, and accessible to the right people when needed. This involves implementing robust data governance frameworks, access controls, and real-time monitoring capabilities. Data governance is particularly important in today’s regulatory environment. Organizations must follow laws like GDPR and CCPA to collect, store, and use data correctly. Failure to do so can result in significant fines. Measuring the value of information is a complex task, but it’s essential for infonomics. Organizations need to consider the cost of obtaining and maintaining data. They should also assess the accuracy and rarity of the data. Additionally, they should determine the potential uses of the data. Businesses can find out how valuable their data is by asking the right questions and analyzing its quality, cost, and importance. One approach to measuring information value is to use a framework like the Information Valuation Method (IVM). This framework considers the cost of obtaining and maintaining data, its potential profit, and its significance to the organization. The framework evaluates the expenses associated with data acquisition and storage. It also evaluates the potential revenue that the data can generate. Additionally, it examines the importance of the data to the organization. Organizations can make better decisions about managing data by putting a price on information assets and understanding their value. The Benefits of Infonomics Implementing infonomics can bring numerous benefits to organizations. Unlike traditional assets, information is non-depletable and retains its value through repeated use. In fact, processing, analyzing, and applying information can increase its value. Another advantage of infonomics is that data assets are not subject to the same scrutiny as traditional assets. Businesses can manage and use their information more flexibly. This is because accounting standards and the IRS do not consider information as capital assets. This means businesses have more freedom in how they handle their information. Infonomics also enables immediate transactions. Digital data makes it easy and cheap for businesses to quickly send information to clients all over the world. This agility can help organizations disrupt conventional business models and gain a competitive edge. Implementing Infonomics in Your Organization To successfully implement infonomics in your organization, you need to start by recognizing information as a valuable asset. To do this, you need to change your thinking. Make sure you value data just as much as other business assets. Next, you need to establish a robust data management framework. This involves implementing policies, procedures, and technologies to ensure that data is secure, compliant, and accessible. You should also invest in data quality initiatives to ensure that your information is accurate, complete, and reliable. Finally, you need to develop a strategy for monetizing your information assets. This may involve using data to improve internal processes, enhance customer experiences, or create new products and services. By aligning your data strategy with your overall business objectives, you can maximize the value of your information assets. Challenges and Considerations While the benefits of infonomics are clear, implementing it in an organization is not without challenges. One of the biggest obstacles is the lack of standardized methods for valuing information assets. Unlike traditional assets, there are no widely accepted accounting principles for measuring the value of data. Another challenge is the need for a cultural shift within the organization. To see information as valuable, everyone from top executives to front-line workers must agree. Training and education may be needed to help people understand how data can be used to create value for businesses. Organizations must also consider the ethical implications of monetizing data. Businesses need to be clear about how they collect, use, and share data due to growing worries about privacy and security. They need to make sure they use data responsibly and ethically, while respecting people’s rights and preferences. The Future of Infonomics As data continues to grow in volume and importance, the concept of infonomics is likely to become even more relevant in the years ahead. As organizations see the importance of their information, they will focus more on managing, governing, and making money from data. One area where infonomics may have a significant impact is in the development of new business models. Organizations can use data as a strategic asset to develop new products and services that were previously impossible. This could lead to the emergence of companies that generate revenue primarily through the monetization of information. Another area where infonomics may play a role is in the valuation of companies. Data is becoming increasingly important for businesses. Investors and buyers may pay more attention to a company’s information assets when assessing its value. This could lead to changes in how companies are valued and how mergers and acquisitions are conducted. Infonomics is a powerful concept that can help organizations unlock the hidden value of their information assets. By treating information as a formal business asset, organizations can monetize, manage, and measure their data more effectively. This can lead to improved decision-making, increased efficiency, and competitive advantage. While implementing infonomics can be challenging, the benefits are clear. As data continues to grow in volume and importance, organizations that embrace infonomics will be better positioned to thrive in the digital economy. Businesses can use their data as a strategic asset to drive growth and innovation by recognizing its value. As we progress, it is evident that infonomics will become more crucial in how organizations function and compete. By staying ahead of the curve and embracing this new discipline, businesses can position themselves for success.
<urn:uuid:e372f59f-acd1-404f-b85a-7ac26f700eba>
CC-MAIN-2024-38
https://www.datasunrise.com/knowledge-center/infonomics/
2024-09-12T15:32:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00789.warc.gz
en
0.939901
1,438
3.03125
3
Best Practices What is DMARC? By MailChannels | 4 minute read If the COVID-19 pandemic taught us anything, it’s that having an online presence is more important than ever for businesses of all sizes. Yet as more businesses have gone online, the threat of cybersecurity issues like ransomware is compelling everyone to take steps to protect and secure their online presence. Most cyberattacks begin with email, and more often than not, email attacks involve some degree of brand impersonation. The attacker sends email from an address that looks familiar to the recipient, who then engages in a trust-based conversation, leading to eventual compromise. Domain-based Message Authentication, Reporting and Conformance (DMARC) is one way that domain owners can help to prevent many such attacks and has become a cornerstone of every organization’s online security policy. What is DMARC? DMARC allows domain owners (i.e. brand owners) to publish a security policy relating to the email that they send. In plain English, a DMARC policy tells email recipients what they should do when they encounter an email message that was not correctly digitally signed by the domain owner, or which was sent out of IP addresses not authorized to send for the domain. DMARC policies are published in the Domain Name System (DNS), a global database of domain name information that is accessible to everyone on the internet. Benefits of DMARC In addition to setting out your domain name’s email security policy, DMARC also allows domain owners to receive reports from email receivers that help the domain owner to track down security problems such as domain name spoofing. For example, if you specify a so-called forensic reporting address in your DMARC record, then receivers like Yahoo will tell you about all of the IP addresses that are sending improperly signed or originated messages using your domain name. These reports help to locate gaps in your email sending infrastructure, such as sending services that have been incorrectly configured to sign messages using your domain’s DKIM key. How to use DMARC Using DMARC is easy, but there are a couple of prerequisites that require a bit more work. First, you need to set up Domain Keys Identified Mail (DKIM) and Sender Policy Framework (SPF) records in the DNS to allow email receivers to authenticate that email messages appearing to be sent from your domain actually originate from your organization, rather than an imposter. Read more about how MailChannels support DKIM and SPF. Once you have set up DKIM and SPF, you need a DMARC record, which looks like this: v=DMARC1; p=100; rua=mailto:firstname.lastname@example.org; ruf=mailto:email@example.com; The typical elements of the DMARC record include: The version (v=DMARC1) – this string must be present at the start of your DMARC record to tell email receivers which version of the standard you are using. Today, only DMARC1 is valid, but in future we expect there could be a DMARC2, DMARC3, etc. The rejection percentage indicator (p=100) expresses what fraction of “unaligned” messages should be discarded or blocked by receivers. You can use this field to gradually roll out enforcement of your DMARC policy over time, starting with p=0 and then gradually increasing to p=100 as you plug all of the holes in your DKIM signing and SPF policy adherence. The reporting addresses (rua= and ruf=) set out optional email addresses to which you will be sent email statistics and forensic reports relating to the messages that receivers are seeing from your domain. These reports can be fed into a report analysis tool to provide valuable insights about ongoing spoofing of your domain and to locate places from which your domain’s email is being sent without a proper DKIM signature or from an unauthorized, yet legitimate IP address. Many excellent DMARC implementations exist as hosted services, automating the tricky bits and helping you to clean up your DKIM and SPF compliance incrementally to improve the delivery of your domain’s email. Summary To secure your brand’s reputation online and provide the maximum possible protection for your employees and customers against cyber attacks such as ransomware and phishing, it’s essential to deploy DMARC. Fortunately, many excellent service providers exist to help you roll out SPF and DKIM across the different channels through which your organization sends email, and DMARC itself can be automated as well. Gain insights about your email delivery and security with DMARC: it’s an unbeatable proposition.
<urn:uuid:28ae42ae-ff55-4ecb-84bd-6e5a9e12ef55>
CC-MAIN-2024-38
https://blog.mailchannels.com/what-is-dmarc/
2024-09-16T09:03:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00489.warc.gz
en
0.935879
949
2.75
3
Are you using social media platforms on daily basis then you must be aware of its dark side impacts on your social life? Let’s talk and rethink about your personal safety before using these interactive internet based applications. As people use it to connect with far-flung friends and family members to send speedy messages to colleagues and can broadcast major and minor actions in your lives. This is also a basic platform for many businesses to collaborate or share information for instance; individuals can discuss an assignment with co-workers by means of a various social media messaging session. With advancement employers and schools are more and more using social media to reach out to possible employees and students as well. According to the Bureau of Justice Statistics, more than 16 million US residents became victims of identity theft in 2012 alone. Keeping your passwords, financial, and other personal information safe and protected from outside intruders has long been a priority of businesses, but it’s increasingly critical for consumers and individuals to pay attention to data protection advice and use various top-notch practices to keep your personal information secure and protected. There is loads of information out on the internet there for consumers, families, and persons on protecting their bank credentials, sufficiently protecting desktop and laptop from hackers, malware, and other threats, and most excellent practices for using the Internet safely for personal safety. Protecting your individual information via various cybersecurity training programs can help lessen your menace of individuality theft. Various conducts to secure your data: - Maintain proper security on your PC, Mobiles and other electronic devices to secure your social life. - Systematically store and arrange your personal information securely - Ask questions before deciding to share your individual information Practices for personal safety and Keeping Your Devices Secure - Install good quality anti-virus software, anti-spyware software, and a firewall. - Don’t open files which are not known to you or download programs sent by outsiders. - Before sending personal information over your laptop on a public wireless check if your information will be protected. - Keep financial information on your laptop only if necessary. - Don’t use an automatic login feature on your bank accounts and emails that save your username and password. - Delete mail that contains identifying information or account numbers or invalid transaction. There is so much information that may make you get confused, principally if you’re not a tech-savvy. In order to mend these issues, you can adopt various straightforward best practices and tips for protecting your devices from threats or simply consult us via [email protected] and get various security tips to secure your social platforms. Content Marketer and Team Leader
<urn:uuid:e39ffa14-cf30-43ae-85e7-a55382a8b134>
CC-MAIN-2024-38
https://kratikal.com/blog/digital-life-risk-lets-know/
2024-09-18T22:21:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00289.warc.gz
en
0.913657
540
3.125
3
It can be challenging to keep your data sufficiently secure online in a constantly changing digital landscape. Hackers are continually improving their techniques, and a username and password aren’t enough to protect your data. This is partly our own fault – as much as we know the importance of complex, unique usernames and passwords many people are still using the same credentials on every website because they’re easy to remember. Unfortunately, they’re also very easy for bots to crack. Password is not an acceptable password! If you’re looking to improve cybersecurity within your organisation, it may be time to move over to multi-factor authentication. What is multi-factor authentication? Multi-factor authentication (also known as MFA) is an electronic authentication method in which a user is granted access to a website or application only after successfully presenting two or more pieces of evidence to an authentication mechanism. In addition to your password, you might need a randomly generated PIN sent by SMS, to click a link sent to your nominated email address, or to input random characters from a chosen passcode. This technology is widely used online by Google and financial institutions. It’s also possible to use biomechanical data to secure your accounts, such as facial recognition and fingerprint scanning such as Face ID or Touch ID with iPhones Essentially, multi-factor authentication is the equivalent of adding an extra lock to your door. Usernames and passwords do a great job, but they are vulnerable to brute force (thousands of automated guesses of your password) attacks. Locking an account after a certain number of incorrect login attempts can help protect an organisation, but hackers have numerous other methods for system access. Extra layers of authentication can stop hackers in their tracks. What information can be used for multi-factor authentication? The credentials used for MFA fall roughly into three categories: These are unique pieces of information that only the user is likely to know. The most common one is a password, which can be made stronger with the inclusion of more characters and not just alpha-numerics. Many browsers can automatically generate passwords for you on sign-up. The answers to personal questions, such as ‘where were you born?’ are less secure, as these answers can be researched or guessed. Other questions have a limited number of likely answers, such as ‘what is your favourite colour?’. This is a piece of information that only the user would have access to. It could be a time-restricted passcode that is sent via SMS to the user’s phone or a single-use link emailed to you. Fingerprints, iris scans, voice recognition and facial recognition are all unique to the user. Location-based authentication involves verifying an individual’s identity by detecting its presence at a distinct location. A network might expect to see you log on from your UK office, but would flag up a security alert if you were attempting to log on from the other side of the world. Does your company need multi-factor authentication? The answer is almost certainly yes. If your company has computers connected to the internet, you could potentially be a victim of a cyber attack. Small businesses are collectively subject to almost 10,000 cyber-attacks a day, and the cost of these attacks can be devastating. It’s incredibly difficult for hackers to replicate MFA methods like retina scans or fingerprints, making your data considerably more secure. Each additional layer of authentication makes your data more secure, but it also ‘makes more work’ for users. The harder it is for cybercriminals to breach your business, the less likely they are to succeed. The key is achieving a balance between security and usability. MFA is also a requirement of the Cyber Essentials certification. You must utilise multi-factor authentication on any accounts that connect to cloud services. Introducing additional security factors to your organisation for Cyber Essentials Cyber Essentials is one of the UK’s most accessible frameworks for cybersecurity for businesses of all sizes. Although it can appear overwhelming initially, it’s cost-effective to roll out and can provide real-world security improvements. We provide a complete service and hand-holding help at every step of the Cyber Essentials certification and Cyber Essentials Plus process to ensure that you pass the first time.
<urn:uuid:ee53d2f5-0345-4a53-8c01-0342b074c16c>
CC-MAIN-2024-38
https://forensiccontrol.com/guides/what-is-multi-factor-authentication/
2024-09-08T01:06:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00389.warc.gz
en
0.951764
895
3.4375
3
By Faith Taylor, Global Sustainability, ESG and Social Impact Officer at Kyndryl Technology is essential to accelerating sustainability efforts. But to maximize technology’s impact on net zero goals, businesses first need a strong data foundation. According to the Global Sustainability Barometer, a study released by Kyndryl in collaboration with Microsoft, many sustainability and technology leaders find it challenging to integrate their sustainability and data strategies. And only 37% of organizations say they are making full use of technology to achieve their sustainability goals — a missed opportunity when artificial intelligence has the potential to fast-track sustainability efforts. No matter the maturity of their sustainability programs, businesses can overcome these hurdles and start to directly reduce their environmental footprints by building a data-centric culture: a comprehensive, connected and strategic approach to managing sustainability data. This requires collaboration between finance, IT and sustainability teams, according to the Sustainability Barometer. Focusing on data will equip enterprises to navigate increasing regulatory requirements, meet evolving stakeholder demands and seize opportunities to turn sustainability into a catalyst for growth. A solid data foundation can also help businesses reduce their emissions by informing decisions that optimize resources and cut back on carbon outputs. With the right data strategies and trusted technology partners in place, businesses can turn sustainability from a missed opportunity into a driver of business success. Here are five steps sustainability and technology leaders can take today to build data-centric cultures that shrink their environmental footprints:
<urn:uuid:3ff864e1-3898-453b-8369-f7a28841d0e2>
CC-MAIN-2024-38
https://www.kyndryl.com/ca/fr/about-us/news/2024/03/5-data-driven-sustainability-strategies
2024-09-09T06:14:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00289.warc.gz
en
0.940291
296
2.515625
3
In a three-table join, Oracle joins two of the tables and joins the result with the third table. When the query in the following listing is executed, the EMP, DEPT, and ORDERS tables are joined together, as illustrated in Table 1. Table 1. A three-table join Which table is the driving table in a query? People often give different answers, depending on the query that accesses the PLAN_TABLE. This query would drive with the EMP table accessed first, the DEPT table accessed second, and the ORDERS table accessed third (there are always exceptions to the rule). This next listing shows a query that has only one possible way to be accessed (the subqueries must be accessed first) and a query to the PLAN_TABLE that will be used for the remainder of this article. This listing is provided to ensure that you understand how to read the output effectively. The following listing is a quick and simple EXPLAIN PLAN query (given the PLAN_TABLE is empty). Throughout this article, I show many of these, but I also show the output using Autotrace (SET AUTOTRACE ON) and timing (SET TIMING ON). EXPLAIN PLAN Output Next, you can see abbreviated EXPLAIN PLAN output. The order of access is PRODUCT_INFORMATION, ORDER_LINES, and CUSTOMERS. The innermost subquery (to the PRODUCT_INFORMATION table) must execute first so it can return the PRODUCT_ID to be used in the ORDER_LINES table (accessed second), which returns the CUSTOMER_ID that the CUSTOMERS table (accessed third) needs. To ensure that you are reading your EXPLAIN PLAN correctly, run a query in which you are sure of the driving table (with nested subqueries). One exception to the previous subquery is shown here: EXPLAIN PLAN Output The expected order of table access is based on the order in the FROM clause: PRODUCT_INFORMATION, ORDER_LINES, and CUSTOMERS. The actual order of access is ORDER_LINES, PRODUCT_INFORMATION, and CUSTOMERS. The ORDER_LINES query takes the PRODUCT_ID from the subquery to the PRODUCT_INFORMATION table and executes first (Oracle is very efficient).
<urn:uuid:7865cc93-508f-41cd-9a9e-9f53de6a255b>
CC-MAIN-2024-38
https://logicalread.com/oracle-11g-three-table-joins-mc02/
2024-09-14T00:13:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00789.warc.gz
en
0.90776
486
2.9375
3
In my previous blog, “Are chimpanzees more intelligent than computers?,” I established some thoughts about what it all means, about the difference between artificial intelligence (AI) and human or general intelligence. In this second blog, I will instead look at the hype itself. It’s important to explain why AI is suddenly such a hot topic. After all, it’s been around since the fifties, with John McCarthy being considered the father of AI. Over the past 60 years, there have been various waves of AI acceleration and hype, followed by disappointment. But, some AI technologies and methods have survived and proved their usefulness. Firstly, after the Dartmouth Workshop of 1956, an era of new discoveries emerged. Influential programs focusing on reasoning by search (deducing the best result by trying out all combinations and backtracking if it doesn’t work) and natural language led to high expectations. H.A. Simon predicted in 1965 that machines would be capable of doing any work that a person can do. Marvin Minsky said in 1970 that within three to eight years a machine would exist that has the general intelligence of an average human being. Well, needless to say, it didn’t exactly pan out that way. Limitations in computing power, an exponential increase in the time required to resolve real issues (combinatorial explosion) and the lack of good-quality data led to a lack of progress in the field. Funding became scarcer as frustrations rose. It took another 10 years or so to see the next wave of AI hype. In the eighties, funding returned, not in the least due to the rise of expert systems. These expert systems were programs that tried to answer questions or solve problems using logical rules that are derived from experts. With the belief that intelligence might be based on the ability to use large amounts of knowledge (data), knowledge-based systems and knowledge engineering became a focus within AI research. Again, it didn’t work out. For instance, the attempt to replace doctors with expert systems underestimated the complex process of examination, decision-making, testing and pragmatic interventions that occurs when a doctor sees a patient. As the expert systems failed to deliver on the inflated expectations, funding for them dried up. The most recent and biggest wave of acceleration and subsequent hype has happened in the last 15 years. The availability of big data (the more data an AI has available to learn from, the ‘smarter’ it gets) and cloud (hugely scalable), as well as edge computing power, have created the circumstances needed for this latest wave of hype. Breakthroughs in Deep Learning (new ways of training neural networks) have been key in teaching machines to ‘think’ more in the way we do. Neural networks are intended to classify information in a way that is similar to how the brain works. It classifies based on characteristics of the input, which is particularly useful for applications such as image recognition where, for example, it could classify the make and model of a car based on shape and size. Using these advances, companies have developed useful applications that actually work. For instance, H&M is using a chat-bot that through natural language processing can interact with the user and can recommend clothes that the user would be interested in buying. Although most of these use cases are very specific and cannot be considered general intelligent, this has nevertheless generated another round of hype in the market: that machines can learn everything and that this time around, machines really will become general intelligent, intelligent like human beings. This hype is being enforced by a multitude of organizations and persons preaching that AI will reach human-like intelligence in the foreseeable future. As with the previous waves of hype, whether this is true or not remains to be seen. But, unlike the previous waves, this time AI at least is able to demonstrate business value (e.g. autonomous vehicles, web shop recommendations, medical diagnosis or planning optimization cases). With continuing high levels of investment in AI, it looks like we’ll see a continuation of development in this field for the foreseeable future. Learn more about IFS Labs at ifs.com. Do you have questions or comments about AI? We’d love to hear them so please leave us a message below.
<urn:uuid:b8c03d29-9655-4026-b303-be92f81a918f>
CC-MAIN-2024-38
https://blog.ifs.com/2018/03/why-the-hype-around-ai/
2024-09-15T07:19:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00689.warc.gz
en
0.96029
880
3.03125
3
Artificial intelligence (AI) is everywhere, including our homes and mobiles (Alexa & Siri). AI has transformed practically every industry, including sales. AI is rapidly being used by enterprises to achieve a competitive advantage. When you open YouTube, you'll notice AI suggestions as to what to watch. If you are looking for a product and it appears unexpectedly on your social media page, that's also AI. (Speaking of YouTube, learn how YouTube uses AI?) Every day, AI influences our purchasing decisions, from the movies we watch to the route our drivers travel to the next thing we shop online. Businesses that prepare for and welcome the changes coming on by AI will flourish. Those who do not will not. That's all there is to it. But what about those that utilize AI in sales? That's an excellent question! Sales have always been a people-to-people activity, but technology such as AI is pushing professional salespeople to reconsider the mix of the customer, marketing, and machine. Indeed, automation is already having an impact on commerce, and its influence will only rise in the future. Leaders seeking methods to improve their profit margins could look to AI for help. The best part is that you can start planning for such adjustments right now. So, how is AI impacting sales and what are its roles in it? Let’s find out. AI in Sales AI is a broad phrase that encompasses a variety of technologies such as machine learning, object tracking, natural language processing, supervised learning, and others. At its foundation, however, all of these innovations assist robots in doing certain complex processes on a level with or better than mankind. For example, AI-powered object detection algorithms in self-driving vehicles can detect barriers in the same way that humans do, enabling the machine to seize the wheel. Using AI, your favourite voice assistant, such as Alexa or Siri, hears your speech and answers in like. While there are several complexities to various forms of AI, all you'll need to know right now is that the term "artificial intelligence" refers to a wide range of advanced technology. Several of these techniques can have a significant influence on your sales career and productivity. This is because AI is more than just automation, however, it may incorporate components of automation technologies. Artificial intelligence (AI) tools take things a step further. Large datasets are analysed using these tools. These discoveries may then be utilised to produce forecasts, suggestions, and judgments. This sort of AI, known as "machine learning," is at the heart of the most outstanding sales skills. Benefits of AI in Sales and Marketing Sales are facilitated by AI without the need for a human salesman to be replaced. It improves the sales team's overall effectiveness by delivering curated data and mechanisation tools. It also aids sales representatives in better decision making and prospecting. Role of AI technology in boosting sales AI provides unique information and insight, allowing salespeople to focus on what is important most: solving client issues. AI improves lead volume, closure rate, and entire shopping experiences by using various sales processes. Here's an in-depth glimpse at how AI may boost a company's overall sales: It takes the opportunity to meet with every bit of information. It also takes time to evaluate leads and follow up with them to create and maintain a commercial connection. When you use AI for sales, the algorithms take over those responsibilities and give quick results. As a result, leads grow because AI enables them to approach particular and targeted leads. According to a McKinsey study published in the Harvard Business Review, the sales force that has implemented AI have boosted leads and engagements by around 50%. According to the study, AI has taken over the time-consuming responsibilities of connecting with leads, certifying, following up, and maintaining the connection in those firms. Improves Forecasting Accuracy Sales managers are frequently faced with the issue of estimating quarterly sales figures. And why shouldn't they? It affects all aspects of performance, payroll, and hiring. Furthermore, a single false projection may cause a company's credibility to suffer. AI assists sales managers in more accurately forecasting revenue for the following quarter. This enables the sales team to better plan for forthcoming issues. (Since we mentioned revenue, check out Revenue Deficit) In a word, management positions are becoming less analytical while remaining critical. There are several situations in which AI does not fit. Some of the essential areas on which leaders should concentrate their efforts are as follows: - Create a positive and thriving workplace culture. - Develop connections with customers, colleagues, and businesses. - Establish values and ethics to assist sales representatives in their responsibilities. Builds Customer Involvement A strong business is built on the loyalty and commitment of its consumers. These two characteristics are the bedrock of client relationships. It is difficult for sales representatives to keep consumers interested in the absence of backup information. One squandered opportunity increases the economic cost. Additionally, AI in sales can rank and highlight the most probable prospects for the company. It can also offer the information needed to follow up with them. According to a study, sales professionals may save up to 40% of their time spent gathering data. As a result, salespeople may focus on prospects with a better possibility of closing and increasing income. Chatbots are one example of artificial intelligence in sales. You should be familiar with chatbots that may be found on a company's website. That's AI in action: directing a prospective buyer to the marketing funnel. Alongside prediction and prioritising, some AI technologies may actively advocate sales activities, even telling sales staff which actions the machine believes make the most impact based on your targets and data analysis. These suggestions might include how to price a sale, who to approach next, or which consumers to target first with upsells or cross-sells. As a consequence, salespeople may free up bandwidth to complete transactions rather than ruminating about what to do next, thanks to focused advice on what activities to take. Makes Pricing less difficult to optimise. The salespeople can be steered by AI algorithms to the optimal price discount rate for each proposition, allowing them to win the transaction without losing money. AI examines particular aspects of previously won and lost sales, such as deal size, alignment with product requirements, number of rivals, client's capacity to spend, region, timing, and influencers. The AI can then provide information on appropriate pricing. Improves Best practices AI assists sales firms in delving into the methodologies, techniques, and time management methods of their best salespeople as well as their lower-performing sellers for comparability. Then, sales managers may share their thoughts and best practices with the rest of the team. This expertise also assists managers in selecting new team members with similar talents to quota performers. Simplifies upselling and cross-selling Marketing to your current client base boosts your sales rate and product sales. Cross-selling and up-selling are effective because they are cost-effective strategies to increase top-line income. The problem is determining who to sell to. You may either spend a lot of money marketing to people who don't purchase or use AI in B2B sales to classify clients who will accept an upgrade (upsell) or are likely to purchase a certain product from you (cross-sell). (Speaking of B2B, you can also take a look at our blog on B2B Marketing) You will gain vital insights on clients who are willing to sign up for more or superior services/products this way. Finally, you will see a decrease in marketing expenditures and an increase in sales income. Management is less analytical yet continues to be vital As artificial intelligence becomes more capable of gathering and analysing performance data, recommending solutions, and making daily data-driven choices, sales managers' jobs will evolve. But the point to admit is that the sales leaders will always play essential functions that AI cannot. Leaders will continue to: - Develop a work environment - Cultivate ties with coworkers and customers - Hire salespeople who are a good match, and serve as morality and ethics guideposts for salespeople and the marketing program. Future of AI in Sales AI in sales refers to the application of current technology to assist robots in doing cognitive tasks in the same way as people do, if not more effectively. There are several clever AI for sales techniques that have a direct influence on your sales profession. When you use AI for sales effectively, your performance rises significantly. The approaches used by AI for sales anticipate which leads are also most inclined to buy from you. AI will assist your sales firm in tracking sales success and optimising products and deals depending on market trends. All of these AI-assisted sales professionals will be able to return to what they do best: selling, which they presently expend just 34% of their energy on, according to Salesforce. With so many advantages to AI applications, salespeople should consider how to utilise innovation rather than whether or not to use it. Innovative companies will begin to embrace AI now.
<urn:uuid:19f8236d-ff77-4a2f-a50c-287ed3a75183>
CC-MAIN-2024-38
https://www.analyticssteps.com/blogs/8-benefits-ai-sales
2024-09-16T13:25:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00589.warc.gz
en
0.958864
1,875
2.671875
3
Safeguarding EV Charging Stations: The Power of Cybersecurity Partnerships The rise of electric vehicles (EVs) has been accompanied by the rapid development of charging infrastructure worldwide. As the number of EV charging stations increases, so does the importance of addressing cyber security vulnerabilities. A recent article on Wired.comshed light on the potential risks faced by EV charging stations, highlighting the need for robust cyber security measures. Understanding the Risks The Wired.com article presented a concerning incident where hackers exploited vulnerabilities in an EV charging station's network, gaining unauthorized access and potentially compromising user data. Such attacks not only pose a threat to the charging infrastructure but also raise concerns about user privacy and the security of personal information. The Role of Cybersecurity Partnerships To combat the increasing sophistication of cyber-attacks, it is crucial for EV charging station operators and managers to proactively engage with cyber security firms. By partnering with experts in the field, these operators can assess and mitigate the risks associated with their charging infrastructure effectively. Here's how such partnerships can benefit building owners with EV chargers: Risk Assessment: Cyber security firms can conduct comprehensive risk assessments of EV charging stations, identifying potential vulnerabilities in the network infrastructure and the associated systems. This evaluation encompasses network architecture, data storage, user authentication processes, and encryption protocols. By uncovering these vulnerabilities, the cyber security firm can provide valuable insights into potential attack vectors and recommend effective mitigation strategies. Vulnerability Mitigation: After identifying vulnerabilities, the cyber security firm can work closely with charging station operators to implement robust security measures. This may include installing firewalls, intrusion detection and prevention systems, secure authentication protocols, and encryption mechanisms. By fortifying the charging station's network and systems, the risk of unauthorized access and data breaches can be significantly reduced. Ongoing Monitoring and Management: Cyber security partnerships offer continuous monitoring and management services to ensure the charging station's network remains secure over time. This involves real-time threat detection, proactive vulnerability patching, and incident response protocols. By having a dedicated team of experts constantly monitoring the network, any potential cyber threats can be identified and neutralized before they cause substantial damage. User Awareness and Education: Alongside technical measures, cyber security firms can also assist in educating charging station users about best practices for secure EV charging. This may include guidance on avoiding suspicious charging stations, recognizing phishing attempts, and protecting personal data. By promoting user awareness, the overall cyber security posture of the EV charging ecosystem can be strengthened. As the demand for electric vehicles and associated charging infrastructure grows, so does the urgency to address cyber security risks. By partnering with cyber security firms like 5Q, charging station operators can proactively assess, mitigate, monitor, and manage network security risks. Through a comprehensive approach that includes risk assessments, vulnerability mitigation, ongoing monitoring, and user education, the EV charging industry can protect both its infrastructure and the privacy of its users. To learn more about 5Q's cyber security offerings visit 5qcyber.com. You can also reach out to us to schedule a FREE consultation call at firstname.lastname@example.org.
<urn:uuid:de7f9053-2cb9-422b-aecd-a4c5dc731ba2>
CC-MAIN-2024-38
https://www.5qpartners.com/post/safeguarding-ev-charging-stations-the-power-of-cybersecurity-partnerships
2024-09-08T02:20:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00489.warc.gz
en
0.92733
639
2.515625
3
The mention of Industry 4.0 brings to our mind some buzzwords that promise to revolutionize entire systems and usher in a new era. Among these buzzwords is machine learning, a subset of artificial intelligence that allows computers to learn from data sets and get better with time. The idea behind this is to find ways to use the vast amounts of data that companies have at their disposal to enhance efficiency and productivity. One major practical application of the technology is in a field known as anomaly detection. What is Anomaly Detection? Anomaly detection denotes the discovery of a rare exception to a given rule in terms of data patterns. Basically, it refers to the identification of data points that differ significantly from a majority of other data in a given set. These data points raise suspicions as they do not conform to the pattern expected of a group of data. What makes them particularly interesting is the fact that they often signify rare yet noteworthy events such as fraud or cyber intrusions. To illustrate, let’s assume you have a neighbor who takes her dog out for a walk every afternoon at 2pm. For more than a year of living next door to her, she has never once failed to take her dog out for the afternoon walk. If one day she does not come out at that time, you could have reason to conclude that something out of the ordinary has happened. That exception would qualify as an anomaly, raising questions that could prompt further research. Why Do Companies Need Anomaly Detection? Thanks to the advent of the internet, there is more data available to companies now more than ever before. But with the increase in data, there has also been an increase in security threats to business such as cyber attacks. In simple, straightforward situations, it is possible to separate anomalous data from normal data by use of data visualization. However, as one scales up to higher numbers of variables, the exercise becomes more and more complicated. Manual thresholds in such cases do not offer a viable, scalable solution to anomaly detection. This is where anomaly detection algorithms in machine learning come in. Machine learning helps companies manage the vast amounts of data at their disposal as well as analyze transactions in real time. By identifying differences between data points, anomaly detection opens up interesting opportunities for companies. On one hand, it minimizes potential risks for business operators while on the other, it maximizes revenue potential. Moreover, it can help companies adapt to changing conditions rapidly. How Anomaly Detection Algorithms in Machine Learning Work Machine learning algorithms for anomaly detection process data points one at a time. As such, it constantly defines and redefines ‘business as usual’ using statistical tests to check available data. During processing, a number of events could take place: - The system creates a model based on data patterns. - Using this model, it predicts the value expected from the next data point in the sequence. - If there is a significant difference between the prediction and actual data point, the data point gets flagged as a potential anomaly - As the system flags potential anomalies, its algorithm digs deeper to establish relationships between available metrics. Using this information, it filters results down to a smaller number of actual anomalies. Techniques of Anomaly Detection Machine learning algorithms for anomaly detection make use of techniques that offer an efficient alternative to traditional approaches. Let us examine two main techniques: Supervised Machine Learning Anomaly Detection To use this approach, you need a labeled training data set containing both normal and anomalous samples. This set will facilitate the creation of a predictive model. In theory, supervised approaches are said to offer better performance than unsupervised models. Some of the popular models include: - K-nearest neighbor (k-NN) - Support vector machine learning - Decision trees - Bayesian networks Unsupervised Machine Learning Anomaly Detection Unlike the previous approach, the unsupervised model does not require data for training. Rather, its design is based on two main assumptions. For starters, it assumes that a majority of the data is normal and only a small percentage is anomalous. And second, it assumes that malicious traffic will bear statistical differences from normal traffic. On the basis of these assumptions, it classifies the groups of data that occur more frequently as normal and the rest as abnormal. Some of the popular models used for this approach include: - Self-organizing maps - Adaptive resonance theory - Expectation-maximization meta-algorithm Practical Applications of Anomaly Detection - Network Intrusion Detection Anomaly detection is used to identify network intrusions as well as misuse. It does this by monitoring activity on a given system and classifying it as normal or anomalous. By flagging deviations from the norm as malicious activity, it allows for the creation of a defense line. - Monitoring Machine Health It can also be used to keep track of machine health and send notifications when the behavior of components deviates from the norm. For highly interconnected production systems, it is especially hard to get information about machine status. But using this automated approach gives the user hints to help in identifying faults. This approach can effectively support predictive maintenance by sending information of abnormal activity that precedes component failure. - Fraud Detection Fraudsters are ever adapting their techniques to remain a step ahead of security experts. As such, traditional fraud detection models are often reactive rather than predictive. But machine learning applications can capture common fraud patterns as well as new ones. This often uses unsupervised approaches, flagging potential fraudsters. In turn, it allows fraud busters to train their focus on these high-risk scenarios rather than carrying out random checks. Benefits of Using Machine Learning Applications for Anomaly Detection The use of machine learning applications for anomaly detection offers countless benefits to companies. Consider some of their top benefits for business: - Real-Time Insights Having the backend process of anomaly detection automated using machine learning algorithms means getting access to insights in real time. With this information, it becomes possible to address anomalies immediately. And for situations that do not require an immediate response, you get the chance to prioritize your next steps. Traditional anomaly detection processes involve a significant amount of guesswork, which is not always correct. But thanks to the use of machine learning models, one gets accurate insights using a less complex process. Business users thus get access to opportunities that are far beyond human capabilities. - Scalable Solution It is highly unlikely that data generation would slow down in the foreseeable future. If anything, it can only keep increasing. Using automated algorithms is the only viable way for a business to handle the infinite number of data points accessed. And they are important tools that equip entrepreneurs with what they need to keep pace with the demands of business. - Full Business Automation Armed with the necessary tools for automated anomaly detection, a business user will not only speed up response times. They will also have the opportunity to analyze the business as a whole. With an advanced system you can go deeper, analyzing relationships between patterns that span across the organization’s functions. With this, one can get the deep insights needed to optimize performance. Fostering a Proactive Business Approach with Machine Learning Anomaly Detection The use of machine learning in anomaly detection holds vast potential for business operators. Every entrepreneur could benefit from getting real-time insights on abnormal activities so as to act accordingly to avoid risks and optimize benefits. Having such insights on hand would also enhance the efficiency of your digital initiatives which are otherwise prone to cybersecurity risks. A combination of the above benefits in particular would serve to foster a more proactive approach in enterprise and gradually increase efficiency over time. Notably, it offers a solution that scales with your business to offer optimal performance at all stages of growth. - Microsoft Machine Learning Server in Agriculture: How the Fourth Industrial Revolution is Driving the Second Green Revolution - April 15, 2021 - Reimagining the Digitalization Process with Robotic Process Automation Software - March 30, 2021 - Machine Learning for Content Filtering – Winning the Battle against Harassment and Trolling - January 30, 2020
<urn:uuid:1b12e33e-9922-4fed-86da-54a635492201>
CC-MAIN-2024-38
https://www.cognillo.com/blog/anomaly-detection/
2024-09-08T02:35:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00489.warc.gz
en
0.936747
1,652
2.84375
3
Of the highly-visible hacks and data breaches over the past year, a large number of them were related to criminal hackers cracking weak Web passwords. This is arguably the most common Web flaw and something that anyone can exploit at any time. The bad guys don’t want you know this, but “mad hacker skillz”are not required. Wreaking havoc on websites hosting everything from blogs to e-commerce applications to Microsoft Outlook webmail is pretty darned simple if weak passwords are present. When it comes to Web password-related flaws, it’s clear to me that people: - Are guilty of creating/using extremely weak Web passwords – this includes all of us – from non-techies to information security professionals. These findings of 10,000 leaked Hotmail passwords say it all. - Re-use their Web passwords on numerous other business and personal-related websites. - Don’t think about the long-term consequences of choosing weak Web passwords. - Haven’t taken the time to understand what makes a good Web password. So, what can you do to protect yourself, your website(s) and your users? In a nutshell: require passphrases that are simple to remember yet difficult to crack. Note I said passphrases and not passwords. There’s a difference. Passwords are often either too short and easy to crack or too complex and impossible to remember. Both are bad for business. Passphrases take the concept (and benefits) of strong passwords and turn them into something that the user can actually remember. Here are some examples: Again, the key is to create passphrases that are relatively simple to remember yet don’t exist in a dictionary and would be next to impossible for someone guess. Strong passwords don’t equal complete security. There will always be Web vulnerabilities such as SQL injection, missing patches, rogue Web administrators and misconfigured Web sites/applications facilitating Web login attacks that can lead to password exposure. But why not do your part to minimize the risk. It’s much better than being a part of the problem. Get the latest content on web security in your inbox each week.
<urn:uuid:cb52f837-2d64-4d77-96c1-f2162076a6b0>
CC-MAIN-2024-38
https://www.acunetix.com/blog/uncategorized/web-passwords-are-the-weakest-link/
2024-09-09T09:46:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00389.warc.gz
en
0.923352
462
2.609375
3
Copyright and trademark infringement used to be easier to identify. Movie studios, book publishers and record companies knew who was manufacturing products for the consumer market, how many were produced, where they were shipped, and how many were returned unsold. Piracy happened, and it could be profitable. However, pirates had to make a financial investment to physically copy books, movies, or recordings. The internet made it much easier and more lucrative to get around copyright laws. It costs almost nothing to copy and distribute digital work, and millions of copies can be in the hands of consumers within minutes. Usually, all that is required is one or two clicks. Obviously, the companies that had paid the expenses to produce the work were losing money. Authors and others who were compensated through royalty programs were also losing money. One step that the federal government took was to pass the Digital Millennium Copyright Act, usually abbreviated as the DMCA. Although the DMCA offered a great deal of support to ISPs, it also provided some support for the legitimate holders of the copyrights. ISP Liability for Copyright Infringement - If the ISP knowingly hosts copyrighted items and directly benefits from it financially, the ISP could be guilty of direct infringement. - If the ISP knows about the infringing activity and materially contributes to or assists with the infringement, it could be guilty of contributory infringement. - If the ISP receives a financial benefit from the infringement and has the ability and right to control its users, the ISP could be guilty of vicarious liability. Most cases fall under the heading of contributory infringement, and cases of direct infringement are rare. Vicarious infringement can be difficult to prove because plaintiffs must prove the ISP’s ability and right to control its users. However, some ISPs have been tripped up because their terms of service specifically state that they can terminate a user’s account for a list of violations, including copyright infringement. Safe Harbor for ISPs Under the DMCA - The ISP has no direct knowledge of or substantial reason to suspect an infringement. - The ISP is not receiving a financial benefit from the infringement. - The ISP complies with any takedown notice provisions related to the removal of the copyrighted material. - The ISP must have a designated agent for handling complaints of copyright infringement. However, several recent lawsuits have revealed that the waters of the safe harbor have become more treacherous. It is no longer enough to simply go through the motions of complying with the DMCA. Instead, ISPs need to make sure that they are following the DMCA rules carefully. Three Ways to Help Prevent Being Held Liable for Copyright and Trademark Infringement It would be naive to assume that none of your users would ever violate intellectual property laws. If you want to protect yourself, there are three ways to shield yourself against potential liability. - The first step is one that ISPs often overlook. You must designate an agent to receive claims of copyright infringement if you want to benefit from the safe harbor provisions of the DMCA. This will require registering for an account with the United States Copyright Office and submitting an Interim Designation form or its equivalent. You will need to pay a small fee, and you will need to post the agent’s contact information on your website so that the public can find it. - Recent lawsuits have proven that ISPs must take action when they receive a valid notice of infringement. Once you have verified that the notice substantially complies with the formal requirements detailed above in the second point, you must expeditiously disable access to or remove the material. You must notify your subscriber or user of the removal so that he or she can file a counter-notice. If you receive a valid counter-notice, you should notify the copyright owner and include a copy. Digital piracy is a problem that is not going to disappear in the immediate future. As copyright holders become even more aggressive in pursuing cases of infringement, ISPs must ensure that they comply with all provision of the DMCA to qualify for the safe harbor provisions. The best way to do this is through automation. AbuseHQ can help increase your efficiency while reducing your costs. A recent study found that at most ISPs without AbuseHQ, the average abuse department had up to 10 people who only managed to process less than 15% of the reports received. After going live with AbuseHQ, 100% of the reports were being processed within weeks of the implementation, and staffing was reduced to just one or two people. Furthermore, AbuseHQ can help ISPs prove that they have done everything possible to comply with the DMCA should the ISP find itself in a court case.
<urn:uuid:2536c9c1-f9ed-4e88-9e99-9b4f805edeaa>
CC-MAIN-2024-38
https://abusix.com/blog/uncategorised/how-following-the-dmca-rules-helps-you-protect-yourself/
2024-09-11T19:20:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651400.96/warc/CC-MAIN-20240911183926-20240911213926-00189.warc.gz
en
0.966707
932
2.640625
3
- What is M-Files? - What is Metadata? - Intro to the M-Files Interface - Accessing M-Files - M-Files Terminology - Saving Documents with Metadata - Introduction to Metadata Cards - How to Save Documents - Finding Information with Metadata - How to Use Quick Search - Organizing with Views - How to Use Views - How to Use the Pinned Tab - The M-Files Way to Collaborate - How to Modify Documents - How to Share Documents - Grouping Information - Creating Views in M-Files - Search Options - How To Create Document Collections - How to Create Multi-File Documents - How To Create Relationships Between Objects Tips and Tricks - How to Create and Complete Assignments - How to Create Notifications in M-Files - How to Use and Create Document Templates - How To Use Offline Mode in M-Files - How to Change the Default Check-In Functionality - How to Use Workflows in M-Files - Permissions in M-Files - How to Convert Documents to PDF Format - How to Avoid Creating Duplicate Content in M-Files How to Convert Documents to PDF Format There may be several reasons why we would want to convert our documents into PDF format. Not only is the PDF good for preventing edits, but it also ensures that the document looks as it should no matter what system your recipient uses. Below you can find information about the different file formats that can be converted to PDF, as well as a list of ways you can convert your documents to PDF. File formats that can be converted Conversion to PDF can be done for files in formats such as Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Visio, as well as RTF and OpenOffice files. When converting to PDF, M-Files updates the M-Files property fields, if any, in Microsoft Word and Microsoft Excel documents by using the current metadata of the object. To access the PDF conversion options, right-click the document you want to convert and select Save as PDF. You have a few options when converting a document to PDF in M-Files: - Save as PDF: This option lets you save your document on your computer, a shared drive, or some other system. - Convert to PDF (replaces original file): This option replaces your document with a PDF. You can use this option if your document is meant to be the final version. If you still need to access the original document, it can be found via the History dialog. - Convert to PDF (adds separate file): This option converts the file into a multi-file document that contains your original document and the PDF version. - Send as PDF by E-mail: This option is good if edits are still needed. You get to keep the editable version in M-Files, and the PDF is directly attached to an email. Save as PDF This is good for situations where you don’t necessarily need to send the PDF just yet but want to store it somewhere on your computer, on a shared drive, or some other system. Replace the Original File with a PDF If you don’t need the original, editable document anymore and want to convert it, this is your go-to option. Use this option if the PDF is meant to be the final version and no edits are expected. Should you still need the original document, you can access it in the version history. Add a Separate PDF File If you want to keep the original document but also want a PDF version, you are free to do so. This option converts the document into a multi-file document that contains the original document and the PDF version. Send as PDF by Email Documents can be sent in PDF format by email. This option is good if edits are still needed. The original document stays in M-Files where you can still edit it, and the PDF is directly attached to an email. PDF Conversion via Workflows Some workflows can convert your document into a PDF when the workflow state is changed. This can happen in the beginning, in the middle, or at the end of the workflow, depending on how it’s configured. The same options apply here: we can either convert the document into PDF by replacing it or by converting it into a multi-file document that contains the original document and a PDF version of it. In this example, the document is converted into PDF format once it has been approved by the reviewer.
<urn:uuid:7ed8fa74-1af6-4034-acc4-ce9a2e6d5191>
CC-MAIN-2024-38
https://help.m-files.com/guides/how-to-convert-documents-to-pdf-format/
2024-09-13T00:28:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00089.warc.gz
en
0.872159
962
2.53125
3
The Problems of Using AI in Healthcare: Algorithmic Bias The purpose of the AI law is not to prevent or inhibit the Rise of the Machines. The purpose of the law is to prevent uses of AI in healthcare that might be discriminatory. Such uses are known as algorithm or algorithmic discrimination. Algorithm discrimination is sometimes referred to as algorithmic bias or AI bias. An algorithm, simply, is a set of rules or instructions to be followed in calculations or other problem-solving operations. Artificial intelligence operates through the use of algorithms. How can a machine be biased? It’s not the computer that’s biased. The humans who input data that the algorithm relies on, though, can be, intentionally or unintentionally. This bias can, in turn, lead to results that are discriminatory, or that can favor one group over another. In the healthcare system, algorithmic processes used to predict what patient populations will need more healthcare in the future than others, can lead to inaccurate decisions that can harm one group over another. A landmark AI in healthcare study published in 2019 in the scientific and medical journal, Science, illustrates algorithmic bias in action. The study found that an algorithm, widely used by hospitals to predict future healthcare needs for over 100 million people, was biased against black patients. How so? The algorithm relied on healthcare spending to predict future health needs. The humans who fed the data to AI to crunch, essentially gave these instructions to the algorithm: “When trying to predict who is likely to need extra healthcare in the future, consider previous healthcare spending.” In other words, the algorithm was to assume, “The patients who have spent large amounts of their income in the past, are the ones who are likely to spend more in the future, and require extra care in the future.” A reasonable assumption? Not quite. The assumption did not account for economic realities. The algorithm, the study noted, predicted that because black patients spent less than white patients on healthcare, black patients were not as likely to require extra care in the future as white patients would be. The conclusion that the algorithm had enabled was that black patients had to be much sicker to be recommended for extra healthcare. Equating the amount of previous healthcare spending with the likelihood more care will be required in the future (e.g., “If a patient spent little on healthcare in the past, it’s likely that patient will not need extra care in the future”) is, it turns out, a biased assumption. The humans who came up with the algorithm did not take a basic fact into account: Historically, black patients have had less to spend on their health care compared to white patients, due to longstanding wealth and income disparities. Spending less on healthcare does not mean someone is healthier – it could mean a person might be unhealthy, but has lacked access to affordable treatment. Using AI in Healthcare: What Are the Requirements of the Colorado AI Act? The Colorado AI Act requires AI developers and deployers to use reasonable care to protect consumers from the risks of algorithmic discrimination. The Act defines “algorithmic discrimination” as “Any use of a high-risk artificial intelligence system that results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under Colorado or federal law.” Algorithmic discrimination, as we’ve seen above, can arise from the use of AI systems. The Colorado AI Act regulates “high risk AI systems.” The law defines a “high-risk AI system” as a system that makes or is a substantial factor in making a “consequential decision.” A “consequential decision,” in turn, is a decision that has a material legal or similarly significant effect on the provision or denial to any customer of, or the cost of, certain things. These include: - Healthcare services - Education enrollment or opportunities - Employment or employment opportunities - Financial or lending services - Essential government services - Legal services Had the Colorado AI Act been the law when the “who needs extra care” algorithm was deployed, the fact of the longstanding wealth and income disparities would have been taken into account. AI in Healthcare: Guarding Against Discrimination The Colorado AI Act regulates the use of AI in healthcare and other areas by requiring developers of high-risk AI systems to complete annual impact assessments for these systems. These assessments must include certain information, such as “An analysis of whether deployment of [a high-risk AI system] poses any known or reasonably foreseeable risks of algorithmic discrimination, and if so, details on such discrimination and any mitigations that have been implemented.” The Colorado AI Act also requires AI developers to notify consumers of certain activities. If a deployer uses a high-risk AI system to make an adverse consequential decision concerning a consumer, it must send the affected consumer a notice that includes the following information: A disclosure of the principal reason(s) for the consequential decision, including: - The degree/manner in which the high-risk AI system contributed to the decision - The type of data processed to make the decision - The source or sources of such data - An opportunity to correct any incorrect personal data that factored into the decision - An opportunity to appeal any adverse decision, which must allow for human review The Colorado AI Act, legal experts predict, prompt AI in healthcare regulation in other states. Already, California and several other states have proposed AI-related legislation.
<urn:uuid:0589b58b-0a9e-4724-a447-cdc2f86e36b6>
CC-MAIN-2024-38
https://compliancy-group.com/colorado-artificial-intelligence-act/
2024-09-14T03:52:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00889.warc.gz
en
0.954053
1,187
3.5
4
Recent Internet attacks have caused several popular sites to become unreachable. These include Twitter, Etsy, Spotify, Airbnb, Github, and The New York Times. These incidents have highlighted a new threat to online services: botnets powered by the Internet of Things (IoT). Distributed denial of service (DDoS) attacks have been around for over a decade and, for the most part, have been handled by network providers’ security services. However, the landscape is changing. The primary strategy in these attacks is to control a number of devices which then simultaneously flood a destination with network requests. The target becomes overloaded and legitimate requests cannot be processed. Traditional network filters typically handle this by recognizing and blocking systems exhibiting this malicious behavior. However, when thousands of systems mount an attack, these traditional filters fail to differentiate between legitimate and malicious traffic, causing system availability to crumble. Cybercriminals, Hacktivists, and IoT Cybercriminals and hacktivists have found a new weapon in this war: the IoT. Billions of IoT devices exist, ranging in size from a piece of jewelry to a tractor. These devices all have one thing in common: they connect to the internet. While this connection offers tremendous benefits, such as allowing users to monitor their homes or check the contents of their refrigerators remotely, it also presents a significant risk. For hackers, each IoT device represents a potential recruit for their bot armies. A recent attack against a major DNS provider shed light on this vulnerability. Botnets containing tens or hundreds of thousands of hijacked IoT devices have the potential to bring down significant sections of the internet. Over the coming months, we’ll likely discover just how formidable a threat these devices pose. For now, let’s dig into the key aspects of recent IoT DDoS attacks. 5 Key Points to Understand The proliferation of Internet of Things (IoT) devices has ushered in a new era of digital convenience, but it has also opened the floodgates to a range of cybersecurity concerns. To navigate the complexities of this digital landscape, it’s essential to grasp five key points: 1. Insecure IoT devices pose new risks to everyone Each device that can be hacked is a potential soldier for a botnet army, which could be used to disrupt essential parts of the internet. Such attacks can interfere with your favorite sites for streaming, socializing, shopping, healthcare, education, banking, and more. They have the potential to undermine the very foundations of our digital society. This underscores the need for proactive measures to protect our digital way of life and ensure the continued availability of essential services that have become integral to modern living. →Dig Deeper: How Valuable Is Your Health Care Data? 2. IoT devices are coveted by hackers Hackers will fight to retain control over them. Though the malware used in the Mirai botnets is simple, it will evolve as quickly as necessary to allow attackers to maintain control. IoT devices are significantly valuable to hackers as they can enact devastating DDoS attacks with minimal effort. As we embrace the convenience of IoT, we must also grapple with the responsibility of securing these devices to maintain the integrity and resilience of our increasingly digitized way of life. 3. DDoS Attacks from IoT Devices Are Intense and Difficult to Defend Against Identifying and mitigating attacks from a handful of systems is manageable. However, when tens or hundreds of thousands of devices are involved, it becomes nearly impossible. The resources required to defend against such an attack are immense and expensive. For instance, a recent attack that aimed to incapacitate Brian Krebs’ security-reporting site led to Akamai’s Vice President of Web Security stating that if such attacks were sustained, they could easily cost millions in cybersecurity services to keep the site available. Attackers are unlikely to give up these always-connected devices that are ideal for forming powerful DDoS botnets. There’s been speculation that nation-states are behind some of these attacks, but this is highly unlikely. The authors of Mirai, a prominent botnet, willingly released their code to the public, something a governmental organization would almost certainly not do. However, it’s plausible that after observing the power of IoT botnets, nation-states are developing similar strategies—ones with even more advanced capabilities. In the short term, however, cybercriminals and hacktivists will continue to be the primary drivers of these attacks. → Dig Deeper: Mirai Botnet Creates Army of IoT Orcs 4. Cybercriminals and Hacktivists Are the Main Perpetrators In the coming months, it’s expected that criminals will discover ways to profit from these attacks, such as through extortion. The authors of Mirai voluntarily released their code to the public—an action unlikely from a government-backed team. However, the effectiveness of IoT botnets hasn’t gone unnoticed, and it’s a good bet that nation-states are already working on similar strategies but with significantly more advanced capabilities. Over time, expect cybercriminals and hacktivists to remain the main culprits behind these attacks. In the immediate future, these groups will continue to exploit insecure IoT devices to enact devastating DDoS attacks, constantly evolving their methods to stay ahead of defenses. → Dig Deeper: Hacktivists Turn to Phishing to Fund Their Causes 5. It Will Likely Get Worse Before It Gets Better Unfortunately, the majority of IoT devices lack robust security defenses. The devices currently being targeted are the most vulnerable, many of which have default passwords easily accessible online. Unless the owner changes the default password, hackers can quickly and easily gain control of these devices. With each device they compromise, they gain another soldier for their botnet. To improve this situation, several factors must be addressed. Devices must be designed with security at the forefront; they must be configured correctly and continuously managed to keep their security up-to-date. This will require both technical advancements and behavioral changes to stay in line with the evolving tactics of hackers. McAfee Pro Tip: Software updates not only enhance security but also bring new features, better compatibility, stability improvements, and feature removal. While frequent update reminders can be bothersome, they ultimately enhance the user experience, ensuring you make the most of your technology. Know more about the importance of software updates. Securing IoT devices is now a critical issue for everyone. The sheer number of IoT devices, combined with their vulnerability, provides cybercriminals and hacktivists with a vast pool of resources to fuel potent DDoS campaigns. We are just beginning to observe the attacks and issues surrounding IoT security. Until the implementation of comprehensive controls and responsible behaviors becomes commonplace, we will continue to face these challenges. By understanding these issues, we take the first steps toward a more secure future.
<urn:uuid:077c68a6-b08c-44b7-8686-9b2a0ce86a81>
CC-MAIN-2024-38
https://www.mcafee.com/blogs/other-blogs/mcafee-labs/top-5-things-know-recent-iot-attacks/
2024-09-17T22:34:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.53/warc/CC-MAIN-20240917204739-20240917234739-00589.warc.gz
en
0.942938
1,408
2.953125
3
Rate this article: (9 votes, average: 4.11) Hackers continually try to find new ways to steal data from businesses. That’s why it should come as no surprise that 91% of phishing occurs via email. As many as a million phishing emails containing the dreaded Emotet trojan have been known to be sent in a single day! But how can you combat this threat? Having a secure email certificate (also known as S/MIME certificate or an Outlook email encryption certificate) can help your company do precisely that. Its security features prevent things like phishing and builds a trusted network around your company, so your email recipients know who’s who. A digital certificate used to authenticate the identity of the email’s sender and to encrypt the email is called an Email certificate or S/MIME (Secure /Multipurpose Internet Mail Extensions). When a person uses an Outlook security certificate, they can encrypt the contents of the email to protect it from intruders and can also sign the email to validate their identity. Ever heard of SSL certificates? Secure email certificates work in a similar way — at least on paper. What the S/MIME protocol does is implement a public and private encryption key into the email delivery process, which allows the email to be encrypted before it is sent. This process, known as data at rest encryption, turns your plaintext email into gibberish so it stays secure until your intended recipient decrypts it with their private key. Not only this, but if you use an email encryption certificate in Outlook to encrypt your messages, your identity will also be verified. Who would you trust more — Thomas, who’s labeled as a “trusted seller” and sends an encrypted email, or Harry, who isn’t identified as such and sends plaintext information? (I seriously hope you would choose Thomas). Now that you know what an email certificate is and why it’s useful, let’s discuss some information you’ll need to gather to begin the process of installing your new email signing certificate. Secure the email of all of your employees with Comodo Secure Email Certificates. You’ll need to do the certificate issuance process in Mozilla Firefox, so make sure you’ve got it installed on your computer. Before we can start the process of installing your new email security certificate, we’ll need to gather a few bits of information, such as: After doing so, the following window will pop up: Note: Your list may look different, but the process is still the same. If it displays “more choices” as it does above, that means you have multiple certificates available. To choose a specific certificate, simply click on the more choices link and select the certificate you wish to use. Press OK. That’s it! Your new email certificate is now installed. Although you’ve installed your new email security certificate, you may not have a clue how to use it. If that’s the case, here’s what you need to do to make the emails you send secure: After doing so, any emails you send should be both secure and encrypted! What is S/MIME? How does it work? Do I need S/MIME?
<urn:uuid:a5941d0a-c479-4100-be51-d86dacdc734f>
CC-MAIN-2024-38
https://comodosslstore.com/resources/how-do-i-install-a-secure-email-certificate-in-my-outlook/
2024-09-20T10:27:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00389.warc.gz
en
0.932431
692
2.796875
3
Virtual Private Networks (VPNs) have become a popular tool for boosting online privacy and security. By funneling your internet connection through an encrypted tunnel to a remote VPN server, your real IP address and location can be concealed from the websites you access. However, this begs the question – can VPN usage itself be tracked? We’ll explore the inner workings of VPN services and examine their vulnerabilities. A VPN provides a private, encrypted tunnel from your device to a VPN server maintained by a VPN provider. Rather than connecting directly to sites, your data first gets routed through this intermediate VPN server. This prevents the sites you access, or your Internet Service Provider (ISP), from viewing your actual home IP address and tying internet activity directly to you. It also hides location details, preventing geography-based targeting. Can VPNs be Tracked? While VPN tunnels provide vastly improved privacy and anonymity over no protection at all, there exist risks depending on your VPN provider’s operations. We’ll analyze aspects like encryption standards, connection logging, threat intelligence gathering and legal jurisdictions to determine if VPN traffic leaks identifiable electronic fingerprints that could allow external monitoring. Properly assessing your anonymity threats lets you make informed choices around commercial VPNs for maintaining rigorous online confidentiality. VPNs and Online Privacy Let’s first fully define what VPNs are and their role in protecting internet users’ privacy, before examining potential surveillance vectors. What is a VPN? A Virtual Private Network creates an encrypted data tunnel from your local device to a server operated by the VPN provider somewhere globally. Rather than connecting directly to sites and services via your ISP, traffic first gets routed through this intermediate VPN server. Your computer establishes a secure session with the target VPN server. All internet traffic gets funneled through an encrypted VPN tunnel before leaving the VPN server to reach public internet destinations. This conceals your actual home IP address and physical location from visited sites, replacing it with the IP address of the VPN server you’re connected through. So your online activities and browsing appear tied to the server’s geographic location instead of your own. Why Use a VPN? There are several key reasons internet users leverage VPN services: - Bypass Geographic Restrictions – Content sites frequently restrict access based on location detected via your IP address. A VPN masks location, defeating geo-blocks. - Public Wi-Fi Security – Connecting to open hotspots means risky exposure to criminals sniffing traffic. VPN encryption secures public connections. - ISP Tracking Prevention – Many internet providers exploit user data for profit. VPN encryption blocks them from monitoring your online activities. - Defeat Data Retention Mandates – Certain countries require ISPs log user traffic. VPN tunnels bypass this state surveillance. As VPN adoption widens, more people utilize these services hoping to take control over their digital privacy. Next we’ll see if VPN encryption itself provides sufficient protection against external monitoring. VPN Protocols and Encryption The VPN protocol defines the types of encryption used to secure the connection between your device and remote server. Proper encryption makes it difficult for outsiders to decipher intercepted VPN traffic. VPN Protocol Types VPNs rely on various tunneling protocols to apply encryption: PPTP – Point-to-Point Tunneling Protocol uses 128-bit MPPE encryption. Provides minimal security but maximum speeds. Avoid when possible due to weaknesses allowing compromise. L2TP/IPsec – Layer 2 Tunnel Protocol paired with IP Security employs 168-bit AES encryption. Fast performance balanced with strong security. OpenVPN – Utilizes up to 256-bit AES encryption plus 2048-bit RSA keys. Slower but offers nearly impenetrable OpenVPN encryption ideal for avoiding deep packet inspection. Can leverage either TCP or UDP transport layers. WireGuard – Next-gen protocol that uses state-of-the-art cryptography like Curve25519, Salsa20, Poly1305 and BLAKE2s. Fast and secure. Why VPN Encryption Matters Without trusted encryption protocols shielding traffic, VPN tunnels leak huge amounts of metadata and even allow full contents viewing to sophisticated network analysis efforts by state-level agencies. However, proper VPN cipher implementation prevents interception of data in transit between your device’s VPN client and the remote VPN server. This protects the confidentiality and integrity of your communications. That being said, encryption alone does not prevent external VPN detection or hide the fact you’re using a VPN in the first place. Activity patterns can still surrender some user specifics through metadata examination by global intelligence entities. VPN Logs and Data Retention To understand how VPN usage could still be tracked or identified via audits, you need to understand the concept of logs – data recorded about user connections to VPN services. What are VPN Logs? VPN providers necessarily monitor server resource demands and performance metrics around active user sessions. Server logs may record details like connection timestamps, assigned internal IP addresses, incoming data transfer volume and connection duration. Session logging assists with technical troubleshooting but also provides telltale electronic fingerprints that intelligence agencies can leverage to unravel some anonymity – especially if combining data across VPN providers. VPN Data Retention Policies Reputable VPN providers limit exposure by restricting data retention windows on server logs to only span a couple months before permanent deletion. However, some disreputable VPN companies have been caught maintaining logs spanning years which massively deteriorates anonymity if seized. The 14 Eyes Surveillance Alliance Extra scrutiny applies to VPN providers operating within the 14 Eyes group of nations sharing intelligence (US, Canada, UK, Australia, New Zealand). Mandatory data retention requirements may compel extensive logging nobody can control. Steer clear of VPN brands based in these territories. Detecting VPN Use Next we’ll explore technology-assisted methods for uncovering people accessing the internet via VPN services rather than directly through residential ISP connections. IP Address Tracking Simple IP address lookups can reveal addresses owned by commercial VPN providers. However, this doesn’t necessarily indicate active VPN use. Instead, examining connection patterns across days for repeating addresses linked to VPN pooling servers offers stronger signal. VPNs also try to mimic residential address behaviors to conceal server indicators. Overall, IP detection proves unreliable. Traffic flow analysis utilizing machine learning models measure patterns like packet timing, volume, order and frequency to guess whether connections demonstrate traits distinct from residential ISP customer baseline profiles. Irregularities suggest possible VPN usage. However, adaptive techniques like VPN traffic obfuscation, throttling and spoofing can successfully trick these analysis systems with false positives. Deep Packet Inspection Deep packet inspection (DPI) captures and evaluates actual contents of traffic down to data payloads rather than just metadata or headers. Only state-level agencies realistically possess this scale of computational power currently. Even still, top VPN solutions rely on things like Perfect Forward Secrecy, Pre-Shared Keys and Public Key Pinning to make decryption virtually impossible even when leveraging full data packet access along with SSL and HTTPS encryption. VPN Provider Logging Practices Logging and user data retention policies of the VPN provider you choose plays a substantial role in preventing third-party tracking of your VPN connections. Logging and Anonymity VPN providers that indefinitely store extensive logs including session times, internal assigned IP addresses, incoming data volume and other metadata for each user builds up a large trove of distinctive identifiable traits for intelligence agencies to pinpoint individuals across sites – via correlating data sets. Meanwhile, providers with strict time-limited data retention on bare minimum internal operational analytics makes connecting such dots across a user’s browsing history considerably harder. Choosing a Trustworthy VPN Scrutinizing legal jurisdiction, transparency reports, public leadership accountability and examining the specific details of a VPN’s logging policies provides the clearest measurement for evaluating risks. Search for providers that undergo annual third-party audits by reputable cybersecurity firms to validate actual practices match advertised logging procedures and protections. In closing, while leveraging a VPN furnishes considerable privacy upsides through IP maskings and encryption tunnels – the potential for some traceability still lingers depending on your chosen provider’s protocols, jurisdiction, threat detection evasion capabilities and logging procedures. Rigorously investigating each vendor using the criteria covered herein allows properly gauging just how watertight your traffic and metadata remain across ecosystems from government agencies to hackers before selecting service. Ultimately for strongest anonymity assurances, open source audited VPN solutions operating under nonprofit governance outside intelligence alliances using next-gen encryption and hardware-isolated multi-hop servers offer unmatched confidentiality. But when weighed against the incredible privacy dividends still on offer by reputable premium VPNs over no protections, even commercial providers with limited diagnostics data collection give most individuals all the discretion desired for practical browsing needs plus some non-trivial legal buffer through network obfuscation. Just keep the service’s technical logging policies front of mind when conducting sensitive activities online or regarding questionable jurisdictions during travels to best safeguard where possible based on threat models.
<urn:uuid:aad588dc-d49b-4880-91ae-656a17ef9fad>
CC-MAIN-2024-38
https://nirvanix.com/best-vpns/can-vpn-be-tracked/
2024-09-08T06:29:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00589.warc.gz
en
0.880432
1,845
2.71875
3
H1 Header: AI and Agriculture: Improving Productivity and Reducing Waste The global population is projected to reach 9.7 billion by 2050, requiring food production to increase by 70%. However, traditional farming methods have their limitations, such as resource constraints and unpredictable weather patterns. Fortunately, the integration of artificial intelligence (AI) in agriculture offers a solution to this problem. H2 Header: Precision Farming with AI Technology Precision farming makes use of AI technology to collect and analyze data to make informed farm management decisions. This involves using sensors to measure soil characteristics, crop growth, and weather patterns. The collected data is then analyzed and processed to obtain information that would be difficult for humans to analyze. With precision farming, farmers can closely monitor every aspect of their crops and make real-time decisions based on the information obtained. This can optimize yields, reduce waste, and create a more sustainable agricultural industry. H3 Header: AI-Powered Irrigation Systems Irrigation is an essential part of agriculture. However, traditional irrigation systems can be inefficient and lead to water waste. AI-powered irrigation systems, on the other hand, use real-time weather data and soil moisture sensors to make watering decisions. The system can determine when and how much water to use, reducing water waste and increasing efficiency. This technology can also be used to prioritize irrigation in areas facing drought and water scarcity. H2 Header: AI-Powered Crop Selection Farming is a complicated process, and the choice of crop to plant plays an essential role in the success of the farm. AI technology can be used to analyze soil data and generate recommendations on the best crop to plant in a particular area. This technology can save farmers time and resources and increase crop yield. H3 Header: AI-Powered Pest Management Pests and diseases can cause significant damage to crops, resulting in massive losses for farmers. AI technology can monitor the farm and identify pests and diseases before they cause damage. This can allow farmers to address the problem before it becomes significant. AI-powered pest management technology can minimize the use of pesticides, a situation that lowers costs while reducing the environmental impact of farming. H2 Header: AI-Powered Harvesting Harvesting is a time-consuming and labor-intensive process. AI-powered harvesting technology can automate the process, reducing labor costs and increasing efficiency. This technology can quickly identify and harvest ripe crops, reducing waste and increasing yields. H3 Header: AI-Powered Supply Chain Management Supply chain management is a critical aspect of the agricultural industry. AI technology can be used to track crops and predict market demand. This can ensure that farmers have the right amount of crops at the right time, reducing food waste and increasing profitability. In conclusion, the integration of AI technology in agriculture has the potential to revolutionize the industry, increasing productivity and reducing waste. The technology can be used in every aspect of farming, from precision farming to supply chain management. This technology can help farmers make informed decisions that optimize yields while reducing the industry’s environmental impact.
<urn:uuid:4b76798c-1783-466b-9ce0-798e9cac3472>
CC-MAIN-2024-38
https://cybernews.cloud/ai-and-agriculture-improving-productivity-and-reducing-waste/
2024-09-09T11:16:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651098.19/warc/CC-MAIN-20240909103148-20240909133148-00489.warc.gz
en
0.925415
634
3.546875
4
Cuba Does Away with Its Dual Currency System The start of 2021 has delivered a long-awaited change in Cuba: the end of the country’s dual currency system. In fact, until this year Cuba had three different currencies in circulation: the Cuban peso (CUP), the convertible peso (CUC) and the U.S. dollar (USD). Here’s what changed on 1 January 2021: - Cuba removed from circulation the convertible peso (CUC); only the CUP and USD will remain. - Cubans can exchange their convertible pesos (CUCs) at banks or CADECAs (state-owned foreign exchange offices) through June of this year. After that, the convertible peso will be worthless. The rise and fall of the Cuban convertible peso tells an interesting story about the island nation’s economy. Why did Cuba have a dual currency system? The Cuban peso (CUP) was established as Cuba’s currency by the country’s post-revolution Central Bank. Over time, economic influences complicated the value of the CUP. An embargo by the United States, as well as some of Cuba’s state-run economic policies, made the value of the CUP unstable. Cuba took steps in response to these conditions: - After the collapse of the Soviet Union 1993, Havana legalized the use of the US dollar within its borders. - In 1994, the convertible Cuban pesowas introduced as a second form of currency which the island nation hoped would fare better than the Cuban peso, especially when buying imported goods. Unfortunately, these steps were not enough to stabilize the economy. Why is Cuba doing away with its dual currency system? Cuba’s dual currency system has created economic imbalance and disparity. Over time, there has been an artificial over-valuation of the Cuban peso relative to the U.S. dollar by using the 1-to-1 exchange rate in the public sector, which made imported goods artificially inexpensive compared to domestic products. Furthermore, Cubans in different social sectors are paid with different currency. State (government) employees are typically paid in CUCs, while those who work in the private sector are paid in CUPs. In 2013, Raúl Castro announced a series of economic reforms aimed at correcting the relative price structure in Cuba. - The use of only one [Cuban] currency throughout the economy (starting in 2021). - Establishing a unified exchange rate for all sectors of the economy, set at 24 Cuban pesos for each USD. To finetune the currency unification plan, today’s Cuban government has announced it will: - Publish the exchange rate for the Cuban peso daily, on the Central Bank’s website. This indicates the value could fluctuate rather than remain fixed. - Raise government employees’ salaries to adjust for inflation. Time will tell how effective the new plan will be in stabilizing Cuba’s economy and creating greater equality among its citizens. What will Cuba’s new single currency mean for the island? Business and Economy News | Al Jazeera | 1 January 2021 Q&A: Economist Ricardo Torres on Cuba’s Monetary Unification Americas Society/Council of the Americas |11 January 2021 Cuban Convertible Peso Kristin Stanberry performs dual roles for Keesing Technologies. She’s responsible for document acquisition in North America, serving as liaison with the agencies that issue identity documents throughout the U.S. and Canada. And, as editor of the Keesing Platform, she works with experts around the world to bring their knowledge and insights to the Platform’s global audience.
<urn:uuid:a0a2f172-1f8f-44eb-9b85-ad6fa17eaebf>
CC-MAIN-2024-38
https://platform.keesingtechnologies.com/cuba-does-away-with-its-dual-currency-system/
2024-09-15T14:00:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00889.warc.gz
en
0.942686
777
3.171875
3
The term ‘DevOps’, coined in 2009, refers to a culture that promotes a holistic approach to the software development lifecycle (SDLC) by building stronger relations between the software development and IT operation divisions in an organization. The main objective of DevOps is to completely break down barriers between development and operations, so as to establish an effective working environment that is transparent, rapid, responsible, and smart, for the overall success of a project. How did DevOps originate? DevOps has its roots in Agile software development methodology which focuses on building a collaborative environment between the client and vendor, promoting cross-functional working between the various divisions in an organization, and encouraging systems thinking. The Agile method has proved to be very efficient with successful outcomes. DevOps implements new techniques and strategies to approach the Agile philosophy. It attempts to eliminate the practice of silos, in order to build a more integrated work atmosphere, for the effective and smooth running of an organization. Though Agile and DevOps may seem similar, they are not the same. While the Agile methodology focuses on addressing the issues between the development team and customer needs, DevOps focuses on bridging the gap between the development and operation teams. What are the benefits of practicing DevOps? One of the biggest advantages of adopting DevOps is that it breaks down silos. By getting rid of silos, organizations can build stronger relationships without any hierarchical distributions of power that can result in miscommunication and conflicts. Another major benefit of incorporating the DevOps culture is that organizations can get more work done in a short span of time. The efficiency of employees increases when they work in a collaborative environment and this, in turn, will help them achieve their targets faster. Practicing DevOps brings stability in the workings of an organization. This, in turn, builds an atmosphere of reliability and stability — a scenario which reduces the chances of failure by 50 percent. The DevOps culture encourages team members to think of their roles as part of contributing to the bigger whole – the development of the organization itself. This improves their sense of responsibility to their work and makes them more accountable. How can DevOps be implemented? The best method to adopt DevOps is to implement it slowly, little by little. It takes time for people to break free from silos and adapt to an integrated environment, accept and trust new people and ideas. It is advisable to give people the time to adopt DevOps, and to improve their confidence in practicing it. One way to establish an integrated environment is to encourage systems thinking and avoid making one person or a particular team responsible for a failed action or result. Numerous organizations have benefited by adopting the DevOps culture in recent years. Though the process of adopting and practicing DevOps can seem a little challenging initially, it becomes easier to overcome the challenges once one gets accustomed to its principles and functions. CloudNow is a software development company that strongly believes in adopting an Agile-DevOps approach throughout the software development lifecycle (SDLC). For DevOps services that deliver exceptional results, contact us today.
<urn:uuid:f0c435e4-9608-4431-9de1-d546122827f3>
CC-MAIN-2024-38
https://www.cloudnowtech.com/blog/what-is-devops-and-its-benefits/
2024-09-16T18:18:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00789.warc.gz
en
0.952929
636
3.203125
3
What Is Data Classification? Imagine going to a library where none of the books are organized — not by the Dewey Decimal System and not by genre. It would be difficult for anyone to find what they are looking for. The same applies to data, which is why any business collecting information should have data classification tools. But what exactly is data classification and why is it necessary for handling personally identifiable information (PII)? Data classification is the process of categorizing data into relevant subgroups so that it is easier to find, retrieve, and use. It often involves marking or tagging data with a classification label such as “Confidential” or “Public” and simultaneously removing stale and duplicate data. Why Is Data Classification Necessary? Classification also acts as a visual cue for your employees and users to better understand the level of safety and alertness required when handling a given document. Knowing and using different types of data classification gives your business insight into the data it is creating, the data it is collecting and its level of sensitivity. Data classification can also help you reach your business objectives and enhance operational efficiency. Knowing where millions of files are and what purpose they serve allows your company to analyze data and see trends, which enhances decision-making and streamlines productivity. Organizing data and identifying those trends early on can also reduce maintenance and storage costs. How Data Classification Works Before you can classify data, you need to identify and collect it. Here are the three most common ways vendors organize the initial data before deciding how it should be classified. 1. Content-based classification This approach involves looking at files directly and organizing them based on the kind of content and its level of sensitivity. 2. Context-based classification This approach is efficient for classifying a lot of data from the same source as it examines metadata rather than the specific content. Parameters may include: - The application used to create the file or the file type (.xlsx or .docx) - The user/organization who created the file - The physical location of where data was created 3. User-based classification A manual form of organization that sees a person or team decide how to classify individual files or data. User-based classification is reliant on personal discretion and the employee’s knowledge of what falls under sensitive data. Types of Data Classification Generally, the more data classification labels you implement, the better you can manage your files and data. Most organizations use four classification labels ranging from information available to the public to PII and other sensitive data that could prompt legal action if not properly maintained. This category of data is freely accessible to the public including all company employees. It can be freely used, reused, and redistributed without repercussions. An example might be marketing brochures, press releases, or a publicly- traded company’s stock report. This category of data is only available to internal personnel or employees who are granted access. This might include internal-only emails and correspondence, recordings or other communications, business plans, org charts, internal staff contact list etc. Confidential data (including PII data) Access to confidential data requires special access privileges that must be strictly controlled. Types of confidential data can include sensitive personal information of customers and employees, M&A documents, privileged information protected under NDA, and more. Usually, confidential data is protected by data privacy and security regulation laws like HIPAA, GDPR, CPRA and the PCI DSS. Restricted data is that which, if compromised or accessed without authorization, could lead to criminal charges and massive legal fines or cause irreparable damage to the company. Examples of restricted data might include proprietary information or research and data protected by state and federal regulations. What Is the Data Classification Process? When done manually, data classification can be a tedious and complex process. Manual classification processes are vulnerable to human subjectivity compared to trained algorithms that a classification tool would rely on. However, humans should still be part of the process. While automation does streamline the overall process, you will still need processes and procedures in place that outline the roles and responsibilities of employees in your organization in regard to data classification. Below are some basic steps to take when developing a data classification process. - Understand compliance requirements - Determine what information you are collecting - Establish processes and documentation for managing data - Identify how collecting this data will affect business objectives - Create documentation explaining how data levels will be assigned - Train employees on how to handle sensitive data using documentation - Scan and identify information using a data classification tool - Organize and classify results based on data sensitivity - Assign systems to manage unused data in compliance with regulations - Review processes to ensure ongoing classification and compliance Classify Data With Ground Labs In order to properly classify data, you will need a data discovery tool. Not only will it help you have a complete understanding of where all your data resides and what category it belongs to, but it will assist your company in ensuring compliance with data protection laws. Our solutions, like Enterprise Recon and Card Recon, help businesses discover over 300 types of data across a variety of surfaces, such as desktops, email, and cloud, among other environments. These tools also help to remediate data compliance issues and keep your business functioning more efficiently. If you are ready to take control of your data and streamline your classification process with tools that also support compliance initiatives, contact us today.
<urn:uuid:910d8116-f8c6-4847-ae64-46dc3047bbcc>
CC-MAIN-2024-38
https://www.groundlabs.com/blog/what-is-data-classification-an-overview-best-practices/
2024-09-19T06:58:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00589.warc.gz
en
0.907173
1,121
3.28125
3
Cyber security is always going to be a number one issue for any organisation, and one of the most effective ways of protecting a business is to deploy and maintain robust patch management and vulnerability management policies. However, there exists some confusion around the scope of each term, and it’s not unheard of for patch management and vulnerability management to be used interchangeably, despite being distinctly different processes. Simply put, patch management is the systematic process of applying software updates to address specific flaws. Although there are commonalities, patch management is a far narrower category than vulnerability management, with the former being just one part of the latter. Vulnerability management concerns itself more with the establishment of a framework designed to combat vulnerabilities across an organization, with patch management being one of a number of processes deployed to achieve this. As vulnerability management is considered a much broader field than patch management, the steps needed to create an effective strategy are far more nuanced and incorporate a larger number of stakeholders. We explain the key differences between vulnerability management and patch management below, and break down the importance of each. What is patch management? Patch management is the process of updating all software within a company, using the most current versions released by the manufacturer, in order to fix bugs that have been discovered after release. This includes enterprise-level products like server operating systems and database products, as well as more basic tools like Internet Explorer and Adobe Flash. Get the ITPro. daily newsletter Receive our latest news, industry updates, featured resources and more. Sign up today to receive our FREE report on AI cyber crime & security - newly updated for 2024. Patch management can be done manually on a machine-by-machine basis, but it's much more commonly performed using centralised management tools. This can involve dedicated patch management software, which allows IT teams to set policy-based rules for the automatic application of patches. These can be scheduled around business hours to ensure that patch application results in minimal downtime and loss of productivity. Why is patch management important? Unpatched systems are one of the easiest attack vectors for criminals looking to gain access to corporate networks. Hackers and security researchers are constantly discovering new vulnerabilities, and companies are constantly issuing patches to deal with them. If those patches are not applied, however, cyber criminals have an easy entry point into your networks. Patch management also ensures that all your enterprise equipment keeps working as it should. Technology is a notoriously fickle beast, and even minor software bugs can lead to major headaches and plummeting employee productivity. Timely application of patches ensures that any potential problems can be resolved as soon as possible before the cost of downtime starts to get out of control. Knowing when not to apply an update can be just as important for good patch management, however. New software updates can cause compatibility issues between different systems or can introduce new bugs of their own. Good patch management often involves making a judgement call on whether the security benefits of installing a potentially buggy patch outweigh the inevitable downtime. What is vulnerability management? Vulnerability management gives a business an overview of your security posture as a whole. It gives you a sense of which areas of your infrastructure are most at risk, which allows you to not only prioritise security remediation but also helps inform future IT investment. There are a variety of models available for deploying vulnerability management, with a differing number of steps depending on the one you choose. However, generally they all include four main steps. First of all, there’s the scan, then assessment of risk, followed by the prioritisation of vulnerabilities, before the final step of continuous management. Scanning / discovery The first phase, discovery, involves assessing all assets across the breadth of your IT infrastructure, including servers, laptops, printers, screens, and backup appliances. Essentially all devices that may be connected to a corporate network count, as well as software that’s running. The discovery process must ascertain whether the developer still supports the software with security patches, and how up-to-date the software is. This process may be arduous and lengthy, but putting in the hard work at this stage is crucial. It’s essential to ascertain a complete picture of the systems the business relies on, with unpatched hardware introducing needless gaps into the setup. One of the tools in the CISO armoury is the use of the Common Vulnerabilities and Exposures (CVE) glossary. This is a project maintained by Mitre, funded by the US Department of Home Security, that will catalogue every vulnerability that has so far been identified, ensuring that managers have up-to-date information at their fingertips. The scanning process will involve routing TCP/IP traffic across a corporate wide network this will enable managers to ascertain where possible weaknesses are. It’s an exhaustive process and there could be downsides insofar as that level of network traffic could lead to the slowing down of the system. The second stage is assessing what vulnerabilities are present and what the level of risk is. The most common way of doing this is by using the Common Vulnerability Scoring System (CVSS). This assigns a numerical value to the level of risk for all vulnerabilities that have been assessed. The CVSS score will look at three areas in particular: - Base metrics for qualities intrinsic to a vulnerability - Temporal metrics for characteristics that evolve over the lifetime of vulnerability - Environmental metrics for vulnerabilities that depend on the way that a system has been implemented All of these groups will be given a numerical score: these will range from 0 to 10, with 10 being the most severe. Different organisations may handle these scores in different ways: some companies will just use the base metrics while some larger organisations – or those with more complex environments – will take temporal and environmental scores into account. The CVSS scoring system can be found on the Forum of Incident Response and Security Teams, FIRST website. The reporting phase follows on once you’ve established a full and up-to-date understanding of the IT estate, and what hardware devices and software is connected to the corporate network. This information should be compiled into a report that can be easy to read, accessible and referenceable, detailing the systems that are most vulnerable. This assessment would be based on various criteria such as the severity of unpatched flaws, and how close the systems and applications are to sensitive data. It's possible to do this automatically using software, with many security platforms allowing you to create reports and 'digests' based on the results of autonomous network scans. Reporting feeds into the next step, prioritisation, and some vulnerability management programmes class them as part of the same stage. Arguably the most important stage of the vulnerability management process, prioritisation is where you decide the order in which you're going to address the vulnerabilities within your network. This will be based on a number of factors, but the principal things to consider are: how long it will take to fix, how much it will cost to fix and how much risk it poses. Which factor you give the most priority to will likely depend on the individual circumstances of your business, but it's a good idea to prioritise high impact, low-effort fixes where possible. In many cases, the likelihood of a flaw being exploited, or the potential impact if it is, will be low enough that you can judge leaving it unpatched to be an acceptable risk. Alternatively, the cost of fixing something may be so high as to make it unfeasible with your current resources. The important thing is to be able to identify these acceptable risks and to be aware of them going forward. Once these vulnerabilities have been assessed and prioritized, there’s a need to look at how those vulnerabilities can be tackled. There are a few options. The most obvious one is completely fixing the vulnerability so it can’t be exploited and cause damage to the system. Although this is the ideal way forward, it's not always achievable, and so you may need to rely on more creative methods. For example, greater use of segmentation to make sure that those vulnerable areas are more easily isolated. There could also be greater use of measures such as two-factor authentication and encryption to protect any data. Other measures may be more costly or time-consuming, however, such as creating a patch for your own application or replacing a device that is no longer supported by the manufacturer. You can also take the decision to mitigate an issue by partly addressing the problems or, as mentioned above, by accepting the risks posed by a particular vulnerability. Once you've completed the response cycle, the process starts again with a fresh round of discovery to see what the state of your network is after your actions to secure it. Organisations should lastly ensure that there’s an ongoing process of vulnerability management in place once the initial work has been done. This is not just a question of running a sweep of the system and remedying all the vulnerabilities; it’s about establishing a framework going forward that would mean patches are handled effectively, networks have been organised so that any breaches can be isolated, and that staff are fully trained – and continually monitored – to maintain strict control over a corporate-wide system. Any relaxation of this policy could prove very costly indeed. A programme of end user training can be one of the most effective components of this continuous management. After all, according to a Verizon report, 74 % of breaches are down to poor employee behaviour. These are the people who are opening attachments from unknown sources and downloading unsafe apps. A comprehensive vulnerability management strategy will include an effective training process for employees, so that possible social engineering breaches can be minimised. Why is vulnerability management important? Vulnerability management is crucial because it gives you an overview of your security posture as a whole. It gives you a sense of which areas of your infrastructure are most at risk, which allows you to not only prioritise security remediation but also helps inform future IT investment. More importantly, vulnerability management gives you insights into potential security holes beyond what you can learn from looking at a list of outstanding patches. There may be a piece of software that is known to be vulnerable, for example, but for which a patch is not yet available. In this case, looking at unapplied patches would not have alerted you to the issue. Keumars Afifi-Sabet is a writer and editor that specialises in public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars on LiveScience, where he runs its Technology section.
<urn:uuid:f061e324-28ef-4cf9-b498-bd0d0f35b0ae>
CC-MAIN-2024-38
https://www.itpro.com/security/27713/the-importance-and-benefits-of-effective-patch-management
2024-09-20T13:41:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00489.warc.gz
en
0.952443
2,190
2.625
3
August 28, 2001 Batten down the 802.11b and HomeRF hatches At the first annual Windows Embedded Developers Conference in early 2001, Microsoft set up an 802.11b network for more than 1000 attendees. The test network provided 10Mbps data-rate connections for email and for access to slide show presentations on a local server. The test network also provided Web access through a proxy server. The solution wasn't perfect—in particular, many attendees found their Pocket PCs almost useless for Web browsing (even with a fast Ethernet connection) because of their tiny displays. I ended up spending more time scrolling around a Web page than actually browsing. The technology, however, thrilled many other attendees, who used 802.11b PC Cards with their notebook PCs to achieve live, realtime Web and email access. We're witnessing an explosion in the popularity of wireless LAN technologies. The 802.11b (aka Wi-Fi) specification*and the similar but lower-performance HomeRF specification*define a new protocol that supports wireless voice and data networking in both home and office environments. Both standards primarily take the form of NICs that communicate over radio rather than through cable. The technologies are embedded either in PCI cards that you can plug into desktop PCs or servers, or in PC Cards that you can use with notebook PCs and mobile devices. In some cases, the technologies are embedded in external devices that you can connect through a USB cable or directly to a router. The great benefit of a wireless LAN is the freedom it gives you on your network. You can add, remove, and move devices at will—you simply plug a card into the device and install the software. Depending on the device and the way you've set up your network, you might need to set a static IP address, but if your network supports DHCP, you won't even have to do that. The 802.11b and HomeRF networking technologies are perfectly suited to mobile devices, offering the advantage of a high-speed (i.e., as fast as 10Mbps) connection without pinning you to a particular location. (For more information about Wi-Fi and a buyer's guide for devices, see Tom Iwanski, "802.11b Wireless Devices," July 2001.) Unfortunately, the 802.11 and HomeRF specifications aren't secure—simply because radio signals are inherently insecure. But you can use a VPN to correct that limitation. A recent story in The Wall Street Journal ("Silicon Valley's Open Secrets," April 27, 2001) illustrates the potential security limitations of wireless LANs. The article describes how two young crackers drove around Silicon Valley with a notebook PC and 802.11b card and hacked into such companies as Sun Microsystems, 3Com, and Nortel Networks. The disturbing part of the story is the crackers' apparent lack of sophistication—they merely installed an 802.11b card and started browsing for other PCs on the wireless LAN. I didn't need an overactive imagination to envision the ease with which a cracker might invade my own HomeRF network, which I'd initially configured with no security and with guest access enabled. Anyone with a Windows-based notebook and a HomeRF card could find my network and browse my computers. Judging from The Wall Street Journal story, most 802.11b networks are similarly insecure. Use a VPN VPN technology provides a secure way to use the Internet for private communications. Rather than communicate directly over the Internet, a VPN client establishes a secure connection with a VPN host. The client encrypts data packets, then passes them over the Internet to the host, which decrypts the packets. Although a cracker could intercept the encrypted packets as they pass over the Internet, he or she would need to first decrypt the packets to obtain any useful information. Such decryption is beyond the casual cracker's capability. You can use Win2K's inherent VPN technology to secure a wireless LAN. On the day The Wall Street Journal story broke, I implemented a VPN on my HomeRF network. Here's how you do it. Set up your wireless LAN per the manufacturer's instructions. On each Win2K Professional client machine that belongs to the wireless LAN, go to Start, Settings, Network and Dial-up Connections. Right-click your wireless adapter and select Properties. Clear the Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks check boxes. Make sure Internet Protocol (TCP/IP) remains selected, as Figure 1 shows. Click OK. On the Win2K Server machine, select Start, Settings, Network and Dial-up Connections, Make New Connection to start the Network Connection Wizard. Click Next, then select the Accept incoming connections check box. On the wizard's next screen, make sure that All connection devices remains cleared. On the following screen—the Incoming Virtual Private Connection page—click Allow Virtual Private Connection, then click Next. Choose the users to which you want to permit access to the virtual connection (don't select Guest). On the next screen, ensure that all networking components are selected. On the final page, which lists the name of the resulting connection, click Finish. On each client, select Start, Settings, Network and Dial-up Connections, Make New Connection to start the Network Connection Wizard. After clicking Next, select Connect to a private network through the Internet. On the next screen, click Do not dial the initial connection. On the following screen, enter the server's DNS name or IP address, then click Next. You can create the connection for all users or for only the logged-on user. Finally, you can edit the name of the connection. Click Finish. A Connect Virtual Private Connection dialog box appears on the client. To complete the connection, the user must type a username and password. The client now sees the server as if the two were connected directly on the LAN. This procedure routes all browsing, as well as file and printer sharing, over the VPN. A cracker outside the building who has a wireless LAN card could see that a network exists but couldn't browse it. Of course, the cracker might guess that a VPN exists and try to access it. However, to do so, the cracker must guess a username and password. Obviously, if you plan to implement a VPN, you'll want to disable guest access and require nonblank passwords. Also, if you—like many of us—have created a too-obvious password for your Win2K Administrator account, now is the time to change it. The only problem I've found with the VPN approach is that Internet Connection Sharing (ICS) fails: Attempts to browse the Web or perform other Internet-only tasks go through the VPN instead of directly over the wireless network. To fix this problem, open the Properties page (on the client) for the VPN connection that you've created. On the Networking tab, select Internet Protocol (TCP/IP) and click Properties. Click Advanced and clear the Use default gateway on remote network check box on the General tab. Click OK to close the Advanced TCP/IP Settings dialog box. Click OK again to close the Internet Protocol (TCP/IP) Properties dialog box, then click OK a third time to close the VPN connection's Properties dialog box. If the connection is active, you'll see the warning message Since this connection is currently active, some settings will not take effect until the next time you dial it. To effect your changes, you'll need to double-click the connection's taskbar icon, click Disconnect on the resulting Status dialog box, then reconnect from Network and Dial-up Connections. To learn more about Win2K's VPN technology, see Douglas Toombs, "Configure a Win2K VPN," September 2000. Clearly, wireless LANs are a natural fit for mobile devices. But how do you use a wireless Ethernet card to secure your Pocket PC or other Personal Digital Assistant (PDA)? Although Microsoft has built VPN support into Windows CE 3.0—which both Pocket PCs and the larger Handheld PC 2000 devices use—the company hasn't provided a VPN client for these devices. Microsoft's enterprise white paper "Why Pocket PC?" (http://www.microsoft.com/mobile/enterprise/papers/why.asp) lists third-party VPN support from Certicom and V-ONE. Unfortunately, the Certicom and V-ONE solutions don't let you connect a Windows CE device to a Win2K-hosted VPN. V-ONE's proprietary SmartGate VPN requires a server component as well as a client component. Certicom's movianVPN client works with enterprise VPNs from such big-name vendors as Alcatel, Axent, Check Point Software Technologies, Cisco Systems, Intel, Nortel Networks, and RADGUARD. The movianVPN client also supports Palm OS devices, as well as devices that run Windows CE. (V-ONE is developing a Palm V and Palm III version.) You can use Win2K's built-in VPN support to secure a wireless LAN that runs on Windows-based notebooks and desktop PCs. However, if you want to provide secure access to the wireless LAN for other types of mobile devices, you'll need third-party software for the connection's client and server sides. I'll sign off with a question for Microsoft: Why didn't you provide necessary client software so that Windows CE devices could participate in a Win2K VPN? About the Author You May Also Like
<urn:uuid:c6f217c6-5c33-406d-a7e5-372f3ed753ca>
CC-MAIN-2024-38
https://www.itprotoday.com/endpoint-security/use-a-vpn-to-secure-your-wireless-network
2024-09-08T09:46:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00689.warc.gz
en
0.908646
1,948
2.8125
3
Fibre Channel Over Ethernet (FCoE) is the encapsulation and transmission of Fibre Channel (FC) frames over enhanced Ethernet networks, combining the advantages of Ethernet for information-sharing across a local area network (LAN) and the storage management capabilities of FC as used in a storage area network (SAN). It allows both types of traffic to pass through the same physical cable while preserving the benefits of the FC protocol. Some organizations use FCoE to completely converge their LAN and SAN networks. For low-end storage, businesses may find iSCSI sufficient; when performance, reliability, and low latency are paramount, an FC SAN is the right choice. FCoE sits somewhere between the two, offering a balance between performance and cost and a way to gradually transition from a traditional and far more expensive FC SAN to more affordable Ethernet-based solutions. Here’s what you need to know about the FCoE storage technology. Table of Contents Table of Contents How Does Fibre Channel over Ethernet (FCoE) Work? FCoE works in a similar way to native Fibre Channel, except it encapsulates FC frames in an Ethernet header instead of an FC header. Storage and data traffic are transmitted from a converged network adapter (CNA), which includes an FC host bus adapter (HBA) and a network interface card (NIC) for TCP/IP Ethernet traffic. This ability is made possible by enhancements to the IEEE Data Center Bridging standards, which upgrade Ethernet behavior to support input/output (I/O) convergence of LAN and SAN traffic over the same physical Ethernet network infrastructure. This gets around the fact that FC is a lossless protocol that uses buffer-to-buffer credits to ensure frames are not lost. Ethernet lacks this lossless capability, as it relies on other protocols such as TCP for acknowledgments and retransmissions as a way to avoid loss of packets. FC frames are never segmented to be sent across multiple Ethernet frames. However, it is important to understand that data traffic still requires a MAC address. This is addressed by virtualizing the physical interface into a virtual NIC with a MAC address for the data traffic and a virtual HBA for storage. Otherwise, FCoE works in exactly the same way as traditional Fibre Channel. The Importance of Fibre Channel over Ethernet (FCoE) Ethernet networks are everywhere. They are used in offices, data centers, many homes, and most businesses. Speeds have increased to more than 100 Gbps, with the current roadmap extending to 400 Gbps. It makes sense, therefore, for FC to capitalize on its ubiquitous nature. Ethernet enhancements enabling FCoE allowed many large enterprises to use it, and most equipment suppliers and software providers support it. In some cases, such as servers, it’s possible to run FCoE over a standard Ethernet NIC. However, anyone using FCoE usually demands high performance and that is best achieved with a CNA. FCoE actually operates at a different layer of the stack than Ethernet, which makes it routable over non-contiguous networks. When used in a SAN, it is much more affordable than strictly FC, as it eliminates the need to buy dedicated FC hardware. Further, the skill needed to run FCoE is more commonly available than that of an FC SAN, which lowers staffing costs and increases reliability. Fibre Channel over Ethernet Benefits FCoE offers a number of benefits across a wide range of applications. Here are the most common: - Cost Reduction—Businesses that want to use FC for storage can use FCoE to gain many of the same advantages of FC at less cost, without having to buy so much dedicated FC hardware. - Lack of Technical Skills—Hiring veteran SAN administrators can be difficult and expensive. But FCoE can be administered by professionals experienced with Ethernet and iSCSI with some training. - Performance Improvements—FCoE offers better performance than iSCI networking, though not equal to strictly FC. Businesses running an iSCI SAN will experience a performance bump by switching to FCoE. - SAN Refreshes—Many SANs are reaching end-of-life, and the price tag for upgrading can be steep. Adding Ethernet-based FCoE solutions keeps costs lower, as an Ethernet network is already in place as are NICs, though CNAs may need to be added along with multi-protocol switches. - Flexibility—FCoE often adds flexibility to highly virtualized server environments or multi-tenant environments. Workloads and virtual machines (VMs) from the same physical servers may connect to different storage types, made easier by FCoE. Challenges of FCoE FCoE has some challenges and limitations for users to be aware of. Here are the most common: - Network Contention—Running separate FC SAN and Ethernet networks has the advantage of keeping key storage traffic away from the data network. However, FCoE sends storage traffic over the same network as other traffic, which can result in network contention, delays, and bottlenecks. - Compatibility—FCoE often requires new network adapters and specially designed switches. Some gear can support multiple protocols and run FC and Ethernet traffic simultaneously, but compatibility issues should be considered before purchase. - Implementation—As FC SANs have a reputation for complexity, some may underestimate the effort of migrating from FC to FCoE. Careful planning is needed to successfully switch from FC to FCoE. If a modern FC SAN network is operating, there is no need to switch to FCoE—but if that infrastructure is aging, augmenting it with FCoE should be on the table. Businesses may consider a slow transition from all-FC to some FCoE by introducing hybrid switches and other gear that supports both. FCoE can bring about reduced storage cost and many applications may work well even if network contention is sometimes an issue. But for highest possible storage performance, FC SANs are the best option. Note, though, that FCoE does provide relatively high-speed data transfer in tandem with relatively low latency. It does a good job of bridging the performance gap between FC and Ethernet. FCoE, then, can often deal with demanding workloads. Some even use it successfully for artificial intelligence (AI), real-time data analytics, and high-performance computing (HPC) applications. Those considering FCoE should evaluate overall performance requirements. FCoE provides higher-speed data transfer and lower latency than traditional iSCSI storage networks. Although FCoE is far cheaper than FC, it still comes at a cost premium compared to a plain and simple iSCSI SAN architecture that can run on commodity hardware. Read 5 Types of Enterprise Data Storage to learn more about organizations’ different approaches to retaining and accessing the massive amounts of data they collect and rely on for decision making.
<urn:uuid:c2d9e356-1c44-4952-88b9-518bd66ccdf9>
CC-MAIN-2024-38
https://www.enterprisestorageforum.com/hardware/fibre-channel-over-ethernet-fcoe/
2024-09-09T16:15:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00589.warc.gz
en
0.934772
1,436
2.953125
3
In this article, we’re focusing on HIPAA compliance and how your organization can stay ahead of the compliance curve. You know all that information you provide to your doctors and health insurance companies? Things like your name, address, social security number, medical history, test results, insurance details—that’s your protected health information or PHI. As a patient, you have certain rights regarding how your PHI is used and shared. Ever wonder what your doctor can and can’t disclose to others about your health? What about to your family or friends? Or for research studies you may want to participate in? We’re here to give you the full rundown on PHI disclosure so you understand your rights and can make the best decisions about who has access to your personal health details. What Is PHI Disclosure? When it comes to your health records, privacy is essential. PHI disclosure refers to the sharing of your protected health information with outside parties. What exactly is protected health information (PHI)? Protected health information includes any personal details about your health, medical conditions, treatments, payments, and more. This sensitive data is kept private under HIPAA laws, but can be disclosed in certain situations with your consent. Knowing how your PHI may be used or shared with outside parties gives you more control and helps ensure your health records remain as private as possible. If at any time you have questions about the disclosure of your PHI, don’t hesitate to speak with your healthcare providers. PHI Disclosure Rules and Regulations The Health Insurance Portability and Accountability Act (HIPAA) establishes strict rules around disclosing a patient’s protected health information or PHI. As a healthcare provider, you need to understand and follow these regulations to avoid penalties. PHI refers to any information that could identify a patient – things like their name, address, birth date, and medical records. You can only share PHI for certain reasons, known as “permitted disclosures.” These include: - Treatment – You can disclose PHI to provide medical care, like sending records to a specialist. You need the patient’s consent for non-emergency treatment. - Payment – PHI can be shared to bill and collect payment from health plans or patients. For example, sending claims to insurance companies. - Healthcare operations – Disclosing PHI to improve quality, reduce costs, or manage your practice is allowed. Think reviewing records to monitor treatment effectiveness. - With patient consent – Patients can sign an authorization form allowing you to disclose their PHI for any purpose. They can revoke consent at any time. - As required by law – You must disclose PHI when required for public health activities, health oversight, judicial and administrative proceedings, law enforcement, and more. - To avert a threat – You can disclose PHI to prevent or lessen a serious threat to health or safety in an emergency situation. Notify patients promptly after disclosure. - For specialized government functions – Disclosing PHI for military and veterans activities, national security and intelligence activities, protective services for officials, medical suitability determinations, and more. Following these rules helps keep patients’ sensitive health details private while allowing necessary sharing for treatment and other vital purposes. Be transparent in how you handle PHI and get patient consent whenever possible. Patients trust you with their most confidential information, so take that responsibility seriously. While the rules around sharing patient health information can seem complex, the guiding principles are actually quite straightforward. Only share what’s necessary, get proper consent, and make sure any disclosures are for permitted purposes that improve patient care. If you follow these best practices and check with your organization’s privacy officer whenever you have questions, you’ll be well on your way to handling PHI responsibly and protecting your patients’ trust. Knowledge is power, so keep learning and stay up to date with any changes to the rules.
<urn:uuid:f6a2c947-70a2-4f28-a24b-5d356f1c56a3>
CC-MAIN-2024-38
https://scytale.ai/glossary/phi-disclosure/
2024-09-10T19:49:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651318.34/warc/CC-MAIN-20240910192923-20240910222923-00489.warc.gz
en
0.93165
797
2.96875
3
It Seems We’ve Graduated to Space Warfare AUKUS to Strategically Deploy Its DARC Program The U.K., the U.S., and Australia team up to deploy the Deep Space Advanced Radar Capability (DARC) program. - The program comes as part of the AUKUS security partnership, which focuses on strengthening security partnerships in the Indo-Pacific. - The three nations will strategically place three high-powered radars to track and identify objects in deep space. - The DARC program offers critical space traffic management and contributes to the global surveillance of satellites. Under the AUKUS partnership, three high-powered radars will be strategically placed in the UK, US, and Australia to identify and track objects in deep space. The agreement itself focuses on strengthening security partnerships in the Indo-Pacific region amid rising concerns over China’s expanding influence. The U.K. Ministry of Defense (MoD) revealed that these radars, once fully operational by 2030, will characterize objects up to 36,000 kilometers from Earth. That’s 22,000 miles for our American readers. The DARC program will boast heightened sensitivity, accuracy, capacity, and agile tracking compared to existing systems. This will, in turn, offer critical space traffic management and contribute to the global surveillance of satellites. Allowing 24/7, all-weather monitoring, the program will indeed benefit land, air, and naval forces across the AUKUS nations. According to the U.K.’s announcement, the first DARC radar site is under construction in Exmouth, Western Australia, and is expected to be operational by 2026. The U.K.’s will, however, be in Cawdor Barracks in Pembrokeshire, Wales, granted it passes environmental assessments and gains the approval of local authorities. Grant Shapps, the U.K.’s Defense Secretary, believes that the dangers of space warfare are increasing. As a result, “the UK and [its] allies must ensure [they] have the advanced capabilities [they] need to keep [their] nations’ safe.” It looks like we may have another space race on our hand, what with the European Space Agency (ESA) building the next International Space Station (ISS) in collaboration with Airbus and Voyager. But here’s what’s confusing me a little: The U.K. Defense Secretary emphasized the dangers of space warfare, which is fair considering the world seems to have a hair trigger these days. However, is it really that legitimate of a concern right now? Or is it a concern for the future? Don’t get me wrong: I understand preparedness. I’m saying it’s a bad thing. But the world is on fire. There isn’t a corner that isn’t knee-deep in some war or conflict. Shouldn’t we be solving our conflicts with minimal bloodshed right now rather than trying to anticipate opponents floating thousands of kilometers away from Earth? Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Tech sections to stay informed and up-to-date with our daily articles.
<urn:uuid:c0af6520-f73d-479f-b774-1fdfb89d97f4>
CC-MAIN-2024-38
https://insidetelecom.com/aukus-to-strategically-deploy-its-darc-program/
2024-09-12T02:21:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00389.warc.gz
en
0.918358
679
2.75
3
Generating greener energy Community energy generation, which is energy generated by a community for the use of that community, has huge potential in helping the UK to reach net-zero. However, our research found that there is a long way to go. Currently 81% of people do not generate energy or power in their local community and 84% don’t have any way of generating energy in their homes. 44% said that it would be too expensive for them to do so. 10% of respondents said they do generate energy from solar panels at home, while 11% generate solar energy in their community. Again, household income has an impact. Half of respondents who earn over £150,001 a year have solar panels installed, while 21% who earn between £50, 271 and £150,001 do not. However, this number is especially low for people who live in Liverpool, where only 5% of residents have solar panels installed in their home. Wind turbines and hydro turbines were the least popular options as a source of energy in their own households and in local communities. There’s a big discrepancy across income when it comes to willingness to switch to an electric or hybrid vehicle – suggesting affordability is a big factor in the decision. Two fifths (41%) of those earning up to £12,571 have or will make this change, compared to 89% of those earning over £150,001 The road to net zero requires significant financial investment and a shift in consumer attitudes. As the findings suggest, although there are factors that affect consumer decisions, most are still willing to make lifestyle changes to reach the government’s goal of creating a carbon neutral economy. At Capita, we’re committed to reorientating our business towards net zero as part of our drive to be a purpose-led, responsible business, and are well under way in defining our pathway towards becoming net zero. Our approach to climate change focuses on decarbonising our operations and tackling climate change with our clients and partners. In 2021, we’ve been developing our ambition and financial plan to achieve net zero as soon as possible, aligning our pathway to the forthcoming Science Based Targets initiative (SBTi) global standard for corporate net-zero targets. Alongside this, we are strengthening our assessment of climate risk against our corporate strategy and financial position – building plans and an understanding of the costs to manage and mitigate the risks. Research was undertaken between 8th - 15th June 2021 by Opinium Research. Sample: 3,004 UK adults.
<urn:uuid:7a6eca18-a41a-4a8b-9351-f3a352a01ca7>
CC-MAIN-2024-38
https://www.capita.com/our-thinking/cost-reaching-net-zero-and-perilous-consequences-if-we-dont
2024-09-14T13:32:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00189.warc.gz
en
0.962082
524
2.625
3
Manufacturing technologies set to hold the reins From big data analytics to advanced robotics to computer vision in warehouses, manufacturing technologies bring unprecedented transformation. Many manufacturers are already leveraging sophisticated technologies for manufacturing such as the internet of things(IoT), 3D printing, Artificial Intelligence, etc., to improve operations’ speed, reduce human intervention, and minimize errors. As 2024 rapidly approaches, manufacturers will have to move away from Industry 4.0 and embrace Industry 5.0. The latter is all about connecting humans and machines (smart systems). Interestingly, Industry 5.0 may already be here. The ongoing COVID-19 pandemic only accelerates its arrival. Read more: Digital Transformation in Manufacturing Here are the top 10 technologies that positively impact the manufacturing industry. With advances in robotics technology, robots are more likely to become cheaper, smarter, and more efficient. Robots can be used for numerous manufacturing roles and can help automate repetitive tasks, enhance accuracy, reduce errors, and help manufacturers focus on more productive areas. Benefits of Using Robotics in Manufacturing: - They improve efficiency right from handling raw material to finished product packing - You can program robots to work 24/7, which is excellent for continuous production - Robots and their equipment are highly flexible and can be customized to perform complex jobs - They are highly cost-effective even for small manufacturing units Collaborative assembly, painting, and sealing, inspection, welding, drilling, and fastening are a few examples of the jobs done by robots. Today, robots work in several industries, including rubber and plastic processing, semiconductor manufacturing, and research. While they are mainly used in high-volume production, robots make their presence felt in small to medium-sized organizations. Nanotechnology has grown to a great extent in the last few years. It involves the manipulation of nanoscopic materials and technology. Though its widespread use is relatively new, it will be indispensable to every manufacturing industry soon. Further research and experimental designs suggest that nanotechnology can be highly effective in the manufacturing industry. Applications of Nanotechnology in Manufacturing: - Create stable and effective lubricants that are useful in many industrial applications - Car manufacturing - Tire manufacturers are using polymer nanocomposites in high-end tires to improve their durability and make them wear resistance - Nanomachines, though not used widely in manufacturing now, are, for the most part, future-tech 3. 3D Printing Post its tremendous success in the product design field, 3D printing is set to take the manufacturing world by storm. The 3D printing industry was worth USD 13.7 billion in 2019 and is projected to reach USD 63.46 billion by 2025. Also known as additive manufacturing, 3D Printing is a production technology that is innovative, faster, and agile. Benefits of Using 3D Printing in Manufacturing: - Reduces design to production times significantly - Offers greater flexibility in production - Reduces manufacturing lead times drastically - Simplifies production of individual and small-lot products from machine parts to prototypes - Minimizes waste - Highly cost-effective Major car manufacturers use 3D printing to produce gear sticks and safety gloves. 4. The Internet of Things (IoT) IoT in manufacturing employs a network of sensors to collect essential production data and turn it into valuable insights that throw light into manufacturing operational efficiency using cloud software. This connectivity had brought machines and humans closer together than ever before and led to better communication, faster response times, and greater efficiency. Benefits of Using IoT in Manufacturing - Internet of Things (IoT) reduces operational costs and creates new sources of revenue - Faster and more efficient manufacturing and supply chain operations ensure a shorter time-to-market. For instance, Harley- Davidson leveraged IoT in its manufacturing facility and managed to reduce the time taken to produce a motorbike from 21 hours to six hours. - IoT facilitates mass customization by providing real-time data essential for forecasting, shop floor scheduling, and routing. - When paired with wearable devices, IoT allows monitoring workers’ health and risky activities and making workplaces safer. The ongoing pandemic has expanded the focus on IoT due to its predictive maintenance and remote monitoring capabilities. Social distancing makes it difficult for field service technicians to show up on short notices. IoT-enabled devices allow manufacturers to monitor equipment’s performance from a distance and identify any potential risks even before a malfunction occurs. Additionally, IoT has enabled technicians to understand a problem at hand and come up with solutions even before arriving at the job site so that they can get in and get out faster. 5. Cloud Computing After making its presence felt in other industries, cloud computing is now causing ripples in manufacturing. From how a plant operates, integrating to supply chains, designing and making products to how your customers use the products, cloud computing is transforming virtually every facet of manufacturing. It is helping manufacturers reduce costs, innovate, and increase competitiveness. IoT helps improve connectivity within a single plant, while cloud computing improves connectivity across various plants. It allows organizations across the globe to share data within seconds and reduce both costs and production times. The shared data also helps improve the product quality and reliability between plants. 6. Big Data The manufacturing industry is complicated in terms of the variety and depth of the product. As far as opening new factories in new locations and transferring production to other countries is concerned, companies can leverage big data to tackle it. As the process of capturing and storing data is changing, new standards in sharing, updating, transferring, searching, querying, visualizing, and information privacy are arising. Think of manufacturing software like MES, ERP, CMMS, manufacturing analytics, etc. When integrated with big data, these can help find patterns and solve any problems. Benefits of Using Big Data: - Improve manufacturing - Ensure better quality assurance - Customize product design - Manage supply chain - Identify any potential risk Explore our use case: Adding New Dimensions to Equipment Maintenance with IIoT, AR, and Big Data 7. Augmented Reality In manufacturing, we can use AR to identify unsafe working conditions, measure various changes, and even envision a finished product. Augmented Reality can help a worker view a piece of equipment and see its running temperature, revealing that it is hot and unsafe to touch with bare hands. An employee can know what’s happening around them, like what machinery is breaking down, a co-worker’s location, or even a factory’s restricted sites. Simply put, AR applications can help inexperienced employees to be informed, trained, and protected at all times without wasting significant resources. AR has made it possible for technicians to provide remote assistance by sending customers AR and VR enabled devices and helping them with basic troubleshooting and repairs during the COVID-19 crisis. Also, more and more customers are open to allowing manufacturers to implement AR with the long-term goal of creating permanent solutions. After all, it helps both the customers and field technicians by reducing the risk of exposure. 5G will have a tremendous impact on the manufacturing industry. It will be more transformational for devices that drive automated industrial processes. The amazing low-latency and connectivity of 5G will power sensors on industrial machines. It will help generate a lot of data that will open new avenues of cost savings and efficiency when combined with machine learning. Currently, China and South Korea are leveraging 5G this way. Soon the US and the UK are expected to compete with them. 9. Artificial Intelligence(AI) Manufacturers are already employing automation on the plant floor and in the front office. In the future, AI-powered demand planning and forecasting will continue to develop that will help manufacturers align their supply chain with demand projections to get data that were not possible previously. A study from IFS shows that 40% of manufacturers plan to implement AI for inventory planning and logistics and 36% for production scheduling and customer relationship management. 60% of the respondents are said to focus on productivity improvements with these investments. Moving manufacturing operations to the cloud and building and integrating systems using IoT will equally create opportunities and challenges. In an increasingly insecure digital era, there is a pressing need for heightened security. Manufacturing experts are investing in secure cloud-based ERP like SAP and Odoo to resolve the security challenges. Enterprises-big or small- will soon increase their dependence on cloud-based ERP systems to address security glitches and save costs by paying for usage. White Paper: What difference does RPA bring to your business? How can you embrace this disruptive technology to remain competitive? Download to learn more! Technologies for manufacturing will decrease labor costs, improve efficiency, and reduce waste, making future factories cheaper and more environment-friendly. Additionally, improved quality control will ensure superior products that will benefit both the consumers and the manufacturers. COVID-19 has changed the way the manufacturing industry operates. If your business wants to remain competitive, you will have to embrace manufacturing technologies to shape your company’s future. To know more about the forward-thinking strategies that integrate the latest trends and technologies, please connect with us today. Stay up to date on what's new 03 Jul 2024 Financial Services AI in Business is a present reality! It’s a building revolution that is all-encompassing and is redefining business operations. You have only two options. Either ride on the crest of…… 20 Jun 2024 Healthcare B2B Artificial Intelligence is a multi-talented assistant and has proven its worth in the healthcare industry. Healthcare organizations have found innumerable ways to use AI, from record maintenance to patient assistance.…… 08 May 2024 Financial Services B2B Achieving perfection is no easy process. It is not impossible either. It takes a lot of effort and hard work but with the help of Artificial Intelligence, this process can…… 24 Apr 2024 B2B How are Businesses Using AI? The verdict is crystal clear—leaders today must embrace AI solutions to stay ahead of the curve and survive in the rapidly evolving business landscape. AI……
<urn:uuid:8cf5ef02-194a-4d0e-8704-027884804ccc>
CC-MAIN-2024-38
https://www.fingent.com/blog/top-10-technologies-that-will-transform-manufacturing-in-2021/
2024-09-14T13:31:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00189.warc.gz
en
0.937534
2,084
2.921875
3
The Science Behind Virtual Servers The use of virtualization is growing more important in the world of technology as people are beginning to see how it can work more efficiently to use processing power and lower overall IT costs. The concept of virtualization and using virtual servers may be unfamiliar to those accustomed to traditional infrastructures. Virtualization is basically a way to run multiple operating systems and applications on a single server to take full advantage of its processing power. Virtualization makes infrastructures simpler and more efficient, allowing applications to deploy faster and performance and availability to increase. Virtual servers are appealing because they can create IT that is easier and less expensive to own and manage. Table of Contents How Virtual Servers Work To create this desired efficiency, one physical dedicated server is divided into multiple virtual servers using special server software. The reason that this process is so useful is that typically a physical server is only dedicated to a specific application or task. This traditional system can streamline a computer network from a technical standpoint, but it doesn’t take advantage of the server’s full processing power. Using only one physical server per task can waste a lot of this power, and computer networks can get large and complex as multiple physical servers take up a lot of space. A data center that is crowded with physical servers consumes a lot of power and can be expensive to maintain. When one physical server is converted into multiple virtual servers, power is used more effectively and each server can then run multiple operating systems and applications. Architecture & Components The structure of virtual servers begins with the main hardware or physical server, which is made into a virtual server with a special kind of software. The virtual server is then split up into multiple kinds of virtual hardware and virtual machines that each operate independently. - Each virtual machine can run its own operating system and applications, acting the same as a unique physical device. - Each virtual machine also has its own virtual network and all of them are connected to the greater network as a whole. - With full virtualization, the components involve the hypervisor which is the software that interacts directly with the physical server’s CPU and disk space. Other types of virtualization, para-virtualization, or OS-level virtualization use different components and work through unique approaches. The architecture can depend on the type of virtualization that is being used. What is a Virtual Server? A virtual server can be defined as a web server that shares computer resources with other virtual servers and is not a dedicated server. With a virtual server, the entire computer is not dedicated to running the server software but is split among two or more virtual machines. Dozens of virtual servers can co-reside on the same computer without affecting the performance, but this may depend on the workload. Virtual machines can be sold as a service and are priced much lower than a physical server in spite of being functionally equivalent to a dedicated physical server. Virtual servers are much easier to be created and configured than a dedicated server and performance can be equal or less based on the workload of other instances on the same hardware. What Are the Different Types of Virtual Servers? There are a few different versions of virtualization that are common, and these create unique forms of virtual servers. One type of virtualization focuses on the operating system. This means a desktop’s OS is moved to a virtual environment and is hosted on a server. The operating system includes one version on the physical server and copies of it for each virtual server that are provided to different users. Another type of virtualization is server virtualization, which moves the entire physical server into the virtual environment. Rather than just the operating system, this virtualization method can emulate a physical server and helps to reduce the number of servers that need to be used. Virtual servers can also be used for storage or for combining multiple physical hardware into a single virtualized storage environment. This virtualization is also known as cloud storage and can be public, private or a hybrid of both. The last type is hardware virtualization, which makes the components of a real machine virtual. It works like a real machine and is typically a computer with an operating system. The software remains on the physical machine and is separated from the hardware resources. What Are the Different Uses for Virtual Servers? Virtual servers can prove useful as a tool for lowering costs and creating more efficient use of power, but their function can depend on the preference of the user. Some virtual servers can be utilized mainly for testing and developing server applications. Creating server applications can require rapid and frequent server reconfiguration, which makes virtual servers a helpful tool in the process. A company can make a library of virtual machines in different server configurations without having to dedicate a physical computer to each configuration. This is useful for testing software in certain configurations before deploying them. Virtual servers can also be useful in consolidating the amount of servers that a business uses. They can reduce the number of physical servers used by migrating applications and operating systems into virtual machines that are running under only one server. Companies that have different departmental or branch office applications that are written for different operating systems can consolidate these servers that go underutilized. What Are Some Typical Virtual Server Configurations? Configuring a virtual server usually starts with physical host server, which must be set up to run multiple servers. The physical server will typically be a four or six core CPU which is enough to run a number of virtual servers using the resources that are spread out among the RAM, CPU, disk and network input/output. A small virtualization project usually starts with a single server, which should have at least a 4-core CPU for hardware resources but might work better with a 6 or 12-core CPU. More CPU cores can mean faster and more consistent performance across the virtual machines as the virtual server load is more spread out. As far as RAM, a virtualization host machine needs as much as possible and the fastest available. It can be difficult to oversubscribe RAM because running multiple virtual machines requires a lot, especially with hypervisors that do not share memory features. The same is true for storage disks which are usually SATA drives or SAS drives in a RAID 5 or RAID 6 array. What Is Server Virtualization? In creating a virtual server, the process is also known simply as server virtualization. It means partitioning a physical server into smaller virtual servers to maximize the resources of the dedicated server. Through server virtualization, the resources of the server itself are hidden or masked from users who each have their own separate and independent virtual machine to utilize. The server administrator uses software that divides the physical server into multiple isolated virtual environments while masking resources from the users, such as the number and identity of individual physical servers, processors and operating systems. Virtual servers sometimes are only able to work through the use of hardware emulation if there is no direct access to server hardware. Hardware emulations means one hardware device mimics the function of another hardware device. This is usually used when an administrator needs to run an unsupported operating system within a virtual machine. Since the virtual machine does not have access to the server hardware, an emulation layer directs traffic between physical and virtual hardware. Hardware emulation is important for virtual servers that can only be used on certain guest operating systems. Through the emulation layer the administrator can run and interact with an embedded operating system from a desktop that couldn’t normally support the operating system. This is necessary because an embedded operating system is created to run in dedicated hardware environments or on systems that are not intended for interactive use. Three Kinds Of Server Virtualization There are three basic types of server virtualization that are typically used to divide a single physical server into multiple virtual servers. Each type shares common traits and use the physical server as the host with virtual servers as guests. All three systems use a different approach to allocate physical server resources to virtual server needs. The first kind of server virtualization is full virtualization, which uses a hypervisor as a special kind of software to allocate resources. The hypervisor interacts directly with the physical server and works as a platform for each virtual server’s operating system. The hypervisor also works to keep each virtual machine independent and unaware of other virtual servers running on the physical machine. Each guest server runs its own OS while the hypervisor monitors the resources of the physical server and relays these resources to the appropriate virtual server. Some of the physical server’s processing power must be reserved for the hypervisor’s needs. The second type of virtualization is a different approach known as para-virtualization. With this method, guest or virtual servers are aware of one another, unlike in the full virtualization approach. The hypervisor does not require as much processing power to manage the virtual servers as it would under full virtualization, which can help prevent any slowing down of the performance. The hypervisor does not play as big a role because each OS is already aware of the demands the other operating systems are placing on the server. This makes it possible for the whole system to work together as a unit rather than the hypervisor relaying resources and having to monitor what resources are available for each virtual server. OS Level Virtualization The third type of virtualization that is an option to use is OS level virtualization, which uses a completely different architecture than the other two. OS level virtualization does not even use a hypervisor at all because virtualization capability is part of the host OS which performs that kind of functions that a fully virtualized hypervisor would. There are limitations to this method though as all the guest or virtual servers run on the same OS. The virtual servers remain independent of one another, but users are not able to mix and match OS’s among them. This is environment is known as homogeneous since all the operating systems are the same. The type of virtualization that would work best depends on the network administrator’s needs. Appeal Of Virtual Servers Now that it is more clear what virtual servers are and how they work, one must understand what the appeal of virtualization is. One of the benefits of using virtual servers is that companies can practice redundancy without spending too much money on extra hardware. Having multiple virtual servers that all run the same application is a safer method because if any of the servers should fail, a second server can quickly take its place. For businesses, that means less down time and minimal interruption of their service. Redundant virtual servers are usually created on different physical servers so that if one physical machine were to fail, another one would offer the virtual server running the same application. Another appealing factor of using virtual servers is the ability of businesses to save space and minimize the amount of hardware used through consolidation. Instead of having multiple physical servers which each can run only one application, a single server can run multiple virtual environments and utilize more of the server’s processing power. Having a lot of physical servers can be expensive and time consuming to maintain while also taking up a lot of space. Virtual servers give companies the opportunity to consolidate their equipment and use it much more efficiently. Traditional server hardware, or legacy systems, will eventually become obsolete and businesses will have to switch to a new system. Switching over can be difficult, but a virtual version of the hardware can be created on modern servers. With the virtual version of hardware, applications all run the same and programs perform as if they were on the old legacy system. This makes it easier for a company to make the transition to new processes without worrying about hardware failure especially if the legacy system is outdated or broken. Virtual servers are a simple concept of utilizing more of the processing power available in a physical server and consolidating equipment for more efficiency, lower IT costs and redundancy. Using a single physical server to run multiple virtual machines is a process that businesses are now beginning to consider a more effective way to run IT.
<urn:uuid:f8044417-2729-4138-9009-ec2e5ea3c1d3>
CC-MAIN-2024-38
https://netstandard.com/the-science-behind-virtual-servers/
2024-09-15T19:39:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00089.warc.gz
en
0.930393
2,444
3.671875
4
In an era marked by relentless data breaches and evolving cybersecurity threats, the role of Data Security Platform Management (DSPM) has become paramount. As the digital landscape continues to expand, so do the vulnerabilities associated with sensitive data. This article delves into the promising future of DSPM, shedding light on its significance, the convergence with Data Security Platforms (DSPs), and its role in addressing contemporary and future data security challenges. What is DSPM? DSPM, or Data Security Platform Management, refers to the comprehensive management and oversight of data security platforms. It encompasses the strategies, processes, and technologies used to safeguard sensitive data, prevent unauthorized access, and ensure compliance with data protection regulations. DSPM aims to provide organizations with a holistic view of their data security landscape, enabling them to identify vulnerabilities, mitigate risks, and respond effectively to potential threats. What is a Data Security Platform (DSP)? A Data Security Platform (DSP) is an integrated solution designed to protect and manage sensitive data across an organization’s digital ecosystem. DSPs employ a combination of technologies such as data discovery, classification, encryption, access control, and monitoring to ensure data integrity and privacy. By centralizing data security functions, DSPs enable organizations to streamline their security efforts and establish a robust defense against cyber threats. DSPM vs DSP: Understanding the Distinction While DSPM and DSP share the overarching goal of safeguarding data, they serve distinct yet complementary purposes. DSPM focuses on managing and optimizing the various components of a DSP. It involves activities like policy enforcement, risk assessment, incident response, and ongoing monitoring. In contrast, a DSP is the technological foundation upon which DSPM operates, encompassing the tools and solutions that directly protect and manage data. Why is DSPM Important? The importance of DSPM cannot be overstated in an age where data breaches can have devastating consequences for organizations and individuals alike. As data proliferates and becomes more valuable, cybercriminals are continuously devising sophisticated methods to exploit vulnerabilities. DSPM offers several key benefits that make it an indispensable part of a robust cybersecurity strategy: - Comprehensive Protection: DSPM provides a unified approach to data security, enabling organizations to identify and address vulnerabilities across their data landscape, regardless of where the data resides. - Regulatory Compliance: With stringent data protection regulations like GDPR and CCPA in place, DSPM assists organizations in achieving and maintaining compliance, avoiding costly fines and reputational damage. - Risk Mitigation: By proactively identifying and addressing security risks, DSPM helps mitigate potential breaches and data leaks, safeguarding an organization’s sensitive information. - Operational Efficiency: Centralizing data security functions through DSPM streamlines processes, reducing complexity and improving overall operational efficiency. DSPM in the Gartner Hype Cycle Gartner’s Hype Cycle provides valuable insights into the adoption and maturity of emerging technologies. DSPM is on a trajectory that mirrors the stages of the Hype Cycle, with its current position representing the “Slope of Enlightenment.” As organizations increasingly recognize the value of DSPM, they are moving beyond initial implementation to explore its full potential. This includes optimizing data security strategies, integrating advanced analytics, and harnessing the power of AI to enhance threat detection and response. Data Security Challenges Now and in the Future The ever-evolving cyber threat landscape presents a myriad of challenges for organizations seeking to protect their sensitive data. From insider threats to ransomware attacks, the stakes are higher than ever before. Additionally, the rapid adoption of cloud technologies and the Internet of Things (IoT) introduces new areas of vulnerability. Looking ahead, data security challenges are expected to become more complex as technology continues to advance. With the proliferation of data across diverse platforms and devices, the potential attack surface expands—expanding organization’s need for a robust and adaptive DSPM approach. The Convergence of CSPM and DSPM As organizations strive for comprehensive cybersecurity coverage, the convergence of Cloud Security Posture Management (CSPM) and DSPM emerges as a pivotal trend. CSPM focuses on securing cloud environments, ensuring proper configuration and compliance. By integrating CSPM and DSPM, organizations can create a seamless security ecosystem that safeguards data across on-premises and cloud infrastructures. This convergence offers a holistic approach to data security, addressing vulnerabilities at every layer of the digital landscape. DSPM with BigID In the forefront of the data security posture management evolution stands BigID, a leading innovator in data security and privacy management. BigID’s cutting-edge platform empowers organizations to proactively manage and protect their sensitive data. Leveraging advanced machine learning and AI, BigID enables accurate data discovery, classification, and identification, ensuring compliance with regulatory requirements like GDPR and CPRA. Key capabilities of BigID’s DSPM solution include: - Discover, categorize, and chart sensitive information throughout your environment: Effective Data Security and Privacy Management (DSPM) solutions should possess the capability to automatically unearth, label, and catalog both organized and unstructured data across on-premises and cloud settings, all within a unified user interface. - Pinpoint potential risks related to access and exposure: Gain insight into which individuals have access to specific data, identify instances of data overexposure, and monitor data sharing, encompassing both internal and external access. Integrating access intelligence into the mix enables a reduction in insider threats, a hastening of zero-trust implementation, the achievement of least privilege principles, and an enhancement of data security from the access angle. - Issue alerts for high-risk vulnerabilities: Visibility isn’t enough— DSPM solutions must possess the ability to autonomously trigger alerts based on risk levels, breaches of policies, and potential insider threats. These alerts should expedite the investigative process, enabling security teams to efficiently explore, resolve, and monitor security alerts and risk mitigation efforts. - Simplify reporting and risk evaluation: At the core of DSPM lies a comprehension of risk, necessitating the capacity to generate reports detailing your risk posture, monitor progress (and setbacks), and track advancements. Conducting data risk assessments marks the initial step in gauging your position, supported by both comprehensive and high-level reports on your most valuable data on a consistent basis. Get the gold standard in DSPM—schedule a free 1:1 demo with BigID today.
<urn:uuid:3ca56218-526b-4d79-ad7d-c6c1bbc7747c>
CC-MAIN-2024-38
https://bigid.com/blog/the-future-of-dspm-in-cybersecurity/
2024-09-16T23:58:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00889.warc.gz
en
0.894277
1,359
2.546875
3
Unlike buffer files or file sync, this approach does not work with files, but rather with specific sectors on a drive. Entire sectors are protected, so no changes stick to those physical areas of the drive, bringing a brand new computer experience with each restart. What’s the difference between a block and a sector? A sector is a physical area on a disk. Think of it as a slice of a track on a formatted disk that can hold a fixed amount of data. The first ones used to hold 512 bytes, then 2048 —now the newer drives use a 4K sector (4096). A block is a group of sectors that an OS can address. A block can consist of one or many sectors. This great ability to shuffle data on block level has a lot of benefits. – No performance hit Compared to file level based methods, there is no performance hit. Very little processor power is spent on providing this kind of protection. – No application compatibility issues Working well below file level takes away any issues with how, where and when any files get moved. Block level solutions don’t care about single files. As a matter of fact, they have no idea what kind of payload is being moved or protected at any given time. – Set it and forget it Once block level protection is activated, no further tweaks are necessary to improve its performance. Whatever configuration was frozen at the moment will prevail with every reboot. Machines can run for years without any updates. If a computer doesn’t boot up, it’s due to hardware failure. Simple is good. – Instant imaging is possible! The block level protection is often compared to instant imaging (too bad such technology doesn’t exist yet – we look forward to quantum computing though). A session can accumulate a lot of changes, and all of those changes can be reset with a simple press of a “restart” button or with a remote command. A fresh system boots up each and every time. – CSIs will be happy Think any changes on a computer are gone once it reboots? Not quite. The data is still there, just not where the OS thinks it is. Professional forensics tools that access the drive directly will be able to recover the data needed. – You can patch and pull updates Nothing sticks to a system protected at the block level, even updates and patches. After a reboot it will be like they never happened. To update/patch your systems just turn the protection off, patch your system, then freeze it again. – Better performance later Optimize your system, then apply protection. With no system lint your computers will operate at their peak since junk files and system settings will not survive a reboot. Even your 100% defragmentation settings will stay the same, further improving your performance. – You can choose what to freeze Block level solutions typically apply to a partition, so if you want to have some flexibility in what to freeze and what to leave alone, you can create multiple partitions. A good idea is to leave all your program and system files on one partition (C: on Windows) and have your user profiles setup on another. Freezing C: partition and leaving D: as is. This way you can completely lock your operating system, but leave user data intact. Emails and pictures will be saved, system files and installed apps will be reset. There are a couple of downsides though: – Be careful what you freeze Block level protection freezes everything – even the bad stuff. If your computer is infected and antivirus cleans it, the infection will be brought back upon restart. – Need to restart You may not like it, but there’s no way to switch the protection status without rebooting a computer. If you need to change protection status on the fly, block level protection is not for you. – What root? Block level solutions don’t care about user privileges. Any changes, either by guest or full admin, won’t survive a reboot. Thaw systems before applying any patches or updates.
<urn:uuid:b57d9843-586e-4841-874f-e4d857fe14b7>
CC-MAIN-2024-38
https://www.faronics.com/news/blog/block-level-protection
2024-09-08T14:07:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00789.warc.gz
en
0.910741
842
2.90625
3
The Institute for Advanced Computational Science at Stony Brook University, New York, has installed the new Ookami supercomputer. The system uses the Fujitsu A64FX Arm-based processor found in the world's fastest supercomputer, Fugaku. It means wolf in Japanese "This system has a nearly magical combination of programmability, performance, and efficiency, with the potential to transform computational research in many areas of science, engineering, and industry," Robert Harrison, professor of Applied Mathematics and Statistics and director of the IACS, said. “Ookami is a resource for researchers in academia or industry nationwide. Its use opens the door for researchers to explore new computing technologies that could greatly impact the future of high-performance applications.” The HPE Apollo 80 system also uses the Cray ClusterStor E1000 storage system from HPE and is run as part of a partnership with IACS and the Center for Computational Research at Buffalo University. Ookami is also financially supported by the National Science Foundation's Office of Advanced Cyberinfrastructure. Bright Computing's Cluster Manager software is used to reduce the complexity of building and managing high-performance Linux clusters for the system. "Bright Computing has a long history of working with Cray on state-of-the-art supercomputers, and we are delighted to continue this under the new merge with HPE," said Lee Carter, VP Alliances at Bright. "Our engineers have worked closely with Fujitsu’s A64FX-based development team to ensure Bright Cluster Manager seamlessly manages A64FX-based systems like the HPE Apollo 80 in the same way we manage traditional Intel-based servers. We are proud to play our part in bringing all these technologies together for our valued customer, Stony Brook.” A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), a development of the Armv8-A instruction set architecture made with HPC in mind. SVE enables vector lengths that scale from 128 to 2048 bits in 128-bit increments. Rather than specifying a specific vector length, CPU designers can choose the most appropriate vector length for their application and market - allowing for A64FX to be designed with a focus on exascale computing.
<urn:uuid:c62b232c-6fbc-40f8-8c4b-78129ceb27b2>
CC-MAIN-2024-38
https://www.datacenterdynamics.com/en/news/arm-powered-ookami-supercomputer-installed-stony-brook-university-new-york/
2024-09-11T00:52:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00589.warc.gz
en
0.927934
469
2.578125
3
Scareware has over a decade causing serious problems to Internet users. This is especially true among casual tech users who don’t have enough expertise to identify the artifice that this type of attacks poses. As a consequence, they end up falling as victims, losing money and jeopardizing their privacy down the road. But this type of social engineering tactic is far from disappearing. That’s why users, especially in business organizations, need to become aware of the threat and prepare to fight back every single attack that lands on their devices. At My IT Guy, our team wanted to talk about how scareware attacks work and how individuals and organizations can act in order to mitigate the dangers that this aggression supposes online. What is Scareware? Scareware is a type of cyberattack that uses social engineering and fear tactics to lead the victim (which ideally is in a panic state) to obey the attacker’s instrucions. The goal is that the victim downloads a piece of infected software provided by the attacker as the definitive solution to the fake infection. The piece of software that the victim ends up downloading will be the real responsible, infecting the victim’s device and leaving a door open for the hacker to steal personal data. In other scenarios, scareware is also used by the attacker to directly get paid an amount of money in exchange for the deceitful solution, a technique for quicker benefits. Malicious agents make the most of these scenarios by also stealing credit card information. We all have been in contact with scareware. This is especially true for those users who browse on the Internet without ad blockers and AV solutions that integrate with the web browser. Most of the time, these attacks come in the shape of highly aggressive pop-ups that aim to generate panic in the user. These pop-ups usually show very violent and pushy warnings about the fake threat that supposedely infected the computer, claiming that immediate actions must be taken on the user’s side in order to prevent damage and data loss. The message showed by scareware is both vague and dramatic at the same time. Animations and even sound effects are used in order to influence the inexperienced users online, who often get panicked. Alongside the great drama, the message will then explain how the infection can be cured, offering a download as the solution. How to Prevent Scareware Attacks? There are human and technical factors when it comes to preventing scareware attacks. First, Internet users need to be aware of this threat online. We must acknowledge its existence and understand how the attacks operate in order to identify them in case they appear. That’s the step one with scareware and with every single type of attack based on social engineering. Fortunately for us, scareware attacks are evident most of the time. If we find ourselves in front of an unexpected, pushy, dramatic notification of virus infection, coming from a unknown company which claims to have the miraculous solution (installing a free software they offer), we need to suspect about it. Everything begins with being alert and recognize these red flags when browsing online. And if you care about your business being exposed, share this knowledge with your team and help them to become aware. On the technical side, your web browser needs to be up-to-date and count with a fully-working ad blocker that keeps these pop-ups at bay. If you want to go the extra mile, install an AV or anti-malware software that covers you further. The latest versions are always vigilant of the potential scareware attacks that often come our way when we visit compromised websites.
<urn:uuid:5f15ef41-3680-42b0-8d92-271fc9ce96ba>
CC-MAIN-2024-38
https://www.gomyitguy.com/understanding-and-fighting-back-scareware-attacks/
2024-09-10T23:12:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00589.warc.gz
en
0.946788
744
2.75
3
I think the best way to answer the question is to describe what each one is. I will start with Cisco ONE. ONE stands for “Open Network Environment”. The idea is to allow programmability of the network and the network elements. This allows for consistency across both physical and virtual network environments. The way that Cisco ONE accomplishes this is to bring the Open Networking Foundation works together with the following tool sets: - OnePK (the API for NX-OS, IOS, and IOS-XR) - Software Defined Networks/OpenFlow (Cisco has created its own controller and agent and introduced it in the Cat3K switches) - Overlay Virtual Networks (Cloud focus, hypervisor agnostic, using VxLAN and NX1000v) So Cisco ONE is Automation and Orchestration of the network to meet various application requirements. You can see that OnePK is part of the Cisco ONE ecosystem. Now let’s talk more specifically about OnePK. Consider the following customer problem: How can I have new ways to control/program/configure my network and its elements to meet application specific requirements? What OnePK (One Platform Kit) does is provide a rich and programmable service set APIs (Application Programmable Interfaces) to create a set of tools that can control the network infrastructure. This means that the new network engineer can write a C or Java program that can manipulate: - Data Path (copy, inject, punt packets) - Policy Routing (create new and different routing decisions) They write the program and OnePK provides a seamless interface to all Cisco operating systems NX-OS, IOS-XR, and/or IOS via consistency in verb usage. So the new network engineer does not worry about syntax and CLI. OnePK then becomes an interface like Visual Basic does when used in Excel or another analogy would be macros in Word. The routers and switches become programmable devices and OnePK transcends the need to manage each individual device. Check out the following demo: http://www.youtube.com/watch?v=Bqawx7iiKkw Some folks get confused between OnePK and OpenFlow. OnePK is an API, whereas OpenFlow is a protocol. I hope that demystifies the difference between those two. I further hope that the definitions of Cisco ONE and OnePK clearly define how they are related and how they are different things. Here are some links: I hope you find this article and its content helpful. Comments are welcomed below. If you would like to see more articles like this, please support us by clicking the patron link where you will receive free bonus access to courses and more, or simply buying us a cup of coffee!, and all comments are welcome!
<urn:uuid:794df50c-8f98-4a8c-b9ce-9ab26b884f80>
CC-MAIN-2024-38
https://www.cellstream.com/2013/09/12/what-is-cisco-one-and-cisco-onepk/
2024-09-12T06:24:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00489.warc.gz
en
0.923571
578
2.625
3
With the constant drumbeat of news reports about security breaches, cyber security is hard to ignore. Organizations understand that they need comprehensive solutions that prevent, detect, and respond to security threats. They often implement multiple layers of security controls to protect their IT systems. Yet gaps remain. Many organizations have a blind spot when it comes to the Domain Name System (DNS). Although every action on the Internet relies on recursive DNS, many security organizations fail to install corresponding safeguards. Cybercriminals have been only too happy to exploit this security vulnerability. This white paper explains how attackers take advantage of recursive DNS and provides best practices you can follow to mitigate the risks.
<urn:uuid:bc9a9b49-741f-4745-a6a0-bfd8beb2320e>
CC-MAIN-2024-38
https://www.infosecurity-magazine.com/white-papers/is-dns-your-security-achilles-heel/
2024-09-15T23:23:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00189.warc.gz
en
0.936391
136
2.5625
3
Tierney - stock.adobe.com Serverless computing or serverless architecture has become one of the big trends in enterprise IT over the past several years. Along with other cloud-native development models, it has grown in importance in parallel with the increasing uptake of public cloud services over the past decade or so. It is easy to see the attraction of serverless, since it holds out the promise of simply running code without having to worry about the underlying infrastructure, and paying only for the resources you use. But every technology has its pros and cons, and serverless applications may have pitfalls that developers need to be cognisant of to be able to work around them. What is serverless? Serverless is somewhat of a misleading name since it does not remove the need for servers. It simply means the end user does not have to get involved with provisioning or managing the servers on which the code is running, as all that complexity is handled by the platform itself. This may sound a lot like a platform as a service (PaaS), such as Red Hat’s OpenShift or Google App Engine, but in general, these are more traditional developer environments. The developer has much greater control over the deployment environment in a PaaS, including how the application scales, whereas in serverless scaling tends to be automatic. The term ‘serverless’ can therefore apply to a range of services, notably AWS Lambda on Amazon’s cloud, which can be credited with bringing serverless computing to wider public attention when it launched back in 2014. Lambda allows developers to create code that runs in response to some event or trigger, such as a new object being uploaded to a bucket in Amazon’s Simple Storage Service (S3) to perform some required function with it, for example. For this reason, Lambda is often referred to as a function-as-a-service (FaaS) platform. Under the hood, services such as Lambda typically use containers to host the user code, but the platform handles everything such as the spawning of the containers to their retirement once their function is performed. However, other services can be regarded as serverless if they meet the criteria of scaling automatically as needed and not requiring the user to manage the underlying infrastructure. This includes many application programming interface-driven (API-driven) services provided by the major cloud platforms, and so it is easy to see how entire applications could be constructed by linking together serverless elements developed by the end user with functions and services provided by the cloud platform. This kind of architecture can also be regarded as an example of microservices, whereby applications are implemented as a collection of loosely coupled services that communicate with each other. The benefit of implementing an application this way is that key parts of the overall solution can be scaled up independently as required, rather than an entire monolithic application having to be scaled up. The independent parts can also be patched and updated separately from each other. Weighing up the pros and cons of serverless The advantages of a serverless architecture are therefore fairly easy to see: it removes the need for the developer to worry about provisioning resources, thereby improving productivity; the user pays only for the resources used when their code is actually running; and scaling should be handled automatically by the platform. According to an IDC report, serverless platforms offer “a simplified programming model that completely abstracts infrastructure and application life-cycle tasks to concentrate developer efforts on directly improving business outcomes”. IDC projected that using serverless could lead to higher productivity and lower costs result in an average return on investment of 409% over five years. As with any architectural choice, there are downsides, and any developer or organisation considering using serverless should be aware of these before taking the plunge. Perhaps the most obvious downside, if it can be considered one, is loss of control. With traditional software approaches, the user typically has control over some or all the application environment, from the hardware used to the software stack supporting their application or service. This loss of control could be as simple as serverless functions typically having few parameters that allow them to be tweaked to meet exact requirements, to having little or no control over the application’s performance. The latter is a problem reported by some serverless developers, who say that processing times can differ wildly between runs, possibly because the code may get deployed to a different server with different specifications the next time it is executed. A known performance issue with some serverless platforms is what is known as a cold start. This is the time it takes to bring up a new container with a particular function in it if no instance is currently live. Containers are typically kept “alive” for a period of time, in case the function is needed again, but are eventually retired if not invoked. There are ways around this, of course, with some developers coding a scheduled event function that regularly invokes any time-critical functions to ensure they stay live. Another issue that might be seen as loss of control is that age-old concern of supplier lock-in. Many serverless application environments are unique to the cloud platform that supports them, and organisations may find that changing hosts to a different cloud would require substantial rewriting of the application code. “We still have the ‘over a barrel’ problem of one supplier’s serverless not being the same as that from another supplier,” says independent analyst Clive Longbottom. However, Longbottom adds that containers and Kubernetes are becoming more prevalent and it is to be hoped that there will be the capability for portability across serverless environments at a highly functional level, or add-on capabilities to orchestration systems that can adapt a workload so that it can run on a different serverless environment with little rejigging, or possibly in real-time as and when required. Keeping tabs on serverless deployments Cost is also a potential downside to serverless applications. This might seem paradoxical, given that we have already mentioned that serverless applications only incur charges for the time that the code is actually running. However, this behaviour can make it difficult for an end user to accurately forecast how much it is going to cost to operate serverless applications and services at a level that will deliver an acceptable quality of service. If there were an unexpected spike in demand, for example, then the auto-scaling nature of the serverless platform could lead to more resources being used than were anticipated. “Where the overall use of serverless is based purely on resource usage and it is not clear as to what the workload will use, then the resulting cost may be surprising when the bill comes through,” says Longbottom. However, if the workload is well understood and serverless is just being used for simplicity, then there should be no such surprises, he says: “If the serverless provider allows for tiering in resource costs, then that can also be a way of control – particularly if they allow the user to apply ceilings to costs as well.” More downsides to serverless lie in testing and monitoring of such applications. With traditional application models, developers often have locally installed versions of software components that the code links to, such as a database, allowing them to test out and verify the code works before it is uploaded to the production environment. Because serverless applications may consist of many separate components, it may not be possible to realistically replicate a production environment. Similarly, monitoring production workloads can be tricky because of the distributed nature of serverless applications and services. Whereas in a traditional application, monitoring would focus on the execution of code, the communication between all the different functions and components in a serverless application can make it much more complex to observe what is happening without specialist tools designed with this task in mind. “Where composite apps are concerned, monitoring and reporting has to be across the whole workflow, which is a pain at the moment,” says Longbottom. “AI [artificial intelligence] may help here, dealing with expected outcomes and exceptions and identifying any gaps that happen along the way – again providing the feedback loops that would be required to maintain an optimised serverless environment,” he adds. All of the issues highlighted here may sound like drawbacks, but every application deployment model has its drawbacks, and those mentioned above must be set against the great convenience that serverless application deployment offers to developers, especially for those that wish to delegate the tiresome resource management tasks to the cloud provider and focus on the business logic of their application rather than dealing with infrastructure issues. Many of the downsides are also likely to be ironed out as serverless platforms evolve and as improved monitoring tools become available that are designed to cope with the challenges of serverless deployments. Organisations just need to be aware that serverless is a different kettle of fish from traditional development, and to carefully analyse the risks of adopting a serverless model for a specific application or service before committing to it. Read more about cloud-native deployments - Don’t let your IT teams get consumed by infrastructure management tasks. Review these serverless compute offerings for more efficient application development. - To address the needs of its serverless users, AWS released its Serverless Application Repository. Explore the ways users can benefit and a few best practices everyone should follow when they use this feature.
<urn:uuid:69ad4e4d-8392-40cf-be45-a15ff1d0b467>
CC-MAIN-2024-38
https://www.computerweekly.com/feature/Serverless-Weighing-up-the-pros-and-cons-for-enterprises
2024-09-17T05:35:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651739.72/warc/CC-MAIN-20240917040428-20240917070428-00089.warc.gz
en
0.958141
1,933
2.65625
3
The world of IP addresses is amazingly complex. This complexity stems from the fact that there are myriad types of IP addresses — private, public, fixed, mobile, static, dynamic — each of which are assigned a unique range. In this blog post we’ll discuss two types of IP addresses: fixed and mobile IP addresses. We’ll cover what they are, how they differ, insights between the two of them, as well as how to tell one from the other. Let’s get into it. A Quick Word on How IP Addresses are Assigned IP addresses are just a string of numbers, which by themselves don’t tell you much. How they’re assigned — as well as to whom and for what purpose — is the source of Digital Element’s insights. The process starts with the Internet Assigned Numbers Authority (IANA), an international governance body which is responsible for coordinating both the IP addressing systems across the globe, as well as the Autonomous System Numbers (ASNs) that are used for routing Internet traffic. An ASN is a unique identifier that is assigned to each network or a group of networks that are under common administrative control (e.g. an ISP located here in the U.S.). ASNs serve a crucial role in the operation of the Border Gateway Protocol (BGP). BGP is a routing protocol, and its purpose is to direct data (actually, data packets) between different autonomous systems in the most efficient manner possible. Naturally, those autonomous systems need unique IDs, ergo the ASN. The ASN itself includes a lot of data, including the organization to which it’s assigned and routing policies or the paths that data should take to reach it. Back to assigning IP addresses … the IANA allocates pools of unallocated addresses to regional registries known as Regional Internet Registries (RIRs), according to their needs as described by Global Addressing Policies. The RIR then assigns the IP address blocks to a local Internet registry (LIR) or National Internet Registry (NIR), which then assigns them to an Internet Service Provider (ISP). Sometimes the RIR will assign a block of IPs directly to an ISP. With the knowledge of which IP address blocks are assigned to which entities, powerful insights can be gleaned. What is a Fixed IP Address? Fixed IP addresses are IP addresses that are routed via cable, DSL, or fiber infrastructure for internet connectivity and are assigned to non-mobile devices. Think: the home router or corporate network. Fixed IP addresses can be static or dynamic, it’s generally up to the ISP to make that decision. - Static IP addresses are those that have a consistent geolocation, meaning at the time Digital Element observes it, its geolocation is the same as previously identified. We track the degree to which static IP addresses are stable in weeks and months. Static IP addresses are likely tied to the same buildings within an ISP block. - Dynamic IP addresses are addresses whose geolocations change frequently. They’re dynamic because they can service different end users at any given moment. Dynamic IP addresses are common in ISP, mobile carrier and proxy blocks because end users fluctuate within a given area. What is a Mobile IP Address? These IP addresses are typically assigned to mobile devices such as smartphones and tablets for internet connectivity that’s routed via cellular networks. Mobile IP addresses are always dynamic, meaning they change frequently. When a mobile device connects to a cellular network, it is assigned an IP address from a pool of available addresses. This dynamic assignment allows cellular providers to efficiently manage their IP address resources. Distinguishing Between Fixed and Mobile IP Addresses Simply knowing the ISP that’s tied to an IP address itself can provide insight into the geolocation of the IP address, as well as whether it’s fixed or mobile. That means, of course, that we’ll need to understand a bit more about the ISP market. There are four types of ISPs: - Fixed ISP, such as Comcast and Charter. These ISPs provide internet connectivity to both homes and commercial entities. Some businesses, educational institutions and governments can act as their own fixed ISP. Some ISPs also provide WiFi hotspot connectivity. - Mobile and fixed ISPs, such as AT&T. These ISPs provide connectivity to homes and businesses, as well as users on the go. - Mobile-Only ISPs, such as Cricket Wireless. These ISPs provide connectivity for mobile devices only. - Mobile connectivity for homes and businesses, such as T-Mobile and other 5G providers. Again, knowing the blocks of IP addresses assigned to each type of ISP helps Digital Element to glean insights about the devices behind those addresses. For instance, we can look at an IP address and know that it is a fixed IP address that is highly stable and associated with a particular building in an office park. Why Distinguish Between Fixed and Mobile IP Addresses? The ability to distinguish between the two types of IP addresses is very useful for businesses. Take, for instance, digital ad-tech companies that execute or measure mobile app install campaigns on behalf of agencies and app developers. App install campaigns are rife with fraud. Nefarious players will attempt to pilfer the marketer’s budget by claiming installs that didn’t occur. The presence of a valid mobile IP in the data can help legitimate companies ascertain the validity of the install. Note: mobile IP alone will not be enough to validate app installs, but it provides critical context. In other cases, a company, such as a brokerage, may only allow for on-premise access to sensitive information. Any request from a device with a mobile IP can be blocked automatically. That’s not to say that all mobile devices will be blocked; a user who is within the building can still access that data via a mobile phone. In this scenario, the user will sign in via the WiFi, and will have a fixed IP address, indicating that he or she is within the building. Distinguishing between fixed and mobile IPs can also help drive efficiencies in knowing when to request additional authentication. Let’s say a consumer signs on to his bank from home every day in order to check his balance. The bank is likely to have a history of sign-ins from that fixed IP. Now let’s say that the consumer signs on from a mobile IP that is in a location far from his house. In this case, the bank may opt to require a second form of authentication. IP address intelligence data alone won’t secure networks, but it can provide critical context to help businesses set smart rules to protect their — and their customers’ — data. To learn more insights, download “The Definitive Guide to Understanding IP Addresses and VPNs and Implications for Businesses” or contact us to learn how IP geolocation can be leveraged in your industry.
<urn:uuid:6ef515da-dcf0-4e3f-b929-f04b77319fb8>
CC-MAIN-2024-38
https://www.digitalelement.com/resources/blog/distinguishing-between-fixed-and-mobile-ip-addresses/
2024-09-08T12:20:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00830.warc.gz
en
0.956841
1,442
3.1875
3
CompTIA IT Fundamentals (ITF+) CompTIA IT Fundamentals is aimed at those considering a career in IT and computer-related fields. There are no special prerequisites for you to meet to successfully start this course. What you’ll learn in this course The IT Fundamentals FC0-U61 Certification Study Guide was designed to help you acquire the knowledge and skills to set up and use a computer at home securely and keep it in good working order, as well as provide informal support for PCs and simple computer networks to your colleagues in a small business. It will also assist you in preparation for the CompTIA A+ certification exam.. After you successfully complete this course, expect to be able to: Set up a computer workstation running Windows and use basic software applications. Understand the functions and types of devices used within a computer system. Apply basic computer maintenance and support principles. Understand some principles of software and database development. Configure computers and mobile devices to connect to home networks and to the internet. Identify security issues affecting the use of computers and networks. Common Computing Devices o Information Technology o Using an OS o Functions of an Operating System o Managing an OS 65 Management Interfaces o Troubleshooting and Support 85 Support and Troubleshooting o Summary Using Computers Module Using Apps and Databases o Using Data Types and Units 109 Notational Systems o Using Apps o Installing Applications o Programming and App Development o Using Databases o Summary Using Apps and Databases Using Computer Hardware o System Components o Using Device Interfaces o Using Peripheral Devices o Unit 4 Using Storage Devices o Summary Using Computer Hardware Module 4 / Using Networks o Networking Concepts o Connecting to a Network o Secure Web Browsing o Safe Browsing Practices o Using Shared Storage o Using Mobile Devices o Summary Using Networks Module 5 / Security Concepts o Security Concerns o Using Best Practices o Using Access Controls o Behavioural Security Concepts o Summary Managing Security If you would like to know more about this course please contact us
<urn:uuid:45f17f2e-bf47-4554-8f92-9585f24b3a97>
CC-MAIN-2024-38
https://www.flane.ae/comptia/comptia-it-fundamentals-(itf%2B)
2024-09-08T13:39:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00830.warc.gz
en
0.840212
460
3.125
3
What is network optimization, and why is it needed? Network optimization is an important practice that will help improve a variety of business aspects—from processes to simple work communications. What is network optimization? In the fast-paced digital era, where connectivity is at the heart of business operations, the term “Network Optimization” encapsulates a strategic and multifaceted approach to refining and enhancing the performance of a network infrastructure. It goes beyond mere technicalities, representing a comprehensive methodology of IT infrastructure management that ensures networks operate at peak efficiency, delivering seamless connectivity, efficient data transfer, and the support necessary for modern applications and communication. Navigating the intricacies of network optimization involves a meticulous examination of various components, protocols, and technologies. It’s about ensuring that the pathways for data flow within an organization are not just functional but finely tuned for optimal speed, reliability, and security. In essence, network optimization is the art and science of perfecting the digital circulatory system that fuels the operations of businesses in the 21st century. Definition of network optimization The definition of network optimization extends beyond just ensuring that data travels from point A to point B in a data center. It encompasses a dynamic process involving continuously enhancing and adjusting various elements within a network to achieve the best possible performance. Network optimization involves intelligent bandwidth management, reducing latency to imperceptible levels, and prioritizing critical traffic; network optimization is important to ensure that essential applications operate without a hitch. It includes advanced techniques such as load balancing, traffic shaping, and protocol optimization. This process is not a one-time event but a continuous journey, adapting to the evolving needs of the business, the technological landscape, and the increasing demands of users and applications. Defining network optimization requires understanding it as a strategic investment rather than a reactive fix. It’s about anticipating the needs of tomorrow and architecting a network that not only meets those needs but exceeds them with resilience and efficiency. Network optimization components Component #1 - Streamlined connectivity Efficient connections form the backbone of a well-optimized network, contributing to a more robust business infrastructure and fostering smooth communication and collaboration within and beyond organizational boundaries. Component #2 - Bandwidth management Effective network performance optimization and bandwidth management are crucial in data-intensive operations. Managing bandwidth is critical to ensuring optimal network performance, enabling the smooth flow of data across your company, and accommodating the increasing demand for data-intensive applications. Component #3 - Latency reduction Reducing latency is key to network optimization, especially in real-time communication scenarios, supported by advanced techniques that minimize delays between data packets and enhance the responsiveness of your network, supporting applications that rely on real-time data processing. Types of network optimization Type #1 - Application optimization Optimizing applications within a network significantly influences overall network performance. The implementation of advanced strategies further optimizes network performance, cloud architecture, and application delivery, ensuring smooth user experiences and unlocking the full potential of the software ecosystem. Type #2 - Traffic prioritization Maintaining seamless operations requires prioritizing network traffic effectively. Advanced methodologies prioritize traffic, enhancing the overall functionality of the network. These methods ensure that critical tasks receive attention in dynamic and complex landscapes of modern network environments. Type #3 - Security optimization Security plays a critical role in network optimization. Cutting-edge techniques optimize network security and data protection without compromising network performance metrics. These approaches effectively safeguard digital assets and sensitive information against the ever-evolving landscape of cyber threats. Why network optimization is important for your business Understanding the significance of network optimization is crucial in today’s interconnected business landscape. Businesses with a finely tuned network have a distinct advantage in a landscape where digital agility and responsiveness are paramount. It’s not just about keeping up; it’s about staying ahead, and network optimization positions a business as a frontrunner in a competitive market. Benefits of optimizing your network for business Benefit #1 - Enhanced productivity A well-optimized network contributes to increased productivity. Network optimization positively influences day-to-day business operations, fostering a more efficient and collaborative work environment that empowers your workforce to deliver their best. Benefit #2 - Cost savings Optimizing your network can lead to significant cost savings. Efficient networks translate to substantial financial benefits for your business, contributing to the overall bottom line and allowing resources to be allocated strategically. Benefit #3 - Improved reliability Reliable networks are the backbone of a successful business. Advanced strategies for network optimization enhance overall reliability, reduce downtime and network congestion, and ensure continuous operations even in the face of unforeseen challenges. Benefit #4 - Scalability Scalability is a key consideration for growing businesses. Network optimization facilitates seamless scalability, including employing connectivity solutions such as Network Transport that allow your network infrastructure to adapt to changing demands and ensure your business can grow without constraints. Cloud-based network optimization—using private, public, and hybrid cloud solutions—offers unparalleled flexibility and scalability, allowing businesses to adjust their network resources based on demand and leverage the expertise of cloud service providers specializing in network optimization. This outsourcing of specialized functions can minimize cloud migration risks and result in more efficient and effective network optimization tools and strategies. Benefit #5 - Competitive edge A well-optimized network provides a competitive edge. Staying ahead in network optimization can position your business for success in a competitive market, where efficient communication and data flow can make a significant difference. Network optimization techniques Technique #1 - Quality of Service (QoS) Quality of Service is paramount for network optimization. Advanced QoS techniques ensure priority for critical applications, maintaining a high standard of service across your network to meet the diverse needs of your organization. Technique #2 - Load balancing Balancing network loads is essential for efficiency. Sophisticated load-balancing techniques distribute workloads for optimal performance, preventing bottlenecks and ensuring equitable resource usage in complex and dynamic network environments. Technique #3 - Network monitoring Constant monitoring to improve network performance is key to optimization. Advanced network monitoring tools and methodologies that proactively identify issues provide valuable real-time insights for maintenance and optimization efforts. Technique #4 - Traffic shaping Shaping network traffic is a proactive strategy. Advanced traffic shaping techniques contribute to a well-optimized network, managing the flow of data intelligently to maximize efficiency and responsiveness. Technique #5 - Protocol optimization Optimizing protocols enhances communication. Cutting-edge protocol and optimization tools and techniques contribute to a more efficient network, ensuring seamless data exchange and reducing overhead in data transmission. Important network optimization metrics Metric #1 - Throughput Throughput is a critical metric for assessing network performance. Measuring throughput performance metrics contributes to effective network optimization, ensuring efficient data transfer even in high-demand scenarios. Metric #2 - Latency Reducing network latency is a constant goal. Monitoring and minimizing latency contribute to an optimized network, enhancing real-time communication and improving user experiences. Metric #3 - Packet loss Minimizing packet loss is essential. Monitoring and addressing packet loss issues can optimize your network, preventing network data loss and interruptions even in challenging network conditions. Metric #4 - Jitter Jitter can disrupt communication between network devices. Managing and minimizing jitter contributes to a stable and optimized network and ensures consistent and reliable data delivery in diverse network environments. Metric #5 - Network availability Network availability is a key metric for reliability. High network availability contributes to successful optimization, minimizing downtime, improving network performance, and ensuring continuous operations despite network challenges. Optimizing your network with Flexential In the dynamic landscape of the digital age, network optimization stands out as a pivotal strategy for businesses aiming to enhance their operations and communication. Understanding the core principles of network optimization is key to unlocking its potential benefits and ensuring a seamlessly operating and efficient network infrastructure that caters to the demands of modern business environments. As a leader in data center solutions, Flexential offers comprehensive network optimization services, including colocation and interconnection. Learn how our seasoned experts leverage state-of-the-art technologies and innovative managed infrastructure strategies to tailor solutions that meet your business’s unique needs, ensuring a network that aligns with your strategic goals and positions your business for future success in an ever-evolving digital landscape. Schedule a consultation with Flexential today!
<urn:uuid:4f2b41aa-b44a-4a89-a114-68787b97618b>
CC-MAIN-2024-38
https://www.flexential.com/resources/blog/what-is-network-optimization
2024-09-09T18:37:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651133.92/warc/CC-MAIN-20240909170505-20240909200505-00730.warc.gz
en
0.892578
1,737
2.859375
3
Virtual Reality – can it harm our eyesight? While technology has enhanced our lives in many ways and offered solutions, too much of anything, has its own set of repercussions. Prior research has indicated that being overly connected can cause psychological issues, from distraction, narcissism, expectation of instant gratification, and depression. Besides affecting user’s mental health, intense use of technology (and physical inactivity) can increase the number of health risks including, vision impairment, hearing loss, deep vein thrombosis, headaches, and neck strain. VR is a simulated experience that can be similar to or completely different from the real world. Virtual reality technology has been utilized in many fields including entertainment, education and medicine. How is VR achieved? Our senses are stimulated together in order to create the illusion of reality, using headsets, omni-directional treadmills and special gloves. Most VR headsets contain two small LCD monitors, each projected at one eye, creating a stereoscopic effect that gives users the illusion of depth. These monitors are placed very close to the eyes which has raised concern from professionals about the negative effects after being used for prolonged periods of time. When using the feature, the person’s brain is forced to process visual stimuli in a different way than normal. Here, the brain faces its normal functioning to understand what the new feature is telling it, causing people to get headaches and eyestrain. Thus, eyestrain is a sign that the eyes are tired from intense activity, which is why many experts recommend taking frequent breaks and not using VR headsets for too long. In addition, more than 70 percent of people complain of nausea and dizziness when using virtual reality headsets, according to ABC news. ‘Cybersickness,’ a form of motion sickness associated with VR headsets, is also known to occur when there is a mismatch of visual information and body position. A software developer, Danny Bittman has tweeted about how putting on a VR headset for several hours a day has harmed his eyesight. He took to Twitter to explain. “Just had my 1st eye doctor visit in 3 years. Now I’m very worried about my future VR use. I have a new eye convergence problem that acts like dyslexia. The doc, a headset owner, is convinced my VR use caused this.” Bittman goes on to mention the doctor’s comment about his eyesight. “These eyeglasses we usually prescribe to 40-year-olds” Despite the rising number of concerns, the Association of Optometrists have found no evidence that proves VR headsets could bring about long-term eye damage. Ways to prevent VR eyestrain? - While using the headset, make sure to consciously blink. - Adjust the display settings so the projected images are not too sharp or too bright. - Take frequent breaks and limit usage time. - Take off the headset regularly, at frequent intervals. - Use artificial/natural tears and massage the eyes and temples after removing headsets. - A person should stand up, walk, stretch and take deep breaths when removing headset. On a good note, VR headsets have been adapted to help improve eyesight. Start-up company GiveVision created a wearable device called SightPlus that aims to restore vision to people whose eyesight has deteriorated beyond repair, by projecting a video of the real world into the working part of the retina.
<urn:uuid:580d2031-f116-45cd-bdac-9816b7c6ea7e>
CC-MAIN-2024-38
https://insidetelecom.com/virtual-reality-can-it-harm-our-eyesight/
2024-09-11T00:37:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00630.warc.gz
en
0.953371
712
3.125
3
The term “metadata” has become more common as our technology allows for more advanced data collection and storage. But what is metadata, and what does it show about you? Let’s explore the concept of metadata. We’ll consider some examples and see how you can handle this information. What Is Metadata? Metadata means “data about data”. It refers to information kept about another piece of data, which doesn’t have much use on its own since it exists to provide a clearer picture about the original file. You might have heard the term “meta” used in other concepts to refer to something self-referential. For example, a meta movie might explicitly tell the audience that they are watching a movie. Metadata is a similar concept. With data defined as “information in digital form that can be transmitted or processed”, metadata is a layer of additional data tied to that core data. It helps you understand what a file is, but isn’t the file itself. Examples of Metadata You’ve likely come across metadata without realizing that’s what it was. Here are a few bits of metadata you’ll see on common file types: - For a Word document, metadata can include the author, file size, the date and time of its creation, and the date the file was last modified. - For a music file, its metadata might tell you the artist, album, track length, year of release, and genre. - For a photo, metadata holds info about the camera used to take the photo, its resolution, file type, and the location where the image was taken. - For a video, metadata commonly includes the video format, length, and resolution. Metadata is also be divided into types, such as the following: - Descriptive: Metadata that better explains the original resource. The majority of what we mentioned above falls into this. - Sometimes, technical metadata is used to separate descriptive info (like the author) from more technical info (like the last modified date). - Preservation (or Structural): Metadata that describes how a file relates to other files. For example, this might keep track of where a file lies in a folder hierarchy. - Administrative: Metadata to keep track of who can interact with the file. For instance, there may be permissions detailing who can see and edit the file. Additionally, this can be metadata that tells your OS how to open and run the file. Depending on the file, metadata might be stored as part of the file itself, or as a separate file that points to the original. How to View Metadata in Windows If this all sounds too theoretical, you’ll be happy to know that you can look at the metadata for any file on your computer. To do this, open File Explorer, right-click on a file, and choose Properties. In the resulting window, open the Details tab to see the metadata included with that item. As you can see in the below example of a photo, the metadata shows the resolution of the image, the camera that took it, the exposure time, when it was created, and more. If you want to remove this data from the file, choose Remove Properties and Personal Information. The amount of metadata for any given file will depend on how it was created. For example, if you buy an MP3 album from Amazon, the files should come with all the correct metadata. In comparison, an audio file you created with a voice recorder app might not have much metadata beyond the file type and length. This data is useful for categorizing your files based on many kinds of information. If you’ve ever sorted the files in a folder by their “last edited” date or searched for photos above a certain size to delete them, you’ve benefited from metadata. Metadata Collection and Privacy We’ve focused on metadata for files here, since that’s the most relevant to normal users. But there are all kinds of metadata at play elsewhere in the digital world: databases use metadata to classify what’s stored, while social media and streaming services use metadata to better figure out what you’re interested in. As discussed, metadata isn’t of much use on its own, like how reading the back of a Blu-ray movie box tells you info about the movie but doesn’t let you watch it. However, metadata can become a privacy concern when you share it unknowingly, or when companies collect it en masse to learn more about you. For instance, if you uploaded a photo to a service like Flickr and didn’t realize that it contained the GPS coordinates of the image, that could expose your location. Part of using the modern internet is the massive amounts of information companies collect by tracking everything you do online. While you can’t stop this entirely, some services do let you disable certain types of collection. Metadata: Files Upon Files Metadata is useful for categorizing files in all sorts of ways, but it can have darker implications too. Before sharing a file, it’s wise to check the metadata included to be sure it’s not more than you expect. And if a company lets you limit the data they collect about you, it’s a good idea to turn off this tracking. To get started, we’ve shown how to manage the data Google stores on your account.
<urn:uuid:44dc019c-3dd2-4037-8017-287cbb2472b6>
CC-MAIN-2024-38
https://www.next7it.com/insights/what-is-metadata/
2024-09-10T23:51:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00630.warc.gz
en
0.923008
1,143
3.78125
4
Tomorrow Water says the US could develop between 900 and 1,500 data centers at waste water treatment plants by 2032. The water treatment company, a subsidiary of BKT of Korea, says data centers can be built cost-effectively at water resource recovery facilities (WRRFs) with a capacity of over 10 million gallons per day, by replacing large traditional treatment tanks with smaller biofiltration system, freeing space to build a data center, which can benefit from the cooling potential of the treatment process. There will be some 900 of these in the US by the year 2032, according to a survey by the US EPA, Tomorrow Water says. Tomorrow Water has launched the Co-Flow Initiative, a campaign to place data centers at US treatment plants, and has filed for a patent for the idea. BKT has a flagship water resource recovery facility (WRRF) in Jungnang, Seoul, South Korea, which uses Tomorrow's BBF Proteus biofiltration system instead of sedimentation tanks. At Jungnang, the recovered land is used for a museum and a park, but the Tomorrow Water subsidiary wants to focus on data centers - and sees a need for them in urban land cleared by downsizing sewage plants. Tomorrow Water's CEO, E.F. Kim said: "If the WRRF is retrofitted with BBF Proteus, a significant amount of physical space can be freed up for other beneficial uses. We believe that the ideal approach should be to focus initially on replacing the WRRF's enormous primary clarifiers to generate space for a data center. Over 90 percent of the world's wastewater treatment plants have traditional primary clarifiers that gravity settle the influent within two to three hours. By contrast, the BBF Proteus advanced primary treatment systems do the same job in less than 30 minutes, hence the smaller footprint." The biofiltration system can actually use waste heat from the data center making the process more efficient, says Tomorrow Water. The WRRF generates biogas from the decomposition of waste, which can generate electricity on-site. "As our society moves toward digital transformation, data processing demand becomes crucial in all aspects, requiring more data centers near highly populated urban areas," said Kim. "These data centers will typically consume more fossil energy, which would aggravate the already worsening global warming situation. By having the data center in the WRRF, cooling will become easier, saving energy. Furthermore, BBF Proteus will divert more wastewater primary solids to biogas production, producing more renewable energy while reducing the aeration energy consumption in the wastewater secondary treatment step." As well as Jungnang, BBF Proteus has now been deployed at the Seonam Wastewater Treatment Center, Seoul - the largest WRRF in Asia - reclaiming valuable space in Seoul's city center.
<urn:uuid:d6437125-a587-46c9-9666-6009ca9bcd23>
CC-MAIN-2024-38
https://www.datacenterdynamics.com/en/news/tomorrow-water-says-us-could-build-up-to-1500-data-centers-at-sewage-plants/
2024-09-12T06:31:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00530.warc.gz
en
0.929562
581
2.8125
3
How IoT Is Transforming The HVAC Industry HVAC (Heating, Ventilation, and Air Conditioning) enables clean and quality circulation of air within the building. The system modifies the temperature as per the need of the climate or season, having a significant impact on the operational cost and reducing potential waste in the buildings. Introducing the fast-evolving Internet of Things technologies brings greater degrees of innovation, function, and meaningful ROI to the HVAC industry. Embedding intelligence and connectivity into these systems empowers providers and manufacturers to make better use of their assets and meet the energy-efficiency benchmark. The role of IoT in the HVAC industry The key benefit that IoT brings to HVAC systems is operational visibility. It analyzes and collects data to automate, maintain, plan and optimize systems, driving the industry towards more cost-effective and advanced efficiency goals. The collected data is used to monitor and improve appliances reducing energy consumption by large margins. IoT makes HVAC systems smart. It can easily track vibrations with motion sensors, along with airflow, pollutants, etc. HVAC contractors can continuously monitor building occupancy and turn down the power usage if no movements are detected for a prolonged period. It can also incorporate third-party data sources, such as weather feeds, learning tenants’ preferences, and adjusting temperature as per their comfort needs. IoT trends transforming the HVAC industry - Data Analytics driving Optimal Performance Heating, Ventilating, and Air Conditioning (HVAC) contribute up to 50 percent of a building’s total power usage. So it’s logical to look for ways to optimize HVAC performance and generate efficient and sustainable outcomes. Data analytics is of the most effective ways to achieve improvements in manufacturing plant facility and equipment efficiencies. An IoT-enabled HVAC system allows seamless data collection, filtering, and sharing. This collected data helps the building managers reduce energy costs, device outages, and occupant discomfort. The systems are connected to the analytics platform enabling preventive maintenance and continuous optimization. Experts can figure out the maintenance needs of the equipment ahead of time, avoiding costly downtime and expected interruption to operations. - Automatically Reduce Power Consumption Smart HVAC systems embedded with intelligence can easily identify the most energy-efficient configuration automatically. They can control the amount of conditioned, heated, or cooled air running through the building and optimize other parameters such as CO2 levels, temperature, humidity, and occupancy. This means the smart controls can optimize the parameters and quantify the benefits of various configurations in terms of energy usage. The occupancy sensors can also improve the efficiency of cross-systems and help save power. It can modulate the amount of airflow in an area without starving or over-ventilating the other areas. Besides that, connected HVAC platforms can give users an insight into power consumption and CO2 emission statistics. In the Pacific Northwest National Laboratory (PNNL) case, the institution equipped rooftop units (RTUs) with advanced controllers featuring a multi-speed fan, economizer, and ventilation controls. The evaluation found approximately 50% electricity savings for RTUs. - Remote Diagnostic and Predictive Maintenance The data collected from the smart sensors are used to monitor the condition and functionality of the HVAC system. It gives a complete insight into the health of the systems right from temperature settings to energy consumption. One such example is Daikin’s Intelligent Equipment system that provides real-time access to 150 data points on a rooftop HVAC unit or air-cooled chiller. Daikin’s cloud-based management platform allows system managers to trend HVAC health and performance throughout the systems life cycle. The collected data sets also help to run remote diagnostics irrespective of device, time, and location. Managers can easily detect a change or unusual behavior in the systems and send notifications so that the situation can be addressed immediately. This not only saves time and money but also energy and productivity, extending the life of the HVAC system. - Greater Control, Adaptability, and Comfort Other than efficiency and remote diagnostics, smart HVAC systems can offer a superior level of control. Emerson Climate Technologies’ Sensi Touch Smart thermostat allows users to create temperature schedules, control multiple thermostats and receive humidity and extreme temperature change alerts from a single app. Sensors can measure temperature, humidity, and airflow and combine them with external parameters such as weather forecasts to determine optimal settings for the systems. This ensures high performance of the HVAC system and comfort for the users. This proactive approach helps smart HVACs to create a comfortable environment rather than reacting to changes after they occur. IoT is the future of HVAC Systems IoT brings new and exciting opportunities to the energy sector. More and more businesses are switching to smart HVAC systems to compete in the changing commercial and energy-demanding market. Consumers are looking to reduce energy costs, protect temperature-sensitive materials and maintain workplace productivity intact. At iLink, our IoT expert team helps you unlock the true potential of IoT for your HVAC systems. Our future proven solutions can help you launch IoT-enabled HVAC systems or update your legacy equipment. Businesses can gain complete control of their systems and visibility into their performance and environmental adaptability. To learn more about how our IoT infrastructure can help you ensure energy efficiency, get in touch with our experts.
<urn:uuid:21f0b325-e3f1-4448-b23a-ae72953d7ff3>
CC-MAIN-2024-38
https://www.ilink-digital.com/insights/blog/how-iot-is-transforming-the-hvac-industry/
2024-09-12T06:56:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00530.warc.gz
en
0.905844
1,136
2.515625
3
What is ESG? ESG compliance drives a company to operate with conscious regard towards the environment, social issues and the ideal way to govern their organization. It is also used as a set of standards for a company’s behavior that socially conscious investors can use to screen potential investments. The use of ESG Indices provides clients with assessments of investments based on ESG performance. ESG compliance measures a company’s position against best practices for Environment Social & Governance with the results tallied to produce an ESG score. There are many ways to measure a company’s ESG rating with different standards provided by several agencies. ESG describes a category of investing also referred to as “sustainable investing.” This is a catch-all phrase for investments that aim to have a good long-term impact on society, the environment, and company performance. What are the requirements for ESG? Exact requirements vary according to the agency providing the standard, the relevant industry, and even the size of the company looking to comply. Examples of common requirements include: ENVIRONMENT | SOCIAL | GOVERNANCE | Corporate Climate Policies | Labor Standards | Processes and Policies | Energy Use | Workplace Safety | Internal Controls | Waste / Hazardous Waste | Human Rights | Anti Bribery and Corruption | Pollution | Community Relations | Executive Compensation | Natural Resource Conservation | Gender and Diversity | Whistleblower Schemes | Treatment of Animals | Data Protection and Privacy | Transparency | Preservation of Natural Habitats | Business Continuity Management Planning | | Greenhouse Gas Emissions | Businesses that place a high priority on sustainability as well as individuals seeking for socially conscious investment opportunities should pay attention to ESG issues. Organizations must develop an ESG program, raise awareness with an ESG rating, and meet KPIs important to investors with an eye toward the future. The Global Reporting Initiative (GRI) – The GRI standards are guidelines that assist with understanding, developing and communicating sustainability metrics. The framework can be downloaded from their website for free. GRI is an international and independent body and relies on voluntary disclosure, in the form of a report. The Sustainability Accounting Standards Board (SASB) – The SASB is a non-profit organization who have developed a global standard to enable you to identify, manage, communicate and report financial ESG sustainability to investors, in language that investors understand. Their “Materiality Map” identifies the financially material issues and explains the standards, via 77 industry-specific metrics. Because SASB is very specific, it works well alongside another framework, like GRI. International Integrated Reporting Council (IIRC) – The IIRC is a reporting standard, often used together with SASB. Its reporting framework can be used to report on ESG and was designed to drive sustainable development. The Workforce Disclosure Initiative (WDI) – This is created to help companies better communicate labor practices to stakeholders in an efficient way. WDI is starting to accept applications whereby companies can submit their ESG reports. The Task Force on Climate-Related Financial Disclosures (TCFD) – A group of non-profit organizations got together to form a task force that sets out to help organizations integrate information related to climate change in their financial reporting. It’s used across 32 countries by 374 companies. There really are a number of ESG frameworks to choose from and your choice will depend on your organization, the framework provider, and the disclosure requirements for your location. The Future of ESG Reporting In September 2020, five leading framework and standard-setting organizations—CDP, CDSB, GRI, IIRC and SASB—announced a shared vision for a comprehensive corporate reporting system that includes both financial accounting and sustainability disclosure, connected via integrated reporting. Why should you become ESG compliant? Besides actively contributing to preserving the environment for future generations, ESG compliance testifies to a company’s values and is emerging as a factor for long-term financial growth. Starting an ESG program can help businesses in the following five ways: - Advantage in the marketplace. - Reduced costs. - Lenders and investors will find it more appealing. - Supply Chain Opportunities - Talent Attraction and Retention. How to Achieve Compliance? Adherence to an ESG reporting framework needs to be approached strategically in order to gain the most value for your company. As with any compliance, preparation and a full ESG risk assessment of your environment, including processes and systems, is a comprehensive foundation and necessary precursor to remediating any issues and improving your ESG stance. The Centraleyes ESG assessment is built to assess universal ESG areas applicable across industries. It allows companies to identify the issues relevant to them, determine their position for each, and generate an ESG report showing the company’s ESG posture. The Centraleyes platform uses cutting edge automation to empower clients with remediation steps and allows them to easily and visibly analyze posture, make future plans and business decisions based on the outcomes. Centraleyes is an automated ESG compliance solution with pre-loaded frameworks to guide you through a comprehensive risk assessment and towards full ESG posture. Onboard in minutes and use our platform to assess, remediate and report, all powered by Centraleyes automation.
<urn:uuid:f3caa7e2-1657-4af5-bc03-966ae91d0ef4>
CC-MAIN-2024-38
https://www.centraleyes.com/esg/
2024-09-13T11:06:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651513.89/warc/CC-MAIN-20240913101949-20240913131949-00430.warc.gz
en
0.925038
1,119
2.578125
3
FOR IMMEDIATE RELEASE Texas Boiler Code Sioux Falls, SD – August 2024 Importance of Carbon Monoxide Detection for Life Safety Carbon monoxide (CO) is a colorless, odorless, and highly toxic gas that is produced when fuels such as natural gas, oil, and coal are burned. CO can accumulate in enclosed spaces such as boiler/mechanical rooms and can cause serious health issues or even death if not detected and addressed in time. Therefore, it is important to have reliable CO detection systems in place in these areas to ensure the safety of workers and the public. In this white paper, we will discuss the importance of carbon monoxide detection in boiler/mechanical rooms, the significance of complying with UL 2075, and reference the Texas Code: 16 Texas Administrative Code Chapter 65. Importance of CO Detection in Boiler/Mechanical Rooms Boiler/mechanical rooms are essential to many commercial and industrial buildings, housing the equipment that provides heat, ventilation, and air conditioning. These rooms are typically enclosed and have limited ventilation, which can lead to the accumulation of dangerous gases such as carbon monoxide. CO is produced when fuel is burned incompletely or improperly, which can occur in the boiler/mechanical rooms due to various factors, such as faulty equipment, lack of maintenance, or inadequate ventilation. The presence of CO can lead to headaches, nausea, dizziness, confusion, unconsciousness, and even death. Therefore, it is crucial to have effective CO detection systems in place to detect and alert workers and occupants of the presence of CO. Significance of UL 2075 Compliance UL 2075 is a safety standard developed by Underwriters Laboratories Inc. (UL) that outlines the requirements for CO detection equipment. It specifies the criteria for the design, performance, and testing of CO detection systems and the related equipment used in enclosed spaces such as boiler/mechanical rooms. Compliance with UL 2075 ensures that the CO detection equipment meets the necessary safety standards and is reliable in detecting the presence of CO. Non-compliance with UL 2075 can result in faulty equipment that may not detect CO, leading to a hazardous environment for workers and occupants. Texas Code: 16 Texas Administrative Code Chapter 65 The Texas Code: 16 Texas Administrative Code Chapter 65 outlines the rules and regulations for the installation and maintenance of fuel gas systems and appliances in Texas. It requires that fuel-burning equipment such as boilers and furnaces be equipped with approved CO detection devices that comply with UL 2075. It also mandates that the devices be tested and calibrated according to the manufacturer’s instructions and the applicable standards. Compliance with these regulations is essential to ensure the safety of workers and the public, and failure to comply may result in penalties and legal consequences. Carbon monoxide is a dangerous gas that can accumulate in boiler/mechanical rooms and pose a serious threat to the health and safety of workers and occupants. It is crucial to have reliable CO detection systems in place in these areas to detect and alert of the presence of CO. Compliance with UL 2075 and the Texas Code: 16 Texas Administrative Code Chapter 65 ensures that the CO detection equipment is designed, tested, and maintained according to the necessary safety standards. By adhering to these standards and regulations, we can ensure a safe working environment and protect the health and well-being of all those involved. The Macurco CM-6 carbon monoxide detector is ETL listed to UL 2075 standard and is a great solution for detecting carbon monoxide and engaging boiler shutdown, ventilation, and notification devices. For more information, please contact Macurco at 877-367-7891 or email us at [email protected]. About Macurco Gas Detection Macurco Gas Detection designs, develops, and manufactures a full set of fixed and portable gas detection monitors to protect workers, responders, and the community. Macurco has more than 50 years of proven gas detection experience in residential, commercial, and industrial gas monitoring. Macurco gas detection systems (HVAC, Fire & Security, AimSafety, and TracXP) are widely recognized by distributors and users for their high performance and consistent reliability. Macurco is based in Sioux Falls, South Dakota. Learn more at www.macurco.com. For more information about Macurco products, applications, or gases, please get in touch with Macurco at 877-367-7891 or email us at [email protected].
<urn:uuid:b5ca9c02-a735-43fa-b417-777988f8b8b0>
CC-MAIN-2024-38
https://macurco.com/texas-boiler-code/
2024-09-14T17:28:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00330.warc.gz
en
0.931202
924
2.765625
3
Ransomware isn’t just a nuisance. It’s big business, the linchpin of a flourishing underworld market that could surpass $265 billion by 2031. From healthcare to retail, this rapidly evolving threat has victimized industries the world over, authoring reams of horror stories along the way. Without further ado, let’s have a closer look at some real-world examples of ransomware in action. 1. An Epidemic Begins The origins of ransomware can be traced back to 1989, when an underdeveloped piece of malware wreaked havoc on a budding IT community. Designed by Dr. Joseph L. Popp, the infected software was primarily mailed out on a floppy disk to subscribers from an AIDS conference organized by the World Health Organization, purporting as a survey fielding questions on the deadly virus. Of course, the disk contained a payload designed to install an encryption tool on the recipient’s computer, making it the first recorded instance of ransomware, and one of the earliest examples of a Trojan as well. The so-called AIDS Trojan used a symmetrical form of encryption that prevented files from being executed, rather than locking them outright. Once encrypted, the unlucky recipients saw a message demanding that they pay a fee of $189 to renew their PC Cyborg software and ultimately unlock the system. The malware was fairly easy to remove, but in the infancy of the information age, panic won out. In an effort to get rid of the infection, some victims resorted to wiping their hard drives clean, resulting in years of lost data and productivity. While the AIDS Trojan didn’t have the complexity, reach, or impact of other threats, it laid the groundwork for the generations of more advanced ransomware that would inevitably follow. 2. Ransomware Goes Mainstream Though far from the first, WannaCry is often credited with kicking the ransomware trend into high gear. Whereas most attacks only compromise systems that interact with the delivery source, WannaCry illustrated a self-propagating nature typical of computer worms. As a result, the ransomware was able to create copies of itself and spread like wildfire across the affected network. Unfortunately, the initial attack in April, 2017 was only the beginning. By leveraging EternalBlue, an exploit developed by the NSA, WannaCry gained entry into an unpatched Windows computer located in Asia. Within four days, the ransomware had ravaged IT systems in over 150 countries. While the attack only netted a $100,000 ransom, the impact is said to have reached billions in total damages. Coupled with thousands of lost files, hours in productivity losses created a miserable aftermath for many organizations. WannaCry is the quintessential case study on why those annoying Windows updates are absolutely vital to cyber security. 3. Hollywood Presbyterian Medical Center On February 5, 2016, actors of the malicious variety invaded Tinseltown. Hollywood Presbyterian Medical Center, which employs over 500 doctors servicing thousands of patients, was rocked by a ransomware strain identified as Locky. The attack is believed to have been the result of an employee unknowingly clicking an infected attachment sent in a phishing email, a common vehicle for malware. Upon execution, staff immediately lost access to the network, prompting the hospital to take its systems offline in attempt to neutralize the threat. But the damage had already been done. Locky worked fast, encrypting sensitive patient data and compromising equally vital medical functions, including processes related to brain scans, X-rays, and other testing procedures. The hospital was forced to reroute some patients to neighboring hospitals in the area, while staff was relegated to logging new admissions by hand. In the end, Hollywood Presbyterian Medical Center paid a reported $17,000 to restore its systems — on top of the public relations nightmare that accompanies such a massive data breach. 4. A Criminal Case of Irony In June, 2019, Eurofins, the largest provider of scientific testing and forensic services in the UK, was hit by a ransomware attack that brought its IT operations to a standstill. The firm spent roughly three weeks restoring order as the infection caused a log jam for more than 20,000 forensic samples. While an exact amount was never disclosed, Eurofins reportedly paid a ransom fee to recover its IT systems. Unfortunately, the sensitivity of the data at hand didn’t seem to leave many options. The Luxemborg-based laboratory processes forensics samples for over 70,000 criminal cases per year. 5. How Negligence Fueled a Multi-million Dollar Attack The most high-profile ransomware attack of 2021, the Colonial Pipeline incident caused a ripple effect across the East Coast, resulting in higher prices at the pump and widespread panic as motorists flocked to local gas stations. Cyber criminals used an inactive account that still had network privileges to gain entry to the pipeline that transports approximately 2.5 million barrels of fuel per day. After the attackers threatened to expose vital pieces of the near 100 gigabytes of stolen data, Colonial Pipeline forked over $4.4 million in ransom fees. 6. Hacking of Historic Proportions 2016 was a huge year for cyber criminals, particularly, those weaponizing a dangerous piece of ransomware dubbed Petya. Within its first year, the malware had already compromised millions of users worldwide. Believe it or not, Petya’s wave of destruction paled in comparison to its namesake successor — NotPetya, widely hailed as the worst example of malware in history. NotPetya adopted a national form of cyber warfare by targeting the Ukraine. Although the attack initially compromised most of the nation’s IT networks, the infection quickly spread across Europe and beyond. Unlike most ransomware, NotPetya did not offer an opportunity to pay a ransom and recover locked data. In essence, its method of encryption was irreversible. Meanwhile, other hackers, who likely had no involvement in the attack, attempted to cash in on the confusion, promising to sell the victims decryption keys for data that could not be decrypted. The ransomware was traced back to Sandworm, an elite hacking group allegedly employed by the Kremlin to deter influential companies from doing business with Ukraine, Russia’s long-time rival. Moreover, NotPetya resulted in massive losses for those affected. The list of victims includes notable companies such as leading snack manufacturer Mondelez, FedEx Euro subdivision TNT Express, and pharmaceutical juggernaut Merck. Cyber security analysts estimate the total damage around $10 billion. Ransomware poses a significant threat to professionals and consumers alike. Contact our customer service team to learn more about how DataLocker’s line of encrypted USB drives can help safeguard your IT infrastructure against the latest ransomware threats.
<urn:uuid:62131cdd-472d-4f47-b7c6-d8f435f41a84>
CC-MAIN-2024-38
https://datalocker.com/cyberthreats/ransomware/hacking-horror-stories-ransomware/
2024-09-17T06:07:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651739.72/warc/CC-MAIN-20240917040428-20240917070428-00130.warc.gz
en
0.949392
1,373
3
3
Prefix Lists - Operators When using Prefix Lists to filter or match particular routes, the following operators are used: - le - Less Than or Equal To - ge - Greater Than or Equal To The following context sensitive help from the Cisco IOS command line shows these options: R1(config)#ip pre R1(config)#ip prefix-list TEST permit 10.0.0.0/8 ? ge Minimum prefix length to be matched le Maximum prefix length to be matched R1(config)#ip prefix-list TEST permit 10.0.0.0/8 The names can be somewhat counterintuitive when you take into account that larger subnets are denoted by smaller prefix values, while smaller subnets are denoted by larger prefix values. In the context of IPv4 and subnet masks, a smaller slash notation (like /19) actually represents a larger network because it includes more IP addresses. Conversely, a larger CIDR notation (like /21) represents a smaller network with fewer IP addresses. The “le” and “ge” operators used by the prefix lists always refer to “less than or equal to” and “greater than or equal to” in a numerical context. In other words, /20 is considered “ge” /19.
<urn:uuid:cc8491e4-5a49-4a0f-863b-ea242cc55221>
CC-MAIN-2024-38
https://notes.networklessons.com/prefix-lists-operators
2024-09-08T20:01:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00030.warc.gz
en
0.828538
281
2.796875
3
Google can quickly modify its ML models to find the most difficult-to-detect spam messages with the open-source machine learning framework. Google says that after advancing its machine learning models, it blocks 100 million more spam messages a day in Gmail. In particular, Google has used TensorFlow, its open source machine learning framework, to modify its spam detection functions more efficiently. This allows it to detect spam messages that are hardest to detect, such as image-based messages. Google used machine learning already to power its spam detection capabilities. And the tech company says the existing models helped to block more than 99.9 percent of spam, phishing and malware from reaching Gmail inboxes in conjunction with other protections. Spammers are still refining their techniques. Although spam email has been a problem for decades, it has become more prevalent in recent years, according to the security company F-Secure, as software exploits and vulnerabilities have become more secure. Algorithms for machine learning can identify patterns in spam messages that people can not catch. Google can train and experiment more efficiently with different machine learning models using TensorFlow. In addition to image-based spam messages, TensorFlow has helped Google detect emails containing hidden embedded content and messages from newly created domains that attempt to hide a low volume of spam messages in legitimate traffic. While Google emphasizes that security is one of the main sales points for Gmail and G Suite, there are of course still security challenges. A newly published report highlights how scammers use “dot accounts “from Gmail-a feature of Gmail addresses that ignore dot characters within Gmail usernames-for fraudulent activity.
<urn:uuid:f947168a-77b7-40a3-989e-7a33f47c7ab3>
CC-MAIN-2024-38
https://cybersguards.com/google-used-tensorflow-to-block-100-million-more-spam-messages/
2024-09-09T21:23:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00830.warc.gz
en
0.934106
333
2.890625
3
HUB AND SPOKE/ STAR NETWORK TOPOLOGY Network Topology refers to the physical or logical layout of a network. Hub and spoke or star topology is a site-to-site Wide Area Network (WAN) topology. In this type of topology, we have a central device, called the hub, that is connected to multiple other devices named as the spokes. Large enterprises have multiple business offices at different geographical locations globally. So, in that case we can use Hub and spoke topology, where business office (i.e. main office) act as a hub while other offices(branches) act as spokes. All the spoke sites are connected to each other via hub site. So, basically the network communication between any two spoke sites travel through the hub inevitably. - Hub and spoke topology can be used in a frame-relay network. - It is also used with other protocols e.g DMVPN. - The main advantage of hub and spoke technology is that it is cost effective. - It is relatively easy to set up and maintain. - WAN network topology may cause communication time lags. - WAN network topology also has redundancy issues. - Hub is a single point of failure, if the main office network fails, entire enterprise network communication may fail.
<urn:uuid:74d24df7-e789-4002-8366-4006b80cc7b4>
CC-MAIN-2024-38
https://networkinterview.com/hub-and-spoke-star-network-topology/
2024-09-13T15:33:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00530.warc.gz
en
0.946771
274
3.390625
3
Most readers here have likely heard or read various prognostications about the impending doom from the proliferation of poorly-secured “Internet of Things” or IoT devices. Loosely defined as any gadget or gizmo that connects to the Internet but which most consumers probably wouldn’t begin to know how to secure, IoT encompasses everything from security cameras, routers and digital video recorders to printers, wearable devices and “smart” lightbulbs. Throughout 2016 and 2017, attacks from massive botnets made up entirely of hacked IoT devices had many experts warning of a dire outlook for Internet security. But the future of IoT doesn’t have to be so bleak. Here’s a primer on minimizing the chances that your IoT things become a security liability for you or for the Internet at large.
<urn:uuid:28d0665e-091b-40c4-9cb2-abbdbb579077>
CC-MAIN-2024-38
https://krebsonsecurity.com/tag/shields-up/
2024-09-17T08:02:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00230.warc.gz
en
0.929717
168
2.5625
3
Quantum-Proof Cryptography: What Role Will It Play? Professor Alexander Ling on Why Companies Should Invest in Quantum-Proof Encryption NowCISOs need to begin investigating the use of quantum-proof cryptography to ensure security is maintained when extremely powerful quantum computers that can crack current encryption are implemented, says Professor Alexander Ling, principal investigator at the Center for Quantum Technologies in Singapore. A quantum-proof key cannot be guessed by a quantum computer, while encryption in its present form can be easily broken by quantum computers, Ling says in an interview with Information Security Media Group. Quantum-proof keys can be deployed in current computer systems, not just quantum computers, Ling explains. And because quantum computing could become commonplace soon, companies need to promptly invest in quantum-proof encryption, he contends. Quantum key distribution, a method for securely transmitting a secret key over distance that's based on the laws of physics, enables two parties to produce a shared random secret key known only to them, which can then be used to encrypt and decrypt messages. Ling says one of the key features of a quantum key is that it's easy to detect if it has been intercepted by a third party. "The process by which the photons are distributed can be shown to be tamper proof. Quantum particles are actually very sensitive to disturbances," Ling says. "So if there was an eavesdropper who was manipulating the photons in order to extract the information content, this eavesdropper would actually disturb the information that is carried by the photons." In this interview (see audio link below image), Ling also discusses: - Progress made in quantum key distribution; - Technological challenges in quantum-proof cryptography; - How quantum keys are being deployed. Ling is principal investigator at the Center for Quantum Technologies in Singapore. He leads a team that aims to bring quantum instruments out of the lab and into field deployment. His team has deployed instruments in diverse environments, ranging from Singapore's urban fiber networks to satellites in space. He formerly worked at the National Institute of Standards and Technology in the United States.
<urn:uuid:86bd09a6-5817-4deb-953a-548a203536a9>
CC-MAIN-2024-38
https://www.bankinfosecurity.com/interviews/quantum-proof-cryptography-what-role-will-play-i-4534
2024-09-17T08:42:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00230.warc.gz
en
0.94856
426
2.890625
3
Supercomputers keep getting faster and now the Oak Ridge National Laboratory has switched on a machine that makes 1.1 exaflops of performance. It's called Front... Supercomputers keep getting faster. Just a few years ago it took teraflops — or trillions of floating point operations per second — to make the list of the world’s fastest computers. Now it takes exaflops, quintillions of operations per second. And now the Oak Ridge National Laboratory has switched on a machine that makes 1.1 exaflops of performance. It’s called Frontier. The Federal Drive with Tom Temin talked about Frontier with Oak Ridge distinguished scientist and Frontier project officer, Scott Atchley. Tom Temin: Mr. Atchley, good to have you on. Join us on Oct. 1 and 2 for Federal News Network's Cyber Leader Exchange where we'll dive into how agencies are strengthening federal cyber capabilities. Scott Atchley: Good morning, Tom, I appreciate you having me on. Tom Temin: And just review for us some highlights about this super super computer. I guess it’s number one on the Top 500 list, making it the fastest in the world. Tell me how it supports Oak Ridge, what types of projects at Oak Ridge will this support? And maybe it’s networked into some of the other labs too, I imagine. Scott Atchley: Yeah. So Oak Ridge has a leadership computing facility. So this is one of two facilities within the Department of Energy that focus on what we call leadership computing. Leadership computing uses a large fraction of these large machines to run problems, to solve problems at a scale that you just can’t run anywhere else. So the users that come to Oak Ridge and to Argonne have problems that require large resources, or maybe a large amount of memory. Definitely fast networks. They’re trying to improve the resolution of their simulation and modeling, or as we’re seeing more and more using machine learning or deep learning as part of artificial intelligence. And they just need more resources that they can get anywhere else in the world. Tom Temin: And this machine is physically large, correct? How big is it? In terms of square footage? Scott Atchley: Yes, it’s about 400 meters square, about the size of a basketball court a little bit bigger than about a basketball court. It is similar in size to our previous machines, but just much, much faster. Tom Temin: And did contractors build this? Is it something that you designed at Oak Ridge? Or how does that work? How does it come to be? Scott Atchley: So with these large systems within the Department of Energy, we have a rigorous procurement process. And we will put out requests for proposals. And we’ll get proposals from multiple vendors, we’ll do a technical review, we then award one of those vendors the contract, and they will then start working on the machine. Now we tend to buy these multiple years in advance. So we’ve started deploying Frontier last year, pretty much September, October timeframe is when the hardware came in. We actually selected the vendor Cray back in 2018. And so that was to give them time, they had proposed new processors from AMD. And they gave them time to work out all of that technology, and also gave us time to prepare the machine room. So we had to add more power, we had to bring in more power, we had to bring in more cooling. The floor in there would have collapsed with this new machine because it’s so heavy. So we actually had to tear out the old floor and build a new raised floor for Frontier to handle the weight. Frontier is made up of 74 cabinets, each one of these cabinets is four foot by six foot a little bit smaller than a pickup truck bed, but weighs as much as two F150 pickups in that space. So very, very dense. Tom Temin: Got it. And did the chip shortage and worldwide supply chain affect the delivery and ability to build this on time at all? Scott Atchley: Oh, absolutely. We were in the preparation stage. And I went to visit the factory in May of last year. And we kept asking them, are you having any supply chain issues? And they said, well, some but not too bad. And when I got up there, they pulled me into a room and said we were having some issues. Here’s 150 parts we can’t get. And you’re dealing with a system that has billions of parts, billions of types of parts, not just a million parts total. And you only need to be short of one. And it doesn’t have to be an expensive processor. It can be a $2 power chip or a 50 cent screw. Any one of those will stop you from getting your system. And so yeah, it was a huge issue. Fortunately, HPE had bought Cray in the interim from when we awarded the contract to when they were building this system. And HPE had very good supply chains, they were able to reach out to many, many different companies to try to source components. They pulled off a heroic job of getting us the stuff; it did delay us. It probably delayed us about two months. But at that meeting in May, they told us they could delay us up to six months. So that’s how good of a job they did for us. So we really appreciate the effort that they did. Tom Temin: We’re speaking with Scott Atchley, he’s distinguished scientist and supercomputer Frontier project officer at the Oak Ridge National Laboratory. The processor chips, the AMDs, those are still manufactured in the United States, correct? And the memory is what is made overseas? Scott Atchley: It’s a little bit of both. So they’re designed in the U.S. but the leading computer fabrication facility or we just call it fab is located in Taiwan that’s TSMC. The other leading fabs are Samsung in South Korea and then Intel in the U.S. and so Intel is starting to talk about doing fab services for other companies. But up until this point, they’ve only manufactured their own hardware. So whether it’s NVIDIA or AMD, you know all the leading edge processes other than Intel go to TSMC. But interestingly, even right now, Intel is using TSMC for some of their components for the Aurora system at Argonne. Tom Temin: Right. So that’s why we’re gonna vote pretty soon to to subsidize them all? Scott Atchley: We definitely want the capability to fab these in the U.S. for various reasons, you know, geopolitical reasons. And we also want that workforce in the U.S. So absolutely. Tom Temin: And I think people may not realize that the chip itself represents a gigantic supply chain of equipment, gases, materials, that enable the fabrication of it. And so, you know, there’s a couple of billion dollars worth of investment just to make one wafer, I guess, and people may not realize how deeply this goes into the economy. Scott Atchley: Oh, absolutely. It’s a huge amount. And there’s ripple effects, if you can bring the fabs to the U.S. and we have some here, but bring more and particularly the leading edge fabs the U.S. the ripple effects be fantastic. Tom Temin: And in planning the installation of a machine like this, what about the programs, the applications, the programming that has to go? Is there some long term planning that people that want to use it eventually also have to do so that their code will run the way they hope it will? Scott Atchley: Absolutely. So as soon as we select the vendor, we set up a what we call the Center of Excellence. And that is a team of scientists and developers from the lab, but also with the vendor integrators, in this case, HPE, and then their component supplier, AMD. And so we have selected, you know, 12 or 14 applications that we want them to start working on. Because what you want to do, I mean, these machines are very expensive, when you turn that machine on, you want to be able to do science on day one. And so they start working on these applications and porting them to the new architecture. And then as the previous generation chips become available, they start running on those. And then when the early silicon becomes available for the final architecture, they start running there, and they start their final tuning and optimizing. This process starts as soon as we select that vendor. Tom Temin: And so it’s not necessarily the case that a given set of code for a application or a simulation or a visualization will necessarily run optimally on the faster hardware, you need to tweak your software to get the most out of the new hardware? Scott Atchley: Absolutely. So even if you’re buying from the same vendor, when we moved from Titan to Summit, which is our current production system, they both used NVIDIA GPUs. So the API didn’t change a whole lot, but the architecture of the GPUs changed quite a bit. And so you still have to adjust for the different ratios of memory capacity and memory bandwidth to the amount of processing power. And so that is a good part of the process is doing that optimization and tuning for that given architecture. Tom Temin: That’s an interesting point about supercomputers. It’s much more like the beginning of computing, in the sense that you need to write carefully to the hardware, as opposed to most business computing today where you’re just writing to an API. And you figure pretty much for most business applications, even AI, that the hardware is fast enough for whatever translation layers in between, actually do talk to the hardware. Scott Atchley: Absolutely. We’re trying to eke out as much performance as we can and the applications are running. We don’t use virtualization and all these other techniques that you can use to increase the usefulness of your hardware, we have a high demand, there’s a competitive process to get access to the machine, and you get an allocation of time. And so you want to make sure that time is as useful as possible. Think of it as a telescope, and you’re a scientist studying the stars, you want to be prepared, when your week comes up, and you get to go to that telescope, and it’s yours for that week, you don’t want to waste your time by being inefficient, which you do. So the same thing here, the users don’t have to physically be present, but they have to be able to remotely log into our system. When they’re on the machine, they want it to be as efficient as possible and get as much of that performance as they can. Tom Temin: And what are the power requirements for a machine like this? Do you have to call up the Tennessee Valley Authority and say, hey, we’re going to turn it on? Scott Atchley: That’s a great question. So when we were doing some of our benchmark runs to help shake the system out, you’re running various applications, but the one that we use the most is the HPL, or high performance LINPACK application. That’s the one that’s used to rank the systems on the top 500 list, but it’s a fantastic tool to help you, you know, debug the machine and find the marginal hardware and replace it with better hardware. And so I was watching the power as our teams were submitting jobs using the whole machine and you would see a spike from the baseline power to the maximum power, which was a 15 megawatt increase in five seconds. And you know, the job would run a little bit and then you’d have a node crash, it would die and they would do it again. And so over and over, we were throwing 15 megawatts on the machine and then it would, you know, finish or crash, and then that would go away instantaneously. And I’m thinking, we’re going to get that phone call from TVA, and it’s not going to be a good one. It didn’t come. And I actually know somebody that works at TVA, and I just called him up. I said, hey, by the way, we’re doing this, is this causing you guys any problems? So well, I don’t know, let me let me check with headquarters, calls me back a couple hours later. And just laughs and says, no, we didn’t see a thing. I said, if you can’t see 15 megawatts coming and going, and in five seconds, you’ve got a lot of capacity. He says, yeah, we average about 24 gigawatts at any particular time. So yeah, that’s less than 1%. So to us, it’s huge. But fortunately, we don’t cause the lights to flicker here or anywhere else nearby. So it’s all good. Tom Temin: So plenty of juice left over for Dogpatch, you know, down there. Scott Atchley: Absolutely. We’re not going to slow down anybody’s Fortnite game for sure. Tom Temin: And just briefly, what is your job like day to day do you touch the machine and interact with it personally, are you just kind of more like looking at spreadsheets and power reports and schedules? Scott Atchley: So unfortunately, I attend meetings, that seems to be my major contribution to the Department of Energy, the machine is still undergoing stand up. And so we probably have a couple months to go maybe a little bit longer as we test the system and make sure that it’s ready to put users on. And so I’m not part of that team. I’m tracking what they do daily. So some of the meetings I attend are with our acceptance team, also with the vendor to make sure that we are addressing the issues that we’re discovering, so that we can get it ready for users. After the machine goes into production, I don’t really need to get on it. It’s really at that point dedicated to the users, we’re actually starting to think about its replacement. And so we actually have a mission needs statement into DOE that talks about, you know, we’ll need a machine after Frontier, you know, five years from now. And we were actually starting the process of thinking about the procurement of that machine. And so our expectation is that we’ll put out a request for proposals sometime next year. And by the end of next year, we’ll know what the architecture is that will replace Frontier. Tom Temin: But we’re still a few years from zettabyte computers, we have to get multiple exabytes at this point. Correct? Scott Atchley: It’s becoming more difficult, right? So, three machines ago. So back in 2008 timeframe, we were right at the petabytes level, so roughly two petabytes. Our next system Titan was deployed in about 2012. That was on the order of 20 petabytes. In 2017 or 2018, we deployed Summit, which was it’s 200 petabytes, and that’s still in production, it will stay in production for a couple more years. And so roughly an order of magnitude every five years, but that is becoming more difficult. You hear stories about the slowing of Moore’s law, you’ll hear people say the end of Moore’s law. And that’s that’s a little too pessimistic right now, but it is slowing so it may take us a little bit longer to get those powers of 10. So we are definitely a few years away from looking at zettaflops. Tom Temin: Scott actually is distinguished scientist and supercomputer Frontier project officer at the Oak Ridge National Laboratory. Thanks so much for joining me. Scott Atchley: Tom. Thank you very much. It was a pleasure and have a good day. Copyright © 2024 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area. Tom Temin is host of the Federal Drive and has been providing insight on federal technology and management issues for more than 30 years.
<urn:uuid:3b312be4-4cc2-4d9b-8ead-2eeb984fabbb>
CC-MAIN-2024-38
https://federalnewsnetwork.com/technology-main/2022/07/oak-ridge-national-laboratorys-supercomputer-makes-it-to-the-top-100-list/
2024-09-18T15:05:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00130.warc.gz
en
0.964188
3,517
2.71875
3
Boldly speaking, cyberspace is at a never-ending war with hackers. Businesses and their customers are being attacked more frequently than ever. And surprisingly, only 38% of the leading global organizations are equipped with ways and means to handle such attacks. The best way for organizations to win this war on cybersecurity is to be equipped with over-the-edge penetration testing tools. Penetration testing or pen testing can simply be referred to as a practice game of security assessment. Here, the security experts rigorously assess every nook and corner of the IT infrastructure with the aim of discovering vulnerabilities before hackers. Now, the tools which aid penetration testing become really significant. These Penetration Testing and Vulnerability Assessment tools or VAPT tools play a remarkable role in ensuring the security of web-based or mobile-based applications throughout the world. What is Penetration testing? A penetration test is kind of a virtual cyberattack targeted at systems to check for underlying security vulnerabilities. During the test, pen-testers, as they are generally called, simulate an attack in a similar manner as any hacker would do and detect all the possible loopholes. The test can involve attempted attacks at specific system targets like APIs, servers, and other application components. The main goal is to uncover security vulnerabilities that make the entire infrastructure susceptible to attacks. After the end of the test, experts generate useful insights and use them to fine-tune the existing security systems and patch detected loopholes. Who Performs Penetration Test? Penetration tests are performed by network security experts known as pen-testers. And you can find such specialists in a software testing company. Their main goal is to find all the possible vulnerabilities across the target organization's security systems. Penetration testers are expected to drive useful insights through these tests and help security professionals of the target organization in patching all the discovered threats. Without a doubt, pen-testers possess a lot of creativity and technical expertise in matters related to security. What Are Penetration Testing Software Used For? Penetration testing software allows computer security experts to detect and extract optimum security vulnerabilities across all computer applications. These expert ethical hackers or white-hat hackers, facilitate this by simulating all real-world attacks by cybercriminals or black-hat hackers. Penetration testing allows businesses to explore from an attacker’s perspective to discover and combat all weaknesses across the environment. Protecting the data becomes easier as this creates awareness and eradicates all chances of damage. In effect, conducting elaborate penetration testing can make up for the need to hire security consultants to analyze the worst possibilities, exploring how the real criminals might act. Organizations use the results to make their apps more secure and safe. Security penetration testing work as software applications, which are used to check all network security threats. Comparisons simplify and allow enterprises to determine whether particular software is the right investment to make. In short, penetration testing tools make businesses secure, help to justify security investments and assure profitable decision-making across all levels. Read more about Mobile Application Penetration Testing Methodologies Here we have prepared a list of some of the best penetration testing tools for extensive security assessments: Best Pen Test Tools Appknox is considered one of the most reliable market solutions for penetration testing attempts to identify insecure business logic, security setting vulnerabilities, or other weaknesses that a threat actor could exploit. Critical factors like transmission of unencrypted passwords or password reuse are checked in real-time with the advanced Appknox automated penetration testing software solutions. 2. Kali Linux Widely regarded as one of the best open-source tools, Kali Linux is a Debian-based Linux distribution that may be described as the Swiss knife for the penetration testing community. This pen-testing operating system comes with around 600 different tools with tonnes of exhaustive security features. When it comes to SQL injection-related worries, the first option which comes into the minds of pen testers is sqlmap. This open-source VAPT audit tool efficiently detects SQL injection flaws and almost anything wrong with your database servers. Its powerful detection engine is capable of identifying and exploiting even the most far-fetched flaws in database management systems. Related topic- Penetration Testing vs Red Team: What is the Difference? Commonly known as Network Mapper, Nmap is the most preferred tool for port scanning. One of the most efficient and customizable penetration testing assessment tools, Nmap can effectively scan both large as well as small networks for threats. Nmap is generally used in the preliminary steps of thorough VAPT audits to find out which network ports are susceptible to serious threats. Metasploit is widely considered one of the leading penetration testing frameworks across the globe. Supported by Rapid7, Metasploit can be used on servers, networks, and applications as well. This tool has a basic command-line interface and works smoothly on Windows, Apple Mac OS, and Linux. 6. Burp Suite Burp Suite is widely used by security experts for the assessment of web-based applications. It intercepts web traffic between the client and web server by acting as an effective proxy tool and analyzes the responses and requests to carry out key security tests. Both licensed and open-source versions of this tool are available in the market. Aircrack-NG analyzes the vulnerabilities in WiFi networks by deploying an expansive collection of penetration testing assessment tools. This WiFi testing suite captures data packets of your WiFi network and exports them as text files for further analysis. It also carries out other functions like identifying fake access points, assessing driver capabilities and WiFi cards, and so on. WireShark is a penetration testing tool that is inherently utilized as a network protocol and packet analyzer. It supports a variety of useful protocols and is primarily used for detailed network and wireless traffic inspection. It also analyzes wireless over-the-air traffic for in-depth security assessment. It is also an open-source tool and can be used on Windows, Linux, Mac OS X, Solaris, etc. Nessus maybe that one-stop solution to all your IT infrastructure worries. It is a popular and paid VAPT audit tool that offers lightning-fast security scans. Some common, as well as exceptional vulnerabilities like open ports, configuration flaws, and password errors, could be fixed easily using Nessus. It can also perform detailed website scans, sensitive data searches, IP scans, and compliance checks. 10. OWASP Zed Attack Proxy (ZAP) OWASP’s Zed Attack Proxy or ZAP is a widely popular pen-testing tool for both web applications and mobile apps. This open-source tool is maintained by numerous international volunteers who regularly update it with new modules and add-ons. ZAP can also be used by experienced testers for manual security testing. Other high-end testing features like AJAX spidering, fuzzing, and WebSocket testing surely make it a go-to tool for detailed security scans. ZAP’s plugins also make it easy to integrate directly into the DevOps pipeline. Related topic: Mobile app security testing tools One of the best open-source pen-test tools for Android, Drozer not only supports actual Android devices for vulnerability assessment but emulators also. Developed by MWR InfoSecurity (now known as F-Secure Consulting), Drozer is trusted by many to identify and exploit security vulnerabilities in apps and mobile devices. Through automated testing, Drozer not only reduces the time taken for security assessment but also ensures that your organization is not exposed to any unacceptable levels of risk due to Android apps or devices. Quick Android Review Kit or QARK was developed by LinkedIn. This open-source mobile VAPT tool is widely used by security experts to identify security flaws in Java-based Android apps. This tool provides detailed descriptions of security flaws by deeply analyzing app source codes and setup files. QARK also generates dynamic ADB (Android Debug Bridge) commands to help in the validation of detected vulnerabilities. Mitmproxy is an open-source man-in-the-middle HTTP proxy employed for testing, debugging, and penetration testing. This SSL-capable tool can be used to inspect, intercept, replay, and modify HTTP web traffic and other protected protocols, and prevent man-in-the-middle attacks. It is often said that the best way to beat your opponents is to always be one step ahead of them. Penetration testing tools work in a similar way. They prepare your security systems by testing them on all fronts and cover all the loopholes so that hackers don’t find any.
<urn:uuid:583c6137-4f62-4a4f-99e3-c57a291b895a>
CC-MAIN-2024-38
https://www.appknox.com/blog/best-penetration-testing-tools
2024-09-18T14:12:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00130.warc.gz
en
0.925668
1,755
2.5625
3
Inside Quantum Technology’s “Inside Scoop:” Quantum and the Internet The advent of quantum computing has sparked immense excitement and curiosity among scientists, researchers, and tech enthusiasts. Quantum computers have the potential to revolutionize various industries and fields thanks to their ability to perform complex calculations at unimaginable speeds. However, one of the most intriguing aspects of quantum computing lies in its interaction with the internet and the promising future of the quantum internet. The quantum internet is a somewhat nebulous idea, but most people use the phrase to refer to a network of quantum computers that can communicate with each other (possibly through quantum entanglement), analogous to classical computers doing this via the internet today. In this article, we will explore how quantum computing connects with the conventional internet and delve into the transformative prospects of the quantum internet. Quantum Computing and the Conventional Internet Quantum computing and the conventional internet are fundamentally different entities. Classical computers, which power the internet as we know it, rely on bits, represented by 0s and 1s, to process and store information. Quantum computers, on the other hand, leverage quantum bits, or qubits, which can exist in multiple states simultaneously through the phenomenon of superposition. This unique attribute of qubits allows quantum computers to explore an exponentially larger solution space in specific problems, making them vastly more powerful than their classical counterparts for specific tasks. Thanks to their qubit sources, quantum computers are predicted to be able to process more complicated computing problems, which can lead to advancements in cybersecurity, optimization, and more. While quantum computers promise to solve many complex problems, they do not entirely replace classical computers. Instead, quantum computers can complement classical systems by handling specific operations where their advantages shine, such as cryptography, optimization, and quantum simulations. Quantum Key Distribution: Enhancing Security on the Web One of the most significant contributions of quantum computing to the conventional internet is in the field of cryptography. Quantum key distribution (QKD) is a groundbreaking application that uses the principles of quantum mechanics to establish secure communication channels between two parties. Conventional cryptographic techniques rely on the difficulty of solving mathematical problems, such as factorizing large numbers, to secure data. However, quantum computers could potentially break these cryptographic systems, posing a threat to data security. QKD employs the principles of quantum mechanics to create unbreakable cryptographic keys. By transmitting qubits over a quantum channel, QKD allows two parties to share secret keys while detecting any eavesdropping attempts. Even if an eavesdropper intercepts the qubits, the act of observation will disturb the qubits, alerting the parties involved. The Future Quantum Internet: Enabling Unprecedented Connectivity Beyond enhancing security, the internet’s future lies in the quantum internet’s development. The quantum internet is envisioned as a global network of interconnected quantum computers and devices, enabling quantum communication over long distances. Other devices may include things like quantum sensors or atomic clocks, which could help sync up quantum computers and enhance satellite navigation. At present, quantum communication is limited to short distances due to the delicate nature of qubits, which are easily susceptible to environmental interference. However, scientists are working on solutions to extend quantum communication to larger scales. This involves the development of quantum repeaters and other quantum technologies to maintain the coherence of qubits over long-haul distances. The potential of the quantum internet is tremendous. It promises secure communication fundamentally protected by the laws of quantum mechanics, enabling a new era of information exchange, quantum cloud computing, and distributed quantum processing. Quantum computing’s interaction with the internet is paving the way for the quantum internet—an interconnected network of quantum computers and devices with unprecedented capabilities. As quantum technologies advance, the vision of a quantum internet is edging closer to reality, promising secure communication, distributed quantum computing, and a plethora of revolutionary applications. The fusion of quantum computing and the internet holds the key to shaping a future where computation, communication, and security reach new frontiers. Kenna Hughes-Castleberry is a staff writer at Inside Quantum Technology and the Science Communicator at JILA (a partnership between the University of Colorado Boulder and NIST). Her writing beats include deep tech, quantum computing, and AI. Her work has been featured in Scientific American, New Scientist, Discover Magazine, Ars Technica, and more.
<urn:uuid:186b66c0-4a5f-461c-b657-951471a982ae>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/inside-quantum-technologys-inside-scoop-quantum-and-the-internet/
2024-09-18T14:18:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00130.warc.gz
en
0.89788
890
3.53125
4
What is Shellcode? Shellcode is a special type of code injected remotely which hackers use to exploit a variety of software vulnerabilities. It is so named because it typically spawns a command shell from which attackers can take control of the affected system. How to Recognize This Threat: You likely will not notice shellcode until you have noticed an attack on the computer. How to Prevent This Threat: The best way to avoid encountering shellcode on your network is to protect it with a strong firewall accompanied by security services. See your options from the top brands in the industry.
<urn:uuid:9ad00b2a-51d4-43c4-91b9-d8499e6d5527>
CC-MAIN-2024-38
https://www.firewalls.com/blog/security-terms/shellcode/
2024-09-20T23:12:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00830.warc.gz
en
0.934904
116
2.6875
3
Edge data centers are compact facilities strategically located near user populations. Designed for reduced latency, they deliver cloud computing resources and cached content locally, enhancing user experience. Often connected to larger central data centers, these facilities play a crucial role in decentralized computing, optimizing data flow, and responsiveness. Key Characteristics of Edge Data Centers Acknowledging the nascent stage of edge data centers as a trend, professionals recognize flexibility in definitions. Different perspectives from various roles, industries, and priorities contribute to a diversified understanding. However, most edge computers share similar key characteristics, including the following: Local Presence and Remote Management: Edge data centers distinguish themselves by their local placement near the areas they serve. This deliberate proximity minimizes latency, ensuring swift responses to local demands. Simultaneously, these centers are characterized by remote management capabilities, allowing professionals to oversee and administer operations from a central location. In terms of physical attributes, edge data centers feature a compact design. While housing the same components as traditional data centers, they are meticulously packed into a much smaller footprint. This streamlined design is not only spatially efficient but also aligns with the need for agile deployment in diverse environments, ranging from smart cities to industrial settings. Integration into Larger Networks: An inherent feature of edge data centers is their role as integral components within a larger network. Rather than operating in isolation, an edge data center is part of a complex network that includes a central enterprise data center. This interconnectedness ensures seamless collaboration and efficient data flow, acknowledging the role of edge data centers as contributors to a comprehensive data processing ecosystem. Edge data centers house mission-critical data, applications, and services for edge-based processing and storage. This mission-critical functionality positions edge data centers at the forefront of scenarios demanding real-time decision-making, such as IoT deployments and autonomous systems. Use Cases of Edge Computing Edge computing has found widespread application across various industries, offering solutions to challenges related to latency, bandwidth, and real-time processing. Here are some prominent use cases of edge computing: - Smart Cities: Edge data centers are crucial in smart city initiatives, processing data from IoT devices, sensors, and surveillance systems locally. This enables real-time monitoring and management of traffic, waste, energy, and other urban services, contributing to more efficient and sustainable city operations. - Industrial IoT (IIoT): In industrial settings, edge computing process data from sensors and machines on the factory floor, facilitating real-time monitoring, predictive maintenance, and optimization of manufacturing processes for increased efficiency and reduced downtime. - Retail Optimization: Edge data centers are employed in the retail sector for applications like inventory management, cashierless checkout systems, and personalized customer experiences. Processing data locally enhances in-store operations, providing a seamless and responsive shopping experience for customers. - Autonomous Vehicles: Edge computing process data from sensors, cameras, and other sources locally, enabling quick decision-making for navigation, obstacle detection, and overall vehicle safety. - Healthcare Applications: In healthcare, edge computing are utilized for real-time processing of data from medical devices, wearable technologies, and patient monitoring systems. This enables timely decision-making, supports remote patient monitoring, and enhances the overall efficiency of healthcare services. Impact on Existing Centralized Data Center Models The impact of edge data centers on existing data center models is transformative, introducing new paradigms for processing data, reducing latency, and addressing the needs of emerging applications. While centralized data centers continue to play a vital role, the integration of edge data centers creates a more flexible and responsive computing ecosystem. Organizations must adapt their strategies to embrace the benefits of both centralized and edge computing for optimal performance and efficiency. In conclusion, edge data centers play a pivotal role in shaping the future of data management by providing localized processing capabilities, reducing latency, and supporting a diverse range of applications across industries. As technology continues to advance, the significance of edge data centers is expected to grow, influencing the way organizations approach computing in the digital era. Related articles: What Is Edge Computing?
<urn:uuid:0d134809-ba78-4d0a-8f9d-67532cfe56fb>
CC-MAIN-2024-38
https://www.cables-solutions.com/category/data-center/edge-computing
2024-09-07T17:16:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00230.warc.gz
en
0.89737
829
2.796875
3
As the American economy grapples with enduring high inflation rates, despite aggressive monetary tightening by the Federal Reserve, analysts are shining a spotlight on the role of the federal budget deficit. Historically, the relationship between fiscal overspending and inflation has been subject to extensive economic debate. The current situation, where consumer prices continue to climb in the face of interest rate hikes, presents a compelling case study. The heart of the matter lies in the federal government’s fiscal approach, which has involved substantial borrowing and expenditure. Critics argue that injecting borrowed funds into an already heated economy exacerbates inflationary pressures. This is because it increases demand for goods and services, intensifying competition for finite resources and, therefore, pushing up prices. Analyzing the Fiscal Dynamics Government borrowing affects the economy in diverse ways, and one key issue is how deficit spending can fuel inflation by increasing currency circulation without a corresponding rise in goods. Recently, the balance between the Federal Reserve’s interest rate hikes to slow inflation and the government’s stimulus spending has been contentious. Experts like Jim Tankersley argue that such fiscal policies undermine the Fed’s efforts to control prices by stimulating demand further. The International Monetary Fund warns that the U.S. budget deficit may be exerting considerable pressure on the current high inflation rates, complicating the Federal Reserve’s task of price stabilization. This complexity underscores the challenge in addressing inflation, as it results from the intricate interplay between fiscal and monetary policy.
<urn:uuid:2025b8dc-7e2c-44e2-810c-6f7e11a15f70>
CC-MAIN-2024-38
https://financecurated.com/economy/is-the-us-deficit-fueling-intractable-inflation/
2024-09-08T23:08:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00130.warc.gz
en
0.922916
300
2.984375
3
Artificial Intelligence (AI) has become the cornerstone of modern innovation, permeating diverse sectors of the economy and revolutionizing the way we live and work. Today, we stand at a crucial crossroads: one where the path we choose determines if AI fosters a brighter future or casts a long shadow of ethical quandaries. To embrace the former, we must equip ourselves with a moral compass – a comprehensive guide to developing and deploying AI with trust and responsibility at its core. This Entefy policy guide provides a practical framework for organizations dedicated to fostering ethical, trustworthy, and responsible AI. According to version 1.0 of its AI Risk Management Framework (AI RMF), the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) views trustworthy AI systems as those sharing a number of characteristics. Trustworthy AI systems are typically “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.” Further, validity and reliability are required criteria for trustworthiness while accountability and transparency are connected to all other characteristics. Naturally, trustworthy AI isn’t just about technology; it is intricately connected to data, organizational values, as well as the human element involved in designing, building, and managing such systems. The following principles can help guide every step of development and usage of AI applications and systems in your organization: 1. Fairness and Non-Discrimination Data represents the bloodline of AI. And regrettably, not all datasets are created equal. In many cases, bias in data can translate into bias in AI model behavior. Such biases can have legal or ethical implications in areas such as crime prediction, loan scoring, or job candidate assessment. Therefore, actively seek and utilize datasets that reflect the real world’s diversity and the tapestry of human experience. To promote fairness and avoid perpetuating historical inequalities, try to go beyond readily available data and invest in initiatives that collect data from underrepresented groups. Aside from using the appropriate datasets, employing fairness techniques in algorithms can shield against hidden biases. Techniques such as counterfactual fairness or data anonymization can help neutralize biases within the algorithms themselves, ensuring everyone is treated equally by AI models regardless of their background. Although these types of techniques represent positive steps forward, they are inherently limited since recognizing perfect fairness may not be achievable. Regular bias audits are also recommended to stay vigilant against unintended discrimination. These audits can be conducted by independent experts or specialized internal committees consisting of members who represent diverse perspectives. To be effective, such audits should include scrutinizing data sources, algorithms, and outputs, identifying potential biases, and recommending mitigation strategies. 2. Transparency and Explainability Building trust in AI requires transparency and explainability in how these intelligent systems make decisions. In many cases, advanced models using deep learning such as large language models (LLMs) are categorized as impenetrable black boxes. Black box AI is a type of artificial intelligence system that is so complex that its decision-making or internal processes cannot be easily explained by humans, thus making it challenging to assess how the outputs were created. This lack of transparency can erode trust and lead to poor decision-making. Promoting transparency and explainability in AI models is essential for responsible AI development. To whatever extent practicable, use interpretable models, explainable AI (XAI)— a set of tools and techniques that helps people understand and trust the output of machine learning algorithms—and modular architecture where the model is divided into smaller, more understandable components. Visual dashboards can present data trends and model behavior in easier-to-understand formats as well. Building trust in AI requires openness and inclusivity. Start with demystifying the field by inviting diverse voices and perspectives into the conversation. This means engaging with communities most likely to be impacted by AI, fostering public dialogue about its benefits and risks, and proactively addressing concerns. Transparency and explainability need to be part of continuous improvement to foster trust, allowing users to engage with AI as informed partners. Encourage user feedback on the clarity and effectiveness of explanations, continuously refining the efforts to make AI more understandable. 3. Privacy and Security AI’s dependence on sensitive or personal data raises significant concerns about privacy and security. Implementing robust data protection frameworks is crucial for ensuring user privacy and safeguarding against data breaches or criminal misuse. Machine learning models trained on private datasets can expose private information in surprising ways. It is not uncommon for AI models, including large language models (LLMs), to be trained on private datasets, which may include personally identifiable information (PII). Research has exposed cases where “an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.” Privacy-preserving machine learning (PPML) can be used to help in maintaining confidentiality of private and sensitive information. PPML is a collection of techniques that allow machine learning models to be trained and used without revealing the sensitive, private data that they were trained on. PPML practices, including data anonymization, differential privacy, and federated learning, among others, help protect identities and proprietary information while preserving valuable insights for analysis. In a world where AI holds the keys to intimate and, in some cases, critical data, strong encryption and access controls are vital in safeguarding user privacy. The regulatory landscape for data protection and security has been evolving over the years but now with the latest advances in machine learning, AI-specific regulations are taking center stage globally. The effectiveness of these regulations, however, depends on enforcement mechanisms and industry self-regulation. Collaborative efforts among governments, businesses, and researchers are crucial to ensure responsible AI development that respects data privacy and security. In addition to regulatory pressures, organizations are learning the benefits of providing clear and accessible privacy policies to their customers, employees, and other stakeholders, obtaining informed consent for data collection and usage, and offering mechanisms for users to access, rectify, or delete their data. Beyond technical, regulatory, or policy measures, organizations need to also build a culture of privacy. This involves continual employee training on security and data privacy best practices, conducting internal audits to identify and address vulnerabilities, and proactively communicating credible threats or data breaches to stakeholders. 4. Accountability and Human Oversight Even the best intended AI models can stray in terms of results or decisions. This is where human oversight is key, ensuring responsible AI at every stage. Clearly defined roles and responsibilities ensure that individuals are held accountable for ethical oversight, compliance, and adherence to established ethical standards throughout the AI lifecycle. Ethical review boards comprising multidisciplinary experts play a pivotal role in evaluating the ethical implications of AI projects. These boards provide invaluable insights, helping align initiatives with organizational values and responsible AI guidelines. Continual risk assessment plus maintaining comprehensive audit trails and documentation are equally important. In assessing the risks, consider not just technical implications but also potential social, environmental, and ethical impact of AI systems. Each organization can benefit from clear protocols for human intervention in AI decision-making. This involves establishing human-in-the-loop systems for critical decisions, setting thresholds for human intervention when certain parameters are met, or creating mechanisms for users to appeal or challenge AI decisions. 5. Safety and Reliability To truly harness the power of AI without unleashing its potential dangers, rigorous safety and reliability measures must be included in an organization’s AI policies and practices. These safeguards should be multifaceted, ensuring not just technical accuracy but also ethical integrity. Begin with stress testing and simulations of adversarial scenarios. Subject the AI systems to strenuous testing, including edge cases, unexpected inputs, and potential adversarial attacks. This stress testing identifies vulnerabilities and allows for implementation of safeguards. Build in fail-safe mechanisms that automatically intervene or shut down operations in case of critical errors. Consider redundancy mechanisms to maintain functionality even if individual components malfunction. In addition, actively monitor AI systems for potential issues, anomalies, or performance degradation. Conduct regular audits to assess their safety and reliability. Safety-critical applications, such as those in healthcare, transportation, or energy demand even stricter testing protocols and fail-safe mechanisms to prevent even the most unlikely mishaps. In cases of malfunctions, the AI system should degrade to a safe state in order to prevent harm. Continuous monitoring and data collection allow for better problem detection and resolution to unforeseen issues. This necessitates building AI systems that generate logs and provide insights into their internal processes, enabling developers to identify anomalies and intervene promptly. 6. Human Agency and Control As the field of machine intelligence evolves, the human-AI partnership grows stronger, yet more complex. Collaboration between people and intelligent machines can take many forms. AI can act as a tireless assistant, freeing up people’s time for more strategic or creative tasks. It can offer personalized recommendations or automate repetitive processes, enhancing overall efficiency. But the human element remains critical in providing context, judgment, and ethical considerations that AI, for now, still lacks. In creating trustworthy AI, intelligent machines should empower, not replace, human agency. The goal is to design systems that augment or strengthen human capabilities, not usurp them. Design AI systems where the user has clear, accessible mechanisms to override AI decisions or opt-out of its influence. This involves providing user interfaces that have clear parameters for human control or creating AI systems that actively solicit user input before making critical decisions. Embrace user-centered design to make AI interfaces intuitive and understandable. This provides users the ability to readily comprehend the reasoning behind AI recommendations and make informed decisions about whether to accept or override them. Ultimately, the relationship between humans and intelligent machines should be one of collaboration. AI remains a powerful tool at our service, empowering us to achieve more than we could alone while respecting our right to control and direct its actions. 7. Social and Environmental Impact The ripples of AI extend far beyond the technical realm. It promises to power society in unprecedented ways and solve some of humanity’s longest-lasting challenges in medicine, energy, manufacturing, sustainability. The development and usage of responsible AI requires adoption of a holistic view. One that considers the potential for social and environmental implications. This requires a proactive approach, considering not only the intended benefits but also the unintended consequences of deploying AI systems. Automation powered by AI could lead to significant job losses across various industries including manufacturing, transportation, media, legal, education, and finance. While new jobs may emerge in other sectors, the transition may be painful and disruptive for displaced workers and communities. Concerns arise about how to provide support and retraining for those affected, as well as ensuring equitable access to the new opportunities created by AI. As part of the policies for creating trustworthy AI, sustainability serves as the North Star, guiding us towards solutions that minimize environmental and social harm while promoting responsible resource management. Properly designed, AI can server as a powerful tool for combatting climate change, optimizing resource utilization, and fostering sustainable development. Conducting comprehensive impact assessments prior to AI deployment is imperative to gauge potential societal implications. Proactive measures to mitigate negative effects are necessary to ensure that AI advancements contribute positively to societal well-being. Remaining responsive to societal concerns and feedback is equally crucial. Organizations should demonstrate adaptability to evolving ethical standards and community needs, thereby fostering a culture of responsible AI usage. 8. Continuous Improvement The quest for responsible AI isn’t a destination, but a continuous journey. Embrace a culture of learning and improvement, constantly seeking new tools, techniques, and insights to refine practices. Collaboration becomes the fuel that drives teams to learn from experts, partner with diverse voices, and engage in open dialogue about responsible AI development. Sharing research findings, conducting public forums, and participating in industry initiatives are essential aspects of the trustworthy AI journey. Fostering an open and collaborative environment allows us to collectively learn from successes and failures, identify emerging challenges, and refine our understanding of responsible AI principles. Continuous improvement doesn’t always translate to rapid advancement. Sometimes it requires taking a step back, reassessing approaches, and making necessary adjustments to ensure that the organization’s AI endeavors remain aligned with ethical principles and social responsibility. Responsible AI development and usage at any organization requires team commitment, the willingness to embrace complex challenges, and agreement to continuous improvement as a foundational principle. By embedding these AI policy guidelines, your organization can build AI that isn’t only powerful, but also trustworthy, inclusive, and beneficial for all. Entefy is an enterprise AI software and hyperautomation company. Entefy’s patented, multisensory AI technology delivers on the promise of the intelligent enterprise, at unprecedented speed and scale. Entefy products and services help organizations transform their legacy systems and business processes—everything from knowledge management to workflows, supply chain logistics, cybersecurity, data privacy, customer engagement, quality assurance, forecasting, and more. Entefy’s customers vary in size from SMEs to large global public companies across multiple industries including financial services, healthcare, retail, and manufacturing.
<urn:uuid:d0a46b80-583e-4916-8976-e89fcb65f2c9>
CC-MAIN-2024-38
https://www.entefy.com/blog/the-indispensable-guide-to-effective-corporate-ai-policy/
2024-09-10T04:44:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00030.warc.gz
en
0.920252
2,698
2.5625
3
Static routing stands as a cornerstone in the architecture of network routing, offering a predefined pathway for data packets across networks. Unlike its dynamic counterpart, static routing relies on manually configured routes, providing a stable, predictable network environment. This guide embarks on an exploratory journey into static routing, elucidating its mechanisms, operational contexts, and strategic implementations. Whether you’re managing a small network or integrating static routes in larger, dynamic systems, understanding static routing is indispensable for network reliability and security. Table of Contents: - What is Static Routing? - How Static Routing Works - Advantages of Static Routing - Disadvantages of Static Routing - Practical Applications of Static Routing - Configuring Static Routing - Examples and Best Practices 1. What is Static Routing? 1.1 Definition and Overview Static routing represents a fundamental method within network routing, characterized by the manual configuration of routing tables by network administrators. This approach directs data packets through a network using predetermined paths, without the aid of algorithms to dynamically adjust routes based on network traffic or topology changes. Static routers, devices operating under this mechanism, rely on these fixed routing instructions to manage data transmission across networks. Static routing’s simplicity and predictability are its hallmark features, making it a preferred choice for smaller networks or scenarios where network stability and security are paramount. Unlike dynamic routing, static routes remain unchanged unless manually modified, providing a stable and secure routing framework but requiring meticulous setup and maintenance. 1.2 Static vs. Dynamic Routing: A Comparative Analysis Static routing and dynamic routing stand as contrasting paradigms in network path determination. While static routing depends on pre-configured routes established by network administrators, dynamic routing employs algorithms and protocols to automatically adjust routes in response to real-time network conditions. - Predictability and Control: Static routing offers unparalleled predictability and control, as routes are explicitly defined and unchanging without manual intervention. This can be advantageous in environments where route stability is critical. - Simplicity in Small Networks: For small networks with limited pathways, static routing provides a straightforward and resource-light routing solution. - Scalability and Flexibility: Dynamic routing excels in larger, more complex networks where scalability and flexibility are necessary. Automatic route adjustments facilitate efficient data flow, even in the face of network changes or failures. - Maintenance and Overhead: Static routing requires less computational overhead on routers but demands more administrative effort to maintain and update routing tables. In contrast, dynamic routing reduces administrative workload but places higher demands on router processing power and bandwidth. 2. How Static Routing Works 2.1 Understanding Routing Tables At the heart of static routing lies the routing table, a database stored within a router that contains routes to various network destinations. Each entry in a routing table specifies a network destination and the next hop or interface to use when forwarding data to that destination. In static routing, these entries are manually created by network administrators, defining explicit paths for packet forwarding. Routing tables include several pieces of vital information: - Destination Network: The network address of the destination subnet. - Subnet Mask: Identifies the portion of the IP address that represents the network and the part representing hosts on that network. - Next Hop: The address of the next router to which the packet should be sent on its way to the destination. - Interface: The specific router interface (e.g., eth0) through which packets should be forwarded to reach the next hop or destination. 2.2 The Role of Network Administrators in Static Routing Network administrators play a crucial role in the establishment and maintenance of static routing. Their responsibilities include: - Initial Configuration: Defining and inputting static routes into the routing tables of each router within the network. - Network Planning: Carefully planning routes to ensure optimal data flow and to avoid routing loops or black holes. - Maintenance and Updates: Manually updating routing tables to reflect network changes, additions, or optimizations. - Security and Access Control: Leveraging static routes to control access and enhance security by directing traffic through specific pathways. Static routing’s manual nature requires administrators to have a thorough understanding of the network’s architecture and the potential implications of routing decisions. This detailed planning and configuration effort underscores static routing’s suitability for smaller, more manageable networks where the benefits of stability and security outweigh the demands of manual maintenance. 3. Advantages of Static Routing 3.1 Predictability and Stability Static routing stands out for its predictability and stability within a network infrastructure. By relying on manually configured routes, static routing ensures that network traffic follows a predetermined path, eliminating the uncertainty associated with dynamic path selection. This predictability is crucial for network designs where traffic behavior needs to be tightly controlled, ensuring consistent and reliable data delivery paths. 3.2 Enhanced Security One of the significant benefits of static routing is the level of security it affords. Since routes are explicitly defined by network administrators, unauthorized access and routing updates are inherently prevented. This manual configuration acts as a barrier against routing-based attacks, making static routing an ideal choice for networks where security is a paramount concern, such as in financial institutions or government networks. 3.3 Low Resource Consumption Static routing is resource-efficient, requiring minimal processing power and memory. Unlike dynamic routing protocols that consume bandwidth to exchange routing information and require CPU resources to calculate optimal paths, static routes are straightforward and consume no additional network resources once set up. This low overhead makes static routing particularly suitable for devices with limited processing capabilities or networks where bandwidth conservation is essential. 4. Disadvantages of Static Routing 4.1 Scalability Challenges While static routing offers simplicity and security, it poses significant scalability challenges. As a network grows, the task of manually configuring routes on each router becomes increasingly complex and time-consuming. This difficulty is exacerbated in large networks or when frequent changes are required, making static routing less suitable for rapidly expanding or dynamic network environments. 4.2 Maintenance Overhead The manual nature of static routing introduces a considerable maintenance overhead. Network administrators must manually update routing tables whenever network changes occur, such as adding or removing subnets or changing network topologies. This can lead to human errors, potentially causing network outages or misconfigurations, especially in complex networks with numerous routes. 4.3 Limited Fault Tolerance Static routing lacks the inherent fault tolerance capabilities of dynamic routing protocols. Since static routes do not automatically adjust to network changes, such as link failures or congestion, they cannot reroute traffic around network issues. This limitation requires manual intervention to update routes in response to network changes, potentially leading to downtime until the issue is addressed. 5. Practical Applications of Static Routing 5.1 Use Cases in Small Networks Static routing is particularly advantageous in small network setups, where the network topology is simple and changes infrequently. In such environments, the ease of setup and minimal configuration requirements make static routing an efficient choice. Small businesses, home offices, and lab environments can benefit from the straightforward nature of static routes, ensuring reliable connectivity without the complexity of dynamic routing protocols. 5.2 Integrating Static Routes in Dynamic Environments In larger, dynamic networks that primarily use dynamic routing protocols, static routing still has a vital role. Static routes can be integrated to provide specific, unchanging paths for critical network traffic, such as between key servers and devices. This integration ensures that certain data flows remain unaffected by the potentially fluctuating decisions of dynamic routing algorithms, combining the best of both worlds for enhanced network performance and reliability. 5.3 Static Routing as a Backup Strategy Static routing can serve as an effective backup strategy in networks where redundancy and fault tolerance are critical. By configuring static routes as alternatives to dynamically learned paths, network administrators can ensure a fallback mechanism in case dynamic routes become unavailable. This application is especially useful in scenarios where maintaining constant network availability is paramount, providing a predefined path for rerouting traffic when needed. 6. Configuring Static Routing 6.1 On Windows Server Configuring static routes on Windows Server involves using the Routing and Remote Access Service (RRAS) or the command-line interface (CLI). Administrators can add static routes to direct traffic destined for specific subnets through predefined gateways, enhancing control over network traffic flow and improving security and performance for Windows-based networks. Configuring static routing on Windows involves using the route command via the Command Prompt: - Open Command Prompt as Administrator: Right-click on the Start menu, choose “Command Prompt (Admin)” or “Windows PowerShell (Admin)”. - View Current Routes: To display the existing routing table, enter: - Add a Static Route: For example, to add a route to the network with a subnet mask of255.255.255.0 via the gateway192.168.1.1 route add 192.168.2.0 mask 255.255.255.0 192.168.1.1 - Verify the Route: Check that the new route has been successfully added to the routing table: - Making the Route Persistent: By default, routes added with the command are not persistent and will be removed upon reboot. To make the route persistent across reboots, add the-p route -p add 192.168.2.0 mask 255.255.255.0 192.168.1.1 - Deleting a Route (if needed): If you need to remove a route, you can use: route delete 192.168.2.0 By following these examples, readers can learn how to configure static routing on both Linux and Windows systems, enhancing their understanding of network routing and management. 6.2 On Linux Systems Linux systems offer powerful tools for static routing configuration, such as the ip route command or older route command. These commands allow network administrators to define explicit routes for traffic, specifying the destination network, subnet mask, and next-hop address. For a Linux system, static routes can be added to direct traffic to different networks via specific gateways. Here’s a step-by-step example using the ip command, a modern replacement for the older route - Open the Terminal: Access the command line interface. - View Current Routes: To see the existing routing table, use: ip route show - Add a Static Route: Suppose you want to add a route to the network via the gateway192.168.1.1 . The command would be: sudo ip route add 192.168.2.0/24 via 192.168.1.1 - Verify the Route: Confirm that the new route has been added by viewing the routing table again: ip route show - Making the Route Persistent: Changes made with the ip route add command are temporary and will be lost after a reboot. To make the route persistent, you can add it to the network configuration file, which varies by distribution. For example, on Debian-based systems, you can add your static route to/etc/network/interfaces . - Apply Changes: If you’ve added the route to a configuration file for persistence, apply the changes by restarting the networking service or rebooting the system. 6.3 On Cisco Devices Cisco routers and switches support static routing through the Cisco IOS CLI. By using the ip route command, administrators can specify network destinations and the interfaces or next-hop IP addresses traffic should use. This capability enables precise control over how data traverses the network, optimizing performance and security for Cisco-based infrastructures. See: Add a static route using the Cisco IOS (Wikipedia). 7. Examples and Best Practices 7.1 Step-by-Step Configuration Guides Providing detailed, step-by-step guides for configuring static routes on various platforms ensures that network administrators can apply best practices to their unique network environments. These guides should cover the basics of adding, modifying, and removing static routes, as well as advanced topics like route summarization and redundancy. 7.2 Troubleshooting Common Issues Understanding common static routing issues, such as incorrect route entries, subnet mask mismatches, and next-hop accessibility problems, is crucial. Best practices include regular verification of routing tables, clear documentation of all static routes, and the use of diagnostic tools like ping to validate route functionality. Static routing, with its predictability, security advantages, and low resource consumption, remains a critical component of network design and management. Whether used in small networks, as part of a dynamic routing strategy, or as a backup solution, understanding its applications, configuration, and best practices is essential for network administrators. By balancing the strengths and limitations of static routing, professionals can optimize their network infrastructure for both performance and reliability. - “TCP/IP Guide” by Charles M. Kozierok - “Cisco CCNA Routing and Switching 200-125 Official Cert Guide” by Wendell Odom - Online Resources: - Cisco Learning Network - Linux Documentation Project
<urn:uuid:23a25abf-90ab-40d8-ad42-0e38327e7c7e>
CC-MAIN-2024-38
https://networkencyclopedia.com/static-routing/
2024-09-12T13:04:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00730.warc.gz
en
0.876082
2,732
3.421875
3
The Ultimate Containerization Guide: Docker, Kubernetes and More Containers are for everyone. Learn how to start with them When working with software, countless companies have bemoaned the fact that the virtual machines needed to run different parts of it increase their costs considerably. Recently, many have started using a method called "Containerization," which bundles the application together with all of its related configuration files, libraries, and dependencies. So why are huge companies, like IBM and Google, getting so excited over this approach? Simply put, containerization helps companies reduce overhead costs, makes the software more portable, and allows it to be scaled much more easily. In this article, we’ll go over: - Why container technology is becoming so popular - A brief history of containerization - An overview of Docker and Kubernetes - Who should use containers - The best container solutions Containerization vs. Virtualization Many people try to put containerization against virtualization as if they’re completely different approaches, which isn’t exactly the best way to look at things. Containerization actually solved all the shortcoming of full virtualization, which is why it’s gained favor from developers across the world. So, to get started, let’s take a look at a few differences between the two technologies: Once a software component called hypervisor is installed on the server, you can create virtual machines (VMs), which act as separate servers running their own operating systems. This approach usually ends up costing a lot of money since hypervisor and all of the VMs can often take up around 20% of a server’s performance. Containerization is virtualization of the OS kernel. While containers are running isolated processes, they’re sharing a common OS, binaries, and libraries. This doesn’t use nearly as much space and memory as the traditional VM, which reduces overhead costs considerably. Containers are much much quicker to create and activate because, unlike VMs, you don’t need to install an operating system every time you set one up. Implementation is also much easier since containers are isolated from their surrounding environment, making deploying them in a different environment much simpler. History of containers: 1970-now Many of us first started hearing about containers in 2013 when an up-and-coming open-source software called Docker appeared and brought containers into the public sphere. But, the idea of containers is actually much older than that. Back in the 70s, the operating system UNIX came out with a “chroot” (change root) command, which allowed users to isolate file systems. 20 years later, FreeBSD modified this idea to create a jail command, which let admins divide computer systems into smaller systems (or “jails”). In 2005, Solaris OS created containers that isolated applications so that they could run independently without potentially affecting each other. One year later, Google engineers created cgroups (control groups) technology, which was later absorbed into the Linux kernel, which then led to the emergence of the “light-weight” virtualization LXC. But, all of these solutions still needed people with extensive knowledge to manually configure them, so they didn’t gain much traction among experts. Docker was the first to solve this issue in 2013. They used Linux containers, added more functionalities (like portable container images), and made them easier to manage. And just like that, Docker’s containers became almost an overnight success. Docker: #1 in container technology While there are a few options out there to choose from, Dockers is, without a doubt, the most widespread container technology out there today. Today, 83% of all containers run on Docker. But what is Dockers? Dockers is a software that allows users to encapsulate apps, libraries, configurational files, binaries and other dependent files into a container so that they can be run in any environment without running into any issues. But Docker doesn’t stop there, the software also manages the container, from creation to the end of its lifecycle. While it is important to mention that there are other container technologies that exist (LXC, Open-VZ, CoreOS, etc.), they aren’t as well known as Docker because they require experts to set them up and maintain them. Docker Terms You Should Know V souvislosti s Dockerem často narazíte na celou řadu pojmů. Co přesně znamenají? - Dockerfile – textový soubor s instrukcemi k vytvoření Docker image. Specifikuje operační systém, na kterém bude běžet kontejner, jazyky, lokace, porty a další komponenty. - Docker image – komprimovaná, samostatná část softwaru vytvořená příkazy v Dockerfile. Je to v podstatě šablona (aplikace plus požadované knihovny a binární soubory) potřebná k vytvoření a spuštění Docker kontejneru. Images mohou být použité ke sdílení kontejnerizovaných aplikací. - Docker run – příkaz, který spouští kontejnery. Každý kontejner je instancí jednoho image. - Docker Hub – úložiště pro sdílení a management kontejnerů, kde najdete oficiální Docker images z open-source projektů i neoficiální images veřejnosti. Je ale možné pracovat i s lokálními Docker úložišti. - Docker Engine – jádro Dockeru, technologie na principu klient-server, která vytváří a provozuje kontejnery. Kubernetes and other orchestrators Managing containers manually on a single server is tough enough, but what if your containers are spread across hundreds of servers? Impossible, right? Not for an orchestrator like Kubernetes. An orchestrator is a tool that manages container networks, including the launching, maintaining, and scaling of containerized applications. Orchestrators have made the lives of many people significantly easier, especially microservices developers. They carry out tasks like making sure that server capacity is being used efficiently, launches services based on needs, stop old container versions and activates new ones, and more. There are quite a few different container orchestrator tools out there, but, just as with container technologies, there’s one that stands out from the pack–Kubernetes, an open-source orchestrator with a wide variety of options that can be customized to fit your needs. To show you why Kubernetes is the clear winner, let’s take a look at some alternatives. Docker’s orchestrator, Docker Swarm, is more simple than Kubernetes, but it was designed primarily for smaller container clusters. While Kubernetes takes a bit more work to deploy, it’s a much more comprehensive Docker-approved solution that gives you an easy to manage and resilient application infrastructure. Mesos, an Apache project, isn’t just an orchestration tool, it’s essentially a cloud OS that coordinates both containerized and non-containerized components. Many platforms can run simultaneously in Mesos, including Kuberenetes. Microservices and Containers Kontejnerové technologie bývají často zmiňovány v souvislosti s microservices. O co se jedná? Microservices představují způsob vývoje a provozování aplikací rozdělených na menší moduly, které spolu komunikují prostřednictvím API. Na rozdíl od velkých monolitických aplikací, jsou microservices snadné na údržbu, protože změna v jednom modulu se neprojeví v ostatních částech. Koncept microservices začal vznikat nezávisle na vývoji kontejnerů, ale právě díky kontejnerům je možné architekturu microservices v praxi efektivně využívat. Vývoj microservices v kontejnerech umožňují orchestrátory, které zvládají management obrovského množství kontejnerů a umožňují například i rolling update (při nasazení nových verzí aplikace udržuje repliky obou verzí pro případný rollback na předešlou verzi). Can containers help me? Now that we know that using containers can have quite a few benefits, but you might be wondering if this technology can help you. To make the answer simple: anyone who works with software can benefit from using container technology. Let’s just take a look at a few ways containers can help you: - Containers make software super portable. - Since the software is enclosed in a container, it can run on any machine without issue. - It’s more secure. Isolated software won’t have a negative impact on other systems. - Easily scalable software. Whether the number of users is increasing or decreasing, containers allow you to immediately react. Making Scaling Easy Škálování s kontejnery je velmi rychlé (mnohem rychlejší než vznik nového virtuálního stroje) a je tak možné reagovat okamžitě na velké výkyvy ve využívání aplikace. Kontejnery zároveň umožňují lépe využít zdroje serveru tím, že je aplikace rozdělená na menší části. Navíc můžeme některé mikroslužby naškálovat vícekrát, některé jen jednou, podle potřeby. Transition to containers While it is possible to encapsulate a giant monolithic application in one container, you won’t get all the benefits that containers have to offer. Containers use server’s resources most efficiently when apps are divided into smaller parts (especially since microservices might need to be scaled many times, and other portions might only need to be scaled once). There are a few ways you can break down monolithic applications, but unfortunately, no universal method exists. The more data dependencies there are, the more difficult it is to divide. Which container tool is best for me? Since both Docker and Kubernetes are open-source projects, you can start running containers on your own. But, keep in mind that transitioning large projects to containers can be a massive undertaking. If you don’t have the time or expertise to do it yourself, we recommend reaching out to an organization to set your solution up for you. In Master DC, we offer a few types of container setups you can choose from: - Virtual servers (VPS) with KVM virtualization — allows the user to install Docker and run own their containers - Managed Kubernetes cluster running on physical servers — ideal for larger projects since the Kuberenetes orchestrator requires at least 2-3 servers to operate - Managed Kubernetes cluster running completely on the cloud — ideal for smaller projects - Hybrid managed Kubernetes cluster — orchestrator runs in the cloud, containers run on dedicated physical servers The future of containers Everyone knows that containers are going to be influential in the technology of the near future, and now that large players like Google and IBM are placing their bets on container tools, we don’t see this going away anytime soon. In 2019, Turbomic report has shown that 26 % of IT companies have already started to use containerized applications, and that number is expected to double by 2021. Some companies have started using containers outside of servers. Linux recently launched Flatpak, a container-based software utility for software deployment and package management. With Flatpak, people can use applications that are isolated from the rest of the system, which strengthens security, makes updating easier, and discourages wasting resources collecting unnecessary data. Team Silverview also went all-in with container technology when they created Fedora, an experimental OS that distributes all software into containers. Want to make sure your company isn’t left behind? At Master DC, we can work with you to determine which container solution best fits your needs, and make the transition as smooth as possible.
<urn:uuid:fbda60d8-a178-4bbc-8768-f1e4152ec0cf>
CC-MAIN-2024-38
https://www.masterdc.com/blog/ultimate-containerization-guide/
2024-09-12T11:40:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00730.warc.gz
en
0.756072
3,009
2.640625
3
Today’s technology and business environments change so fast that solutions created by software projects can be obsolete by the time they’re delivered. Traditional approaches to enterprise IT aren’t designed to respond to the unprecedented speed and uncertainty. That’s why we build software through short iterations in close collaboration with our customers. While our project teams use best practices to ensure the software is developed the right way, customer acceptance tests at weekly demonstrations validate the software does the right things. The emerging solution evolves and improves through an ongoing cycle of team learning and customer feedback. Instead of merely mitigating the risk of change, we leverage unforeseen changes to create solutions that are better than what could have been envisioned at the start of a project. “The term “software” was first proposed by Alan Turing and used in this sense by John W. Tukey in 1957. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own.” – Wikipedia.org
<urn:uuid:f29d6ac8-a9ac-4839-a238-cb5a1b217998>
CC-MAIN-2024-38
https://www.barnonetech.com/software/
2024-09-14T23:45:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00530.warc.gz
en
0.930682
245
3.109375
3
Process industry is defined where chemical change takes place during the manufacturing process. Process industry consists of continuous and batch process segments. Key industry sectors that qualify as process industries include chemicals and petrochemicals, oil & gas, food & beverages, tobacco processing, coal, textile fabric processing, textile yarns, mineral and metal refining and extrusion, wood , minerals, pulp and paper, printing, publishing, pharmaceutical– bulk chemicals and formulation, water and wastewater, and other consumables. Methods of measurement for flow, pressure, level, and temperature have been around for a long time. This implies that sensors and transmitters have achieved a certain degree of penetration in almost every process industry, especially in industrialized regions such as North America and Western Europe. The challenge for the manufacturers is to achieve growth in mature markets. Process industry users are knowledgeable about the sensor technology and demand low-cost products. With limited technical innovation taking place in sensor technologies, product differentiation has become a critical challenge for the manufacturers. Refining the distribution strategies is one of the challenges that many manufacturers come across to achieve higher sales, while reducing sales overheads. Technology and Communication Trends Technological trends in the process industries sensors are witnessed in providing the most affordable, reliable, and safest way of communication from the field up to the system. Due to continuous process a lot of process data are generated which are used to monitor and control. To move the data to and fro, the most commonly used field buses in process industry are Foundation & Profibus PA at the systems level. At the device level below PLC level HART protocol is the most popular process. Most process plants such as refineries, integrated steel plants, and petrochemical plants are generally spread to large physical area. There is a large installed base of legacy systems which are wired. While upgrading these legacy systems the wired sensors are replaced by wireless sensors to ensure failsafe connectivity and two way data movement. Process industries are also looking at fiber optics as a potential solution in the retrofit of major plant sites. Integration of diagnostics with verification features has made the sensors smarter. These features are able to inform about the status of the instrument to the engineers. Adding a digital signal processor (DSP) capability is currently a common trend in the market, which results in faster detection of signals and diagnostics. Sensors are gradually becoming intelligent with predictive capability. The installation of intelligent sensors has enabled activation of real-time corrective in critical process applications. Key Networks Protocols Automation systems are complex as they are structured into many hierarchical levels with each level having communication which places different requirements on the communication network. The industrial network consists of functionality based levels starting from field level at the bottom of the hierarchy through control level to information level at the top. The field devices, sensors, and actuators communicate through buses. Actuators and sensors use buses to connect discrete devices with built in intelligence. Buses are designed to compress large data/information into bits and bytes to ensure uninterrupted flow of information. The effort is to reduce cost per node and at the same time ensure quality communication. The fieldbus networks with truly distributed control and intelligent devices are most common combination. Networks that are commonly used on devicebus and fieldbus include Profibus-DP, Foundation Fieldbus, DeviceNet, LonWorks, SDS, CANOpen and Interbus-S. Among these fieldbuses networks Profibus is the most deployed in process industry for control applications. These include its versions DP, FMS, and PA. The buses are networked with a wire (one/two) of multi-point input sensors, intelligent devices, sub-networks such as AS-I, and operator interfaces and variety of different valves for diverse applications. The traditional 4-20 mA had been widely used. Now for a long time 4-20 mA with HART continued to be in demand in instruments as there exists a large installed base of legacy systems. Foundation Fieldbus and Profibus PA digital communication protocol have strongly been penetrating into process industry. Over the next 3-5 years usage of Profibus PA and Foundation Fieldbus is expected to grow in greenfield investments/projects, driven by the new trend of reading out additional/advanced information for diagnostics, and condition monitoring from the field devices, which is hard to achieve by the HART protocol. Wireless seems to be a new frontier but yet to come as real industrial mass application. The wireless technology today is still largely limited to monitoring applications only. With the advent of IOT platform sensor are likely to play larger role in data mining from field devices to enable real-time remote monitoring and control of process plants. (i) 4-20 mA HART 4-20 mA with HART is very close to conventional 4-20 mA, and the user does not have to understand the software in depth. It does not give any advanced information. Ease of use and cost effective installation are its major advantages. It does not require training to use. (ii) FOUNDATION Fieldbus & Profibus PA Both FOUNDATION Fieldbus & Profibus PA provide additional information such as diagnostics and condition monitoring. They reduce life-cycle cost of production line, and the total cost of ownership of the plant. They allow integration of plant assets on a single plant automation system on digital communication networks. Both allow for the reporting of self diagnostics, calibration, and environmental conditions of field instruments without disturbing the plant control. Usage of PROFIBUS and Fieldbus Foundation field instruments requires a lot more training. Software knowledge is necessary to be able to use Profibus and Foundation Fieldbus instruments In the past, the usage of Profibus and FOUNDATION Fieldbus depended on the industry type. For instance, the chemical and the oil and gas industries were more inclined toward foundation fieldbus, whereas the water industry was more inclined toward Profibus. In the past, the usage of Profibus and FOUNDATION Fieldbus also varied by geographic regions. For instance, in Europe Profibus (a German standard driven mainly by Siemens) was preferred, and in America’s favored FOUNDATION Fieldbus (driven mainly by Emerson Process) was finding favor. This scenario is now changing as market participants are witnessing the usage of both FOUNDATION Fieldbus and Profibus in all types of process plants across all regions. Wireless and Fieldbus technologies are expected to achieve growth from greenfield projects.
<urn:uuid:f070ae1c-f854-4b29-b14c-e8c48a50cb21>
CC-MAIN-2024-38
https://www.frost.com/growth-opportunity-news/sensors-enable-efficient-network-protocols-for-monitoring-and-control-of-process-industry-applications/
2024-09-16T06:02:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00430.warc.gz
en
0.943326
1,314
2.53125
3
In Part 1 of this article, we discussed how you can route between VMs on same host using networks connected to “Internal” virtual switch. Now let’s look at how to route between VMs when your lab consists of more than one hosts. Obviously, what I am going to cover isn’t the only way to do this but is one of many. I can certainly go on an on about all other options but I am trying to show the simplest possible one. For this lab, we are going to assume a small network where you have two Hyper-V hosts connected via single switch. We aren’t going to be fancy with VLANs on the switch or dual connections on host. Single cable to each host and flat network separated only logically so you won’t be able to talk to VMs on different subnets without routing configured. I have setup Windows Server 2012 R2 hosts for this article. I have also created two VMs that will go on two networks. These are same VMs, except, they now reside on two separate hosts as well as on two separate network subnets. The VM in “New York” is called NY-S1. The VM in “Europe” is called EU-S1. Here’s what the IP addressing looks like: Subnet | VM | VM IP | Router IP | 172.16.111.0/24 | NY-S1 | 172.16.111.11 | 172.16.111.1 | 172.16.112.0/24 | EU-S1 | 172.16.112.11 | 172.16.112.1 | Staying with the theme of saving resources and using Hyper-V host as router, we are going to do the same here. Since we have two hosts, pick one that you will use as router. As we did in previous article, let’s configure our Windows Server 2012 R2 host first. To keep this simple, I will use elevated PowerShell and run the following: Set-ItemProperty -Path HKLM:\system\CurrentControlSet\services\Tcpip\Parameters -Name IpEnableRouter -Value 1 There is going to be no response from PowerShell except it will return you back to prompt. If you get something back, most likely it would be because you didn’t elevate PowerShell or you don’t have administrative permissions to edit that registry key. Since we are changing parameters for TCPIP service, the change won’t be effective until after a reboot. Go ahead and reboot your Hyper-V host now. Since we are trying to route between two hosts, we can’t use “internal” type of VMSwitch. MAke sure you have atleast one VMswitch of type “external” created and management OS is allowed to share the connection (shown in screenshot below). You will also find a corresponding NIC on your host which is named vEthernet… in my case it is “vEthernet (Intel(R) 82579LM Gigabit Network Connection – Virtual Switch)”. Let’s go ahead and add two IP addresses we need for routing to this interface. You can keep existing IP you may have for your lab network as long as the ranges don’t conflict. New-NetIPAddress -InterfaceAlias ‘vEthernet (Intel(R) 82579LM Gigabit Network Connection – Virtual Switch)’ -IPAddress 172.16.111.1 -PrefixLength 24 New-NetIPAddress -InterfaceAlias ‘vEthernet (Intel(R) 82579LM Gigabit Network Connection – Virtual Switch)’ -IPAddress 172.16.112.1 -PrefixLength 24 This will configure host interfaces with IP addresses that will become default gateway for VMs. HEre’s what your network properties would look like: Ignore that default gateway I have crossed out in the image as that is for my lab network and isn’t applicable to our scenario here. With host configuration out of the way we can now connect VMs to the external switch, it not connected already: Connect-VMNetworkAdapter -VMName NY-S1 –SwitchName ‘Intel(R) 82579LM Gigabit Network Connection – Virtual Switch’ Connect-VMNetworkAdapter -VMName EU-S1 -SwitchName ‘Intel(R) 82579LM Gigabit Network Connection – Virtual Switch’ Notice the name of VSwitch. I have identical host configuration so they seem to be the same switch name, you may have different names per host or even more readable names if you renamed your VSwitch. Last step is to configure VMs with their respective IP and default gateway. I am sure this isn’t something you need help with so go ahead and take care of that step. At this time, you should be able to ping EU VM from NY and vice versa. Keep in mind that default firewall rules on your VM may be blocking ICMP and you may get request timed out. If so, check your firewall configuration and allow ICMP or test using something else that is allowed by firewall rules. How’s that for a router that’s built-into your environment and doesn’t need an extra VM chewing up those valuable computer resources? Remember, all things discussed in this articles are for your LAB not for your production environment. Please use proper routing for that.
<urn:uuid:c0078ccf-9f8b-46b0-aa6a-c068e6652ee2>
CC-MAIN-2024-38
https://bhargavs.com/index.php/2013/10/21/routing-for-hyper-v-lab-part-2/
2024-09-17T10:55:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00330.warc.gz
en
0.91098
1,167
2.5625
3
Not all radio spectrums are equal. Sub 1GHz offers the best coverage profile; however, the amount of low band spectrum available is limited. Frequency range two (FR2), i.e. greater than 6GHz, offers a large amount of spectrum with a significantly wide bandwidth (up to 400MHz), but it offers limited coverage. In fact, it is an excellent radio channel for gigabit throughput, but the coverage is limited to hundreds of feet. C-band spectrum, which is part of frequency range one (FR1), and also called mid-band spectrum, offers a good compromise between coverage and high throughput. As part of 3GPP release 15, three bands n77, n78, and n79 were identified for 5G operation in the C-band, with a potential service bandwidth of up to 100 MHz. With 100 MHz of bandwidth, C-Band can truly enable the enhanced mobile broadband (eMBB) use case for 5G. One thing to note is that C-band offers only Time Division Duplexing (TDD). TDD delivers a full-duplex communication channel over a half-duplex communication link. This means both the transmitter and receiver use the same frequency but transmit and receive traffic at different times by using synchronized time intervals. Advances in digital signal processing and the computation speed of hardware allow for TDD operations, but it does offer some challenges. Let’s review the benefits of TDD and some of the timing and synchronization requirements to ensure it can deliver a similar quality of RF services as Frequency Division Duplexing (FDD). TDD turns out to be a more attractive option from the spectral efficiency point of view because it requires only an unpaired spectrum for the operation which is beneficial considering the scarcity of frequency resources. Also, physical layer features such as massive MIMO, beamforming, and precoding, that rely on channel state information (CSI) measurement in the uplink are more robust due to channel reciprocity. Reference: 5G Timing and Synchronization Handbook for TDD Deployment
<urn:uuid:42515c76-19e6-460f-aa8c-097975213a2b>
CC-MAIN-2024-38
https://moniem-tech.com/2022/03/06/why-is-c-band-spectrum-important-for-5g/
2024-09-17T12:55:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00330.warc.gz
en
0.933732
429
2.5625
3
Cybercriminals expand repertoire of tricks to avoid detection March was testament to the fact that cybercriminals are not averse to exploiting tragedies in order to spread malware, according to Kaspersky Lab. Scammers and malware writers used the devastating events in Japan to spread malicious links to their own versions of the “latest news”. Cybercriminals created malicious websites with content connected in some way to the disaster and sent out letters making emotional requests for money to be transferred to the message sender in order to help those who have suffered. March also saw cybercriminals use Java exploits as a weapon of choice. Of the five exploits to appear in the Top 20 malicious programs on the Internet in March, three of them were for vulnerabilities in Java. Malware writers were also surprisingly quick to react to announcements of new vulnerabilities. A good example of this is a vulnerability in Adobe Flash Player that allowed cybercriminals to gain control of a user’s computer. The vulnerability was announced by Adobe on 14 March and by the next day, Kaspersky Lab had already detected an exploit for it. Protection against antivirus programs Another notable trend was that the malevolent users behind HTML pages that are used in scams or to spread malware are constantly coming up with new ways to hide their creations from antivirus programs. In February cybercriminals were using CSS to protect scripts from being detected. Now, instead of CSS, they are using textarea tags on their malicious HTML pages. Cybercriminals use the tag as a container to store data that will later be used by the main script. For example, Trojan-Downloader.JS.Agent.fun at 9th position in the Top 20 rating of malicious programs on the Internet uses the data in the < textarea > tag to run other exploits. In addition, according to Kaspersky Security Network (KSN) statistics, malware writers are actively modifying the exploits they use in drive-by attacks in order to avoid detection. At the beginning of March, Kaspersky Lab’s experts detected infected versions of legitimate apps on Android Market. They contained root exploits that allow a malicious program to obtain root access on Android smartphones, giving full administrator-level access to the device’s operating system. As well as a root exploit, the malicious APK archive contained two other malicious components. One of them sent an XML file containing IMEI, IMSI and other device information to a remote server and awaited further instructions. The other component had Trojan-downloader functionality.
<urn:uuid:28b317fb-0775-4c6e-b0c1-72bd33753af3>
CC-MAIN-2024-38
https://www.helpnetsecurity.com/2011/05/05/cybercriminals-expand-repertoire-of-tricks-to-avoid-detection/
2024-09-18T19:36:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00230.warc.gz
en
0.941406
520
2.578125
3
Password hashing algorithms are used to store and securely protect user passwords. They are a vital part of the authentication process and must be implemented correctly to ensure the system's security. In this article, we will discuss the various password hashing algorithms, their pros and cons, and how they can be used to protect users' accounts. First, let's take a look at what a password-hashing algorithm is. It is a mathematical process that takes a plain text password and transforms it into an unintelligible string of characters. This string, or hash, is then stored in the database instead of the plain text password. This process is known as one-way encryption, as the original plain text password cannot be retrieved from the hash, making it impossible for hackers to access the user's account. The most common password hashing algorithms are PBKDF2, bcrypt, and script. - PBKDF2 (Password-Based Key Derivation Function 2) is a widely used algorithm that employs a salt to protect against brute force attacks. - Bcrypt is a more advanced version of PBKDF2 and uses a high iteration count to slow down brute-force attempts. - Scrypt is a memory-hard algorithm that requires a large amount of RAM and processing power to generate a hash. Each of these algorithms has its pros and cons. PBKDF2 is simple to implement but is considered to be less secure than more advanced algorithms. Bcrypt is more secure but is more resource-intensive. Scrypt is the most secure but is also the most resource-intensive. When it comes to security, it is essential to choose the most secure algorithm for your system. Generally, bcrypt is considered to be the best choice. It is highly secure and is also relatively simple to implement. Finally, it is essential to note that password-hashing algorithms are not foolproof. If an attacker can gain access to the database, they may still be able to crack the hashes and gain access to the user's accounts. To prevent this, it is essential to implement other security measures, such as two-factor authentication and strong passwords. In conclusion, password-hashing algorithms are essential to the authentication process. They provide an extra security layer and help protect user accounts from unauthorized access. When selecting a password hashing algorithm, it is crucial to choose one that is secure and simple to implement.
<urn:uuid:aa5a7f78-44f3-4116-add9-7856768c0027>
CC-MAIN-2024-38
https://guptadeepak.com/password-hashing-algorithms-101/
2024-09-07T19:39:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00330.warc.gz
en
0.938433
487
3.328125
3
CALL "C$SLEEP" USING NUM-SEC NUM-SEC Numeric or alphanumeric parameter | The number of seconds to sleep. This parameter is a an unsigned fixed-point numeric parameter, or an alphanumeric data item containing an unsigned fixed-point number. | This routine can be used to impose slight delays in loops. For example, you might want to introduce a delay in a loop that is waiting for a record to become unlocked. Calling C$SLEEP will allow the machine to execute other programs while you wait. The C$SLEEP routine is passed one argument. This argument is the number of seconds you want to pause. For example, to pause the program for five and a half seconds, you could use either of the following: CALL "C$SLEEP" USING 5.5 CALL "C$SLEEP" USING "5.5" The amount of time paused is only approximate. Depending on the granularity of the system clock and the current load on the machine, the time paused may actually be shorter or longer than the time requested. Typically, the time paused will be within one second or one-tenth of a second of the amount requested (unless the machine is excessively loaded). If the sleep duration is zero, this function does nothing. If the sleep duration is signed, this function generates a run-time system error.
<urn:uuid:a51236e7-5a65-403f-9382-2512734ceba1>
CC-MAIN-2024-38
https://www.microfocus.com/documentation/visual-cobol/vc60/EclWin/BKPPPPLIBRS088.html
2024-09-09T02:46:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00230.warc.gz
en
0.865679
294
2.828125
3
Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') Multiple SQL injection vulnerabilities in index.php in Pirates of The Caribbean in the E-Gold Game Series allow remote attackers to execute arbitrary SQL commands via the (1) x and (2) y parameters. CWE-89 - SQL Injection Structured Query Language (SQL) injection attacks are one of the most common types of vulnerabilities. They exploit weaknesses in vulnerable applications to gain unauthorized access to backend databases. This often occurs when an attacker enters unexpected SQL syntax in an input field. The resulting SQL statement behaves in the background in an unintended manner, which allows the possibility of unauthorized data retrieval, data modification, execution of database administration operations, and execution of commands on the operating system.
<urn:uuid:47188994-5ac4-49b3-ac27-6175745043f7>
CC-MAIN-2024-38
https://devhub.checkmarx.com/cve-details/cve-2009-3184/
2024-09-12T15:23:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00830.warc.gz
en
0.827808
157
2.90625
3
By the DynaSis Team Have you heard of “Shellshock”―the newest computer vulnerability to hit the news? If so, you may be wondering if your firm is at risk. Or, perhaps you heard that Shellshock doesn’t affect Windows devices, so you have dismissed it as a non-event for your office. In either case, we encourage you to read this alert. Discovered on September 12 and made public on September 24, Shellshock (also known as Bashdoor) is actually a family of bugs in a program called Bash. Written more than two decades ago, Bash is a “command shell” program―it interprets commands from users and other computers and relays them to the machine on which it is installed. Experts now believe that the bugs in Bash may have been introduced into the software code accidentally in 1992. Bash can run on devices and systems that use the Linux or UNIX operating systems or Apple OS X, but vulnerability doesn’t stop there. UNIX is deeply ingrained into the Internet, and experts estimate that as many as 70% of Internet-connected devices run Bash. It’s also used frequently in consumer electronics, from watches to cameras. Here are the takeaways you need to protect your firm. From a broader perspective, we find it deeply concerning that a software flaw could have existed for 22 years, undetected. It makes us wonder how many other “low-level” programs―perhaps that are also deeply ingrained in the Internet or other systems―have similar flaws. To learn more about Shellshock or to discuss proactive software updates, vulnerability assessments and/or software audits, fill out our inquiry form or give us a call at (770) 569-4600.
<urn:uuid:ac4f67a0-b437-451e-b44c-397653c8fa28>
CC-MAIN-2024-38
https://dynasis.com/blogs-articles/the-shellshock-bug-is-your-company-at-risk/
2024-09-20T03:14:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00230.warc.gz
en
0.952783
365
2.546875
3
Fintech is more than just a technological advancement—it’s a catalyst for inclusive financial growth, providing services to those traditionally left out of the banking system. It’s transforming the way we approach banking and financial transactions, making them more accessible, faster, and often more affordable. This exploration will delve into the role of fintech in enhancing financial inclusion, and the challenges that come with it. Bridging the Banking Gap with Fintech The advent of fintech has made financial services available to the masses, including in areas where traditional banking has failed to penetrate. Innovations such as mobile wallets and digital payment platforms have become crucial for those who were previously unbanked. They offer a new realm of financial opportunities that didn’t exist before, enabling economic participation from all corners of society. Ensuring Security and Privacy in the Fintech Era With the increasing digitization of financial services, security and privacy have never been more important. Fintech companies are at the forefront of implementing security measures to protect user data. Privacy-by-design is an essential approach to this, ensuring that products are Secure from the outset, thus avoiding potential vulnerabilities and maintaining trust with users. Balancing Innovation with Regulation As the fintech industry grows rapidly, regulations struggle to keep up. The balance between innovation and regulation is delicate, as it’s essential to encourage progress while protecting consumers from potential risks. Regulatory sandboxes have emerged as a solution, allowing fintechs to test new products under regulatory supervision but with more freedom than traditional settings allow. Tackling the Digital Divide and Algorithmic Bias Despite its potential, fintech must overcome the digital divide to achieve true inclusivity. It’s important to ensure that everyone has the necessary access to technology and understands how to use it. Additionally, it’s critical to address the risk of bias in financial algorithms, which could perpetuate inequality if not carefully managed. The Imperative of Cybersecurity Cybersecurity is crucial in the age of digital finance. Fintech companies must prioritize protecting their systems and customers from cyber threats. This involves a multi-faceted strategy that includes strong defenses, as well as education and awareness for both users and the workforce. Cybersecurity is not just about prevention; it’s about building a resilient infrastructure for a more secure financial future.
<urn:uuid:8bda7c47-25ab-4b2e-aaff-532f74438bf4>
CC-MAIN-2024-38
https://bankingcurated.com/trends-and-future/is-fintech-the-key-to-ethical-financial-inclusion/
2024-09-10T12:06:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00230.warc.gz
en
0.955131
492
2.53125
3