In this Cisco CCNA tutorial, you'll learn about the bottom four OSI layers and their definitions. Scroll down for the video and also the text tutorial.

Cisco The Lower OSI Layers Video Tutorial

While network engineers are not typically very concerned with the top three layers, we are very concerned with the bottom four layers. This is really bread and butter stuff for us.

Layer 4 – The Transport Layer

We'll start with Layer 4, the Transport Layer. The main characteristics of this layer are whether TCP or UDP transport is going to be used and the port number. If we want the communication between the two hosts to be reliable, then we'll use TCP. If speed is more important than reliability, as with voice or video traffic, then we'll use UDP instead. The other main characteristic of this layer is the port number, for example port 80 for HTTP web traffic and port 25 for SMTP email.

Layer 4, the Transport Layer, defines services to segment, transfer, and reassemble the data for individual communications between the end devices. It breaks down large files into smaller segments that are less likely to incur transmission problems.

Layer 3 – The Network Layer

The next layer is Layer 3, the Network Layer. The most important information at the Network Layer is the source and destination IP address, although a lot of other information is also carried in the Layer 3 header. Routers are Layer 3 devices: they operate at Layer 3 of the OSI model. The Network Layer provides connectivity and path selection between two host systems that may be located on geographically separated networks. It is the layer that manages the connectivity of hosts by providing logical addressing, and IP addressing is our logical addressing.

Layer 2 – The Data-Link Layer

The next layer is Layer 2, the Data-Link Layer. The most important information here is the source and destination Layer 2 address, for example the source and destination MAC address if Ethernet is the Layer 2 technology. Just as with Layers 3 and 4, other pieces of information are also included in the Layer 2 header. Different Layer 2 technologies use different formats for their addressing; for example, legacy Frame Relay uses DLCI numbers for its addressing. With Ethernet, which is what is always used in our Local Area Networks, it's the MAC address that is used here. Switches operate at Layer 2: they are Layer 2 aware devices. The Data-Link Layer defines how data is formatted for transmission and how access to the physical media is controlled. It also typically includes error detection and correction to ensure reliable delivery of the data.

Layer 1 – The Physical Layer

Finally, we have Layer 1, the Physical Layer. This concerns the literal physical components of the network, for example the actual physical cables being used. The Physical Layer enables bit transmission, the 1s and 0s, between end devices. It defines the specifications needed for activating, maintaining, and deactivating the physical link between end devices, for example voltage levels, physical data rates, maximum transmission distances, and physical connectors.
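To make the Layer 4 distinction concrete, here is a minimal Python sketch (not part of the original tutorial) showing how an application chooses TCP or UDP and a port number when it opens a socket. The host names and the UDP port value are illustrative only.

```python
import socket

# TCP: connection-oriented, reliable delivery (e.g. HTTP on port 80).
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_sock.connect(("example.com", 80))            # three-way handshake happens here
tcp_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp_sock.recv(1024).decode(errors="replace"))
tcp_sock.close()

# UDP: connectionless, no delivery guarantee (used where speed matters,
# e.g. voice and video). Port 5005 is an arbitrary example value.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"hello", ("192.0.2.10", 5005))  # fire-and-forget datagram
udp_sock.close()
```

The transport protocol (SOCK_STREAM versus SOCK_DGRAM) and the port number are exactly the Layer 4 characteristics described above.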
- The TCP/IP and OSI Networking Models: https://www.ciscopress.com/articles/article.asp?p=1757634&seqNum=2
- Introduction to the OSI Model: https://networklessons.com/cisco/ccna-routing-switching-icnd1-100-105/introduction-to-the-osi-model
- Cisco Open Systems Interconnection OSI Model Overview: https://www.flackbox.com/cisco-open-systems-interconnection-osi-model-overview
- Cisco The Upper OSI Layers: https://www.flackbox.com/cisco-the-upper-osi-layers

Text by Libby Teofilo, Technical Writer at www.flackbox.com. With a mission to spread network awareness through writing, Libby consistently immerses herself in the unrelenting process of knowledge acquisition and dissemination. If not engrossed in technology, you might see her with a book in one hand and a coffee in the other.
How to Secure Company Financial Data

What Is Financial Data Security?

Financial data security encompasses the technology, policies, processes, and physical safeguards that are put in place to protect sensitive financial data. It includes protections for hardware, software, networks, storage devices, and user devices, as well as authentication, access, and administrative controls. Financial data security aims to protect information throughout its lifecycle from unauthorized access, corruption, and theft.

Financial data security protects information related to financial accounts and transactions, such as customer account numbers, credit card numbers, transaction data, sales data, purchase history, credit information, and credit rating data. It also covers a company's assets and liabilities, such as real estate, equipment, furniture, computers, intellectual property, patents, and debt owed. In addition, financial data security helps ensure customer trust and compliance with legal requirements.

What Are the Three Types of Data Security?

Financial data security protections fall under the three core elements of general data security: confidentiality, integrity, and availability. These three elements are referred to as the CIA Triad and serve as a proven model for financial data security. The functions of each element of the CIA Triad are as follows.

- Confidentiality: Data can only be accessed by authorized users based on their assigned privileges and is protected against accidental or malicious unauthorized access.
- Integrity: All data that is stored or transferred must remain reliable, accurate, and not subject to unauthorized changes.
- Availability: Data needs to be consistently, readily, and securely accessible to authorized users.

There are many options to meet the requirements for financial data security. Following are several of the most widely used forms of financial data security.

- Data erasure: This is the process of overwriting and deleting data so it cannot be accessed. Data erasure is permanent and irreversible.
- Data masking: Data masking can hide information by obscuring and replacing specific letters or numbers. Once the information has been through a data masking process, it can only be decoded or decrypted by authorized users.
- Data resiliency: Financial data security requires that information be recoverable in the event of theft, disaster, or accidental damage or deletion.
- Encryption: Data encryption provides financial data security by using algorithms to scramble data and render it unreadable. Only authorized and authenticated users can access the data using decryption keys.
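As a simple illustration of two of these controls, the Python sketch below (not from the original article) masks a card number for on-screen display and encrypts the full value at rest using the widely used third-party cryptography package. The card number and key handling are purely for demonstration; in practice keys would live in a key-management system.

```python
from cryptography.fernet import Fernet  # pip install cryptography


def mask_card_number(card_number: str) -> str:
    """Replace all but the last four digits, e.g. for on-screen display."""
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]


# Encryption for data at rest: only holders of the key can recover the value.
key = Fernet.generate_key()            # in practice this would come from a key vault
fernet = Fernet(key)

card = "4111 1111 1111 1111"           # illustrative test number
print(mask_card_number(card))          # ************1111
token = fernet.encrypt(card.encode())  # ciphertext safe to store
print(fernet.decrypt(token).decode())  # original value, recoverable only with the key
```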
How Do You Secure Financial Data?

Following are financial data security solutions that align with best practices for the protection of sensitive banking and financial information. The type or blend of solutions used for financial data security will differ according to the size and type of organization.

- Anomaly detection
- Anti-malware software
- Application security
- Data backups
- Data governance
- Data loss protection (DLP)
- Data management
- Data security and privacy frameworks
- Differentiation between personal data and sensitive personal data
- Email security
- Encryption for data at rest and in motion
- Endpoint threat detection and response (ETDR)
- Identity and access management (IAM)
- Incident response plans (IRP)
- Intrusion prevention systems (IPS)
- Network segmentation
- Periodic risk assessments
- Role-based access controls (RBAC)
- Security awareness training
- Security information and event management (SIEM)
- Strong passwords
- Third-party risk management
- User activity monitoring
- Virtual private networks (VPN)
- Web security
- Wireless security

Are There Financial Data Security Laws?

Yes, there are a number of laws that dictate financial data security requirements. Failure to comply with financial data security rules can result in stiff fines and other stringent penalties. The following are several of the key laws that regulate financial data security.

Financial Industry Regulatory Authority (FINRA)

FINRA (the Financial Industry Regulatory Authority) has a number of rules for the financial industry detailing requirements for SEC members related to the information they need to collect, maintain, and protect. FINRA regulations ensure that regulators and investors have fast and secure access to critical information and protect investors' and stakeholders' information.

Gramm-Leach-Bliley Act (GLBA)

The Gramm-Leach-Bliley Act, referred to as GLBA, requires a financial institution to disclose the policies and practices it has in place to protect the confidentiality, security, and integrity of nonpublic personal information about consumers, even those who are not customers.

Payment Card Industry Data Security Standard (PCI-DSS)

The Payment Card Industry Data Security Standard (PCI-DSS) sets forth financial data security standards designed to ensure that any organization that accepts, processes, stores, or transmits credit card information maintains a secure environment. There are 12 requirements to maintain PCI-DSS compliance:

1. Assign a unique ID to each person with computer access.
2. Develop and maintain secure systems and applications.
3. Do not use vendor-supplied defaults for system passwords and other security parameters.
4. Encrypt transmission of cardholder data across open, public networks.
5. Install and maintain a firewall configuration to protect cardholder data.
6. Maintain a policy that addresses information security for all personnel.
7. Protect stored cardholder data.
8. Regularly test security systems and processes.
9. Restrict access to cardholder data by business need to know.
10. Restrict physical access to cardholder data.
11. Track and monitor all access to network resources and cardholder data.
12. Use and regularly update anti-virus software or programs.

Sarbanes-Oxley Act (SOX)

The Sarbanes-Oxley Act, referred to as SOX, is a law aimed at improving the quality and reliability of reporting by companies participating in the public capital market. Part of SOX regulates data storage, both on-premises and with cloud providers. SOX also mandates that data must be encrypted with a 256-bit AES key, regardless of content.

Securities and Exchange Commission (SEC) Rules

The Securities and Exchange Commission, or SEC, has a number of rules governing financial data security.
Among them is Rule 30 of SEC Regulation S-P, which requires companies to maintain written policies and procedures that detail the administrative, technical, and physical safeguards that should be in place to protect customer data.

Another pair of SEC regulations that govern financial data security are Rules 31a-2 and 204-2. These rules set forth the criteria by which funds and advisers can maintain records electronically. In addition, they must establish and maintain procedures to:

- Ensure that electronic copies of non-electronic originals are complete, true, and legible.
- Limit access to the records to authorized personnel, the Commission, and, for funds, fund directors.
- Safeguard the records from loss, alteration, or destruction.

Financial Data Security Critical as Threats Continue to Expand

Many financial data security solutions have proven track records addressing threats that target this industry. However, the rise of ransomware and other sophisticated threats pushes these solutions to their limits. As a result, financial data security can only be accomplished by continually integrating a rich mesh of tools that provide proactive detection and mitigation as well as ensure preparedness for swift and complete recovery in the event of a successful attack.

Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 16,000 customers, with millions of users worldwide.

Last Updated: 2nd October, 2023
The rise of microgrids, while not inevitable, is a natural next step in the progression of smart grid technology. As automation, data collection and transport, and monitoring capabilities have grown into standard smart grid technologies, companies, military bases, small towns and even cities are tapping into the possibilities for self-sustaining microgrids.

What are Microgrids?

Microgrids are, essentially, self-contained local energy grids. In most instances, they are attached to the greater grid (the macrogrid), but can disconnect if necessary for autonomous operation. In other scenarios, they are local grids powered by alternative energy sources. For instance, according to a 2014 article from Navigant Research, Alaska leads the world in microgrid deployment due to the small communities that rely almost exclusively on local energy – in some cases, 100 percent renewable energy.

The viability of these kinds of energy distribution networks was not always apparent. For years, the United States has relied on a connected grid system that could be prone to huge shutdowns or security risks. As the technology has improved, microgrids that can disconnect from the macrogrid and function autonomously have opened huge possibilities for smart cities, the Industrial Internet of Things (IIoT), and more.

Smart Cities Powered by Microgrids

Smart cities rely strongly on the backbone of wireless technology. Imagine a scenario in which a city's electricity grid went down, killing the wireless networks and effectively bringing any connected technology to a grinding halt. It could mean the shutdown of public transit, water and wastewater treatment facilities, electricity, vehicles, stoplights – the list goes on. Any IoT or IIoT systems would shut down.

However, in a smart city built on the microgrid concept, if part of the macrogrid went down, microgrids could disconnect and allow normal functionality without service shutdowns. If hackers or other security concerns hit the macrogrid, microgrids can disconnect and protect the system from further threat. And, in many cases, microgrid technology is driving the rise of alternative energy and energy independence.

Renewable Energy and Microgrids

One of the main problems facing renewable energy has always been storage. How can renewable energy sources create excess energy and store that energy for future use in case of macrogrid failure? What cities and small towns are finding out is that by building a renewable energy system connected to a microgrid, they can effectively develop net-zero communities that don't have to rely on energy storage in the instance of macrogrid failure.

As these technologies have matured and been implemented in different use cases, the possibility for more intricate and complex systems is apparent. As the IIoT continues to adopt microgrid technologies and practices, industry practices will mature, creating greater efficiency both operationally and with regard to energy usage and distribution. The future of smart cities and a stronger connected infrastructure could be poised to accelerate along with the growth of microgrid applications.
Recently there have been posters in London Underground stations warning users of Oyster Cards - the Transport for London (TfL) NFC-enabled electronic travel wallet - that there is a risk of "card clash". These posters warn that they need to keep other contactless NFC payment cards separate from their Oyster Card when they "touch in" on a bus, to avoid the risk that the wrong card will be charged.

TfL will be rolling out the ability to use NFC-enabled payment cards on the Tube (London Underground), Overground and DLR later in 2014, and this could lead to further problems. The charges on the London Underground are based on the journey made, and the system depends upon the same card "touched in" on a reader at the origin of the journey being "touched out" at the destination. If a different card is used at each end of the journey, both cards are charged the maximum fare.

NFC technology is an important enabling technology for the Internet of Things (IoT), and the vision for the IoT makes bold promises of benefits for individuals and businesses. These benefits include making life easier for the individual while allowing businesses to be more efficient. Being charged twice for the same journey doesn't seem to square with these claims - so what is happening here?

Near Field Communication (NFC) is a set of standards for devices, including smartphones and contactless cards, that allow a radio frequency connection to be established quickly and simply by bringing two devices close together (within 10cm to 20cm). NFC standards cover communications protocols and data exchange formats, and are based on existing radio-frequency identification (RFID) standards.

An important aspect of these protocols is singulation. When different NFC devices are in the RF field of a reader, it needs a way to discriminate between them in order to establish single interactions with one or each of them. This is achieved through the singulation protocol, which is usually run at the time the reader starts a new communication session. During this initial phase each device identifies itself to the reader, communicating an identifier that will then be used by the reader to contact it individually.

At the NFC device protocol level the ability to distinguish between cards is taken care of, so it looks like the problem lies at the application or system level. The whole system relies on the same card being used on entry and on exit. The technical protection provided by the NFC protocols cannot protect the system if the application does not take account of the possibility of more than one card being detected at either end. In view of the number of passengers entering and leaving the Tube at peak times, it is understandable that throughput may need to take priority over flexibility. However, getting to grips with details like this will be essential to realize the potential benefits of the Internet of Things.
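To illustrate the singulation idea described above, here is a small Python sketch (not from the original post, and deliberately simplified; the real ISO/IEC 14443 anti-collision procedure is more involved). It shows a reader enumerating the cards in its field and then addressing exactly one of them; the UIDs and selection rule are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Card:
    uid: str    # identifier the card announces during singulation
    label: str  # e.g. "Oyster" or "contactless bank card"


class Reader:
    def singulate(self, cards_in_field):
        """Collect the UID each card announces, then pick one to talk to."""
        announced = {card.uid: card for card in cards_in_field}
        if len(announced) > 1:
            print(f"Card clash risk: {len(announced)} cards in field")
        # Simplified selection rule; a real reader follows the anti-collision
        # protocol, and a ticketing application would need its own policy too.
        chosen_uid = sorted(announced)[0]
        return announced[chosen_uid]


reader = Reader()
selected = reader.singulate([Card("04A1B2", "Oyster"), Card("04FF01", "bank card")])
print(f"Charging card {selected.uid} ({selected.label})")
```

The protocol guarantees the reader can address a single card, but it is the application's job to decide which card should be charged, and that is exactly the gap that produces card clash.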
WHAT IS DMR?

DMR stands for Digital Mobile Radio and is an international standard that has been defined for two-way radios. The DMR standard allows equipment developed by different manufacturers to operate together on the same network for all the functions defined within the standard.

The aim of the DMR standard was to create a digital radio system with low complexity and low cost that still allows equipment from different manufacturers to work together, allowing users to shop around rather than being locked into a proprietary system which would be costly to replace and maintain.

The European Telecommunications Standards Institute (ETSI) is responsible for the creation and maintenance of the DMR standard. The standard was first ratified in 2005 and has subsequently been updated and revised several times, most recently in November 2018.

Is all DMR equipment compatible?

Yes and no. Basic functionality (such as voice transmission) will work between systems if your chosen manufacturer has performed interoperability testing, but you might be using features outside those defined by the DMR standard. Before deciding what equipment to purchase, you should check what features you are currently using and find out if any are unique to your current equipment.

What frequency does DMR equipment work on?

Digital Mobile Radio works between the frequencies of 30 MHz (megahertz) and 1000 MHz, also known as 1 GHz (gigahertz). This range of DMR frequencies is divided into two categories:

- Very High Frequency (VHF) - Range between 30 MHz and 300 MHz
- Ultra High Frequency (UHF) - Range between 300 MHz and 1 GHz

From these ranges, most DMR equipment falls into the 136 - 174 MHz and 403 - 527 MHz parts of the spectrum. Each country has its own organisation tasked with allocating licences, but some DMR frequencies are allocated as licence-free (for DMR Tier I) while others require a licence to operate. A simple illustration of these bands follows below.

Do I need a licence for DMR equipment?

The short answer is "it depends". If you have a small number of users and basic communication requirements, you may be happy to use DMR Tier I radios, which do not require a licence and are simple to use. It should be noted that Tier I equipment has a shorter range and is susceptible to interference from other users. If you require a more complex system you will need to apply for a licence from the frequency licensing body in your country. DMR licensing is a complex topic, but with Motorola Solutions' extensive partner network, we can pair you with a local expert to help you find the solution that is right for you.
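The band split quoted above can be captured in a few lines of code. This is an illustrative Python helper, not part of the original FAQ; the band edges are simply the figures quoted in the text.

```python
def classify_dmr_frequency(freq_mhz: float) -> str:
    """Classify a frequency (in MHz) against the ranges quoted above."""
    if not 30 <= freq_mhz <= 1000:
        return "outside the 30 MHz - 1 GHz range used by DMR"
    band = "VHF" if freq_mhz < 300 else "UHF"
    common = (136 <= freq_mhz <= 174) or (403 <= freq_mhz <= 527)
    note = "within a common DMR sub-band" if common else "DMR possible but less common"
    return f"{band}, {note}"


print(classify_dmr_frequency(160.5))   # VHF, within a common DMR sub-band
print(classify_dmr_frequency(446.1))   # UHF, within a common DMR sub-band
```

Whether a given frequency may actually be used still depends on the licensing rules of the country in question.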
How does DMR compare to TETRA?

DMR and TETRA are both standards created by the European Telecommunications Standards Institute (ETSI) to address different types of user. DMR is targeted at the commercial markets while TETRA is targeted at public safety. This is demonstrated best by the difference in interoperability testing for the two systems. DMR system interoperability is tested by manufacturers working together on a one-to-one basis. For comparison, TETRA products must be tested by an independent third party featuring multiple vendors. DMR tests include only 6 mandatory interoperability features, whereas TETRA testing requires 49 mandatory features to be tested. The higher number of tests for TETRA equipment allows public safety bodies such as the police and fire services to make separate purchase decisions without worrying about compatibility between their systems during an emergency.

However, the more stringent TETRA standard does not allow manufacturers to be as flexible in the way they implement new and exciting features as the DMR standard does. Motorola Solutions can help you to pick the right radio technology for your requirements. Contact us today using the button below.

What is the range of DMR?

The answer to this question depends on the equipment you are using and the infrastructure you have installed around it. As an example, the International Space Station, orbiting at an altitude of 408 km, uses DMR to communicate with the earth, but there are very few obstructions between the station and the antennas on the ground, so the signal is easily received. In comparison, a DMR Tier I radio operating inside a building may only work for around 100 metres.

DMR radio systems can have any range you wish them to have if the correct infrastructure is installed. DMR repeaters can extend signals over a large area, and pockets of connectivity can be created by distributing data between repeaters through other means such as microwave links or the internet. Motorola Solutions' extensive partner network means we can pair you with a local expert to help you find the right system solution for your unique requirements. Contact us today to start your journey.

What is a DMR Repeater?

DMR radios are able to communicate with each other directly without a centralised system, but this is not always an ideal situation. Signals between radios connecting directly to each other can be hampered by obstructions in the line-of-sight between them such as trees, buildings, and hills. A DMR repeater added to the system allows radios to send their communications via a central point which repeats the message to the rest of the system. By installing a repeater high up (often on top of a building) the calls to the repeater are less affected by the obstructions. It also allows radios located far away from the repeater in opposite directions to communicate with each other, effectively increasing the range of the system.

Repeater stations can be connected together either by retransmitting received signals (parroting) or by sending received signals to other repeaters on the system via other methods (internet, unidirectional transmitter, etc.). With these methods, the range of the system is only limited by the amount of infrastructure you install.

Explore LMR from Motorola Solutions

- APX and ASTRO P25: With mission-critical communications there's no room for error; the APX radio series creates intelligent action and ensures you are always connected to your team to ensure the best outcomes.
- MOTOTRBO: Create a safer and more efficient workflow with MOTOTRBO, enhancing productivity and keeping your team connected in a ruggedized manner.
- Business Lite Radios: To ensure the best customer service, keep your staff connected so they remain focused on putting the customer first through Business Lite Radios.
- Talkabout: For families and casual users, Talkabout radios keep you connected through hands-free communication so you can enjoy conquering moments.
- LMR Services: Maintain and restore your system through improved network responses to reduce risks and manage the complexity of your networks through our LMR services.
Managing a technology infrastructure can be challenging, especially without an in-house information technology (IT) team. Even with an IT department, you can easily miss threats and vulnerabilities. IT cybersecurity testing methods offer the extra level of protection companies need to keep data secure.

Cybercrime attacks are multifaceted, with different hacks, tools and programs being used. Ninety-three percent of cybercriminals can penetrate company networks, breaching the network perimeter and gaining access to local resources across a mix of industries. On average, it can take a hacker two days to get in. Research shows that 43% of cyberattacks are aimed at small businesses, but only 14% can defend themselves. An alarming 84% of attacks are distributed by email as more and more people access it through their mobile devices.

Working with a third-party cybersecurity agency is an accessible way to employ cybersecurity testing. Below, you can learn more about cybersecurity testing and how an IT company can perform security testing to safeguard your company's information.

What Is Cybersecurity Testing?

The Federal Bureau of Investigation (FBI) researches cybercrime every year. In 2020, compromises to business email alone accounted for more than $1.8 billion lost. That's not to mention the numerous other business aspects that cyber threats can impact. 2022 had the second-highest number of data compromises in a single year, with at least 42 million people impacted, according to the Identity Theft Resource Center (ITRC). The potential total loss increased to $10.2 billion in 2022 overall. Of that, $2.7 billion was business email compromise.

With the many risks of security vulnerabilities, cybersecurity tests are valuable to businesses of all sizes. This testing assesses and measures security vulnerabilities in a computer system to determine how effective a strategy is at preventing an attack. IT professionals specializing in network security manage cybersecurity tests and gauge their results.

The cybersecurity testing approach incorporates various methodologies to find weaknesses in a security strategy. These assessment strategies have developed over time as technology became more advanced and threats became more common. For example, antivirus software became one of the first methods of defense against malicious attacks, and we still count on it and similar strategies today to protect our computers from malware and other unwanted security threats.

While it's wise to use basic antivirus software to detect suspicious behaviors on your computer, it's not an airtight method. Even though basic subscription services can put a stop to a range of threats, they won't stop every single one. There's far more to cybersecurity than putting a stop to viruses.

As an example, phishing is a common cause of network security breaches. This type of threat comes in the form of an email or other digital message that tricks users into clicking an included link. These attacks usually involve a hacker posing as a reputable site to get link interaction. Another common threat is ransomware, where hackers gain access to sensitive data or complete databases and hold it for ransom. These cybersecurity threats are present at all times, and a few simple software programs won't stop them, which is why security testing is done. With cybersecurity testing, professionals can detect the gaps before they turn into costly vulnerabilities.

What Are the Different Types of Cybersecurity Testing?
The need for cybersecurity testing is clear, and IT professionals use a range of methods to address potential threats and strengthen a company's infrastructure. Consider that four out of 10 businesses experienced fraud between 2020 and 2022. Hackers and organized cybercrime groups were the biggest threat, with hackers responsible for 31% of incidents and organized crime for 28%.

Understanding the different testing methods can help you create an organized strategy for your cybersecurity approach. The best way to use cybersecurity testing methods is to create a schedule for various tests to keep your security systems robust and up to date. Explore the different cybersecurity methods and testing processes to find out which your company may benefit from most.

1. Cybersecurity Audit

A cybersecurity audit is designed to be a comprehensive overview of your network, looking for vulnerabilities as it assesses whether your system is compliant with relevant regulations. These audits usually give companies a proactive approach to the security design process. Once they know what gaps they need to fill, they can design a security setup with more intention. Independent IT professionals usually conduct audits to eliminate any conflict of interest. Sometimes they're handled internally, but that is rare. A range of regulated procedures is used in an audit to ensure IT professionals assess every area of a security system.

A complete audit process covers substantial ground, and it usually starts with a review of a company's data security policies. During the review, professionals consider how policies support the confidentiality, availability and integrity of a company's data. Creating a wide view of the security environment gives IT professionals a sense of what needs the most attention. Other processes in an audit may include compiling a list of relevant security regulations for a company and building a network map to see how every system connects. IT auditing professionals will work closely with cybersecurity personnel in the company to ensure all responsibilities are clear within the enterprise.

Many factors can affect how often a business opts for a cybersecurity audit, but doing so annually is generally recommended. As a rule, companies should also employ audits when they've altered their network setups, introduced new software, expanded or made any other significant changes to their technology ecosystem. Note that industries with higher compliance requirements, such as HIPAA, may choose to do more audits throughout the year to align with relevant standards and regulations. Additionally, budget constraints may determine how often a business chooses to conduct a security audit.

2. Penetration Test

Often called pen testing, penetration testing is a form of ethical hacking. During a pen test, IT professionals intentionally launch a cyberattack on a system to access or exploit applications, websites and networks. The main objective of a pen test is to identify areas of weakness in a security system. The specific goals of a pen test depend on the area professionals hack. In the case of networks, the aim is to calibrate firewall rules, close unused ports and eliminate any loopholes. For websites, professionals want to identify and report notable vulnerabilities like cross-site scripting and buffer overflows.
There are several methods to test network security with penetration testing, and the type that IT workers use will depend on an organization's goals and security concerns:

- Internal tests: These pen tests are performed within a company's environment and simulate events where a hacker penetrates the network perimeter or an authorized user abuses access to private data.
- External tests: IT professionals perform external tests by hacking a network perimeter through an outside source, like the Internet.
- Blind tests: In a blind test, testers simulate the actions of a real hacker. IT professionals go into the process with little to no information about a company's security infrastructure, and they attempt to access the network perimeter. During the test, they rely on third-party online information to access the network, which can reveal how much private information is readily available to the public.
- Double-blind tests: This test is similar to a blind test, but members of the company, like IT personnel, are unaware of the penetration test. This method tests threat identification processes and associated procedures to determine how well they can hold up against a hacker.
- Targeted tests: Unlike blind tests, targeted tests require complete transparency. IT teams are involved in the process to address specific concerns about a network. These tests take less time to execute, but they may not provide a full picture of a company's cybersecurity.

Typically, businesses should perform penetration tests annually or after any major changes to network infrastructure.

3. Vulnerability Scan

A vulnerability scan is the process of identifying security weaknesses in systems and software with the goal of protecting an organization from breaches. This scan is often confused with penetration testing because they have similar functions. However, they're different. While pen testing involves simulated hacking that can locate the root cause of gaps, vulnerability scanning is an automated test that simply identifies gaps. IT professionals use designated software to identify vulnerabilities. These scanners create an inventory of all systems and run it against a database of known vulnerabilities to find potential matches. At the end of the scan, known vulnerabilities are highlighted for the company to handle. A simplified sketch of this matching step follows below.

There are several vulnerabilities a scan might identify within a network. In 2020, the Cybersecurity and Infrastructure Security Agency (CISA) identified the most commonly encountered vulnerabilities. The most common vulnerability they found was remote code execution (RCE). This vulnerability involves a hacker running code of any kind with system-level privileges on networks with the required weaknesses. In 2022, research showed that 83% of organizations had experienced more than one data breach. The biggest culprit was ransomware, which accounted for 15%. While the global average cost of a data breach reached $4.35 million, the number is more than double that in the US.

Other vulnerabilities include:

- Arbitrary code execution: An attacker can run commands or code on a vulnerable device.
- Arbitrary file reading: An attacker can read or write any content in a file system.
- Path traversal: A vulnerability that gives attackers access to unauthorized files.
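Here is a minimal Python sketch of the inventory-versus-known-vulnerabilities matching described above. It is purely illustrative, not a real scanner: the package names, versions and advisory IDs are made up, and a production tool would pull advisories from a live feed such as the NVD.

```python
# Hypothetical inventory collected from a host: package name -> installed version
inventory = {
    "openssl": "1.1.1k",
    "nginx": "1.18.0",
    "example-app": "2.3.0",
}

# Hypothetical advisory database: package -> list of (vulnerable_version, advisory id)
known_vulnerabilities = {
    "openssl": [("1.1.1k", "EXAMPLE-2021-0001")],
    "nginx": [("1.16.1", "EXAMPLE-2019-0002")],
}


def scan(inventory, advisories):
    """Flag every installed package whose exact version appears in an advisory."""
    findings = []
    for package, version in inventory.items():
        for vulnerable_version, advisory_id in advisories.get(package, []):
            if version == vulnerable_version:
                findings.append((package, version, advisory_id))
    return findings


for package, version, advisory_id in scan(inventory, known_vulnerabilities):
    print(f"{package} {version} matches advisory {advisory_id}")
```

Real scanners also compare version ranges rather than exact strings, which is one reason their results still need human review.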
4. Security Scan

A security or configuration scan searches for misconfigurations in a system. A misconfiguration is an incorrect or suboptimal design of a system or system component that can lead to vulnerabilities. A misconfiguration occurs when security settings aren't defined or the default values aren't maintained. Unfortunately, hackers know misconfigurations are easy to detect. Typically, exploited misconfigurations can lead to high-volume data leakage that can cause harm to businesses. Common misconfigurations include:

- Default account settings
- Unencrypted files
- Unpatched systems
- Outdated web apps
- Insufficient firewall rules

These incorrect designs can be classified as vulnerabilities and may be identified during a vulnerability scan. However, security scans operate with the intention of looking only for misconfigurations, making them a more pointed cybersecurity test.

As more applications shift to the cloud, misconfigurations are easy to overlook. Many misconfigurations come from the cloud and hybrid environments brought about by an increase in remote workforces. Research conducted by Gartner claims that 99% of cloud misconfigurations through 2025 will be the customer's fault. That said, companies have complete oversight of network configurations; it's a matter of paying attention to them. Among all other IT demands, it can be easy to miss them, even though they're easy to address. This is the reason security scans are essential to companies' cybersecurity frameworks.

Considering the ease of overlooking misconfigurations, performing regular security scans can give your team the foresight it needs to secure its network. While annual security scans are a smart move, you may choose to conduct them more frequently. Performing them a few times a year can help your company keep up with possible vulnerabilities.

5. Risk Assessment

A cybersecurity risk assessment is a process that analyzes the various security controls in an organization and the possible threats that can occur within them. These assessments are comprehensive processes that evaluate existing risks and create strategies for mitigating them. The information assets that are vulnerable to risks include hardware, software, intellectual property, customer data and more. There are four essential steps to a risk assessment (a small scoring sketch follows this section):

- Identify: The first step is about identifying all essential assets in your company's technology infrastructure. IT professionals will determine all sensitive data associated with those assets and create a profile of risks for each one.
- Assess: IT team members will evaluate risk levels and determine how many resources a company will need to dedicate to risk mitigation. This step aims to find the relationship between vulnerabilities, assets and mitigation.
- Mitigate: The risk assessment team will create a plan for risk mitigation and enforce security controls for all identified risks.
- Prevent: A company's personnel will enforce ongoing mitigation by implementing designated tools and processes to minimize threats as they arise.

Risk assessment teams will roll out mitigation and prevention according to priority. Some risks pose more potential harm than others, making their mitigation critical. As a general rule, companies should conduct risk assessments at least once yearly. These assessments should also occur when your business changes its technology infrastructure, which may include cloud migration, new applications or large expansions.
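As a rough illustration of the assess step, the sketch below scores hypothetical risks by likelihood and impact so they can be prioritized. The assets, ratings and threshold are invented for the example; real assessments use whatever scale the organization's framework defines.

```python
# Hypothetical risk register: (asset, threat, likelihood 1-5, impact 1-5)
risks = [
    ("customer database", "ransomware", 3, 5),
    ("public website", "defacement", 2, 3),
    ("laptop fleet", "device theft", 4, 2),
]


def prioritize(risks, threshold=10):
    """Score each risk as likelihood x impact and sort highest first."""
    scored = [(likelihood * impact, asset, threat)
              for asset, threat, likelihood, impact in risks]
    scored.sort(reverse=True)
    return [(score, asset, threat, "mitigate now" if score >= threshold else "monitor")
            for score, asset, threat in scored]


for score, asset, threat, action in prioritize(risks):
    print(f"{score:>2}  {asset} / {threat}: {action}")
```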
6. Posture Assessment

A posture assessment is the best initial test among the security testing methods because it can guide your approach to security. This assessment refers to your cybersecurity posture: the strength of your protocols and controls at preventing cyber threats. IT professionals perform posture assessments through a range of processes that look at internal and external factors. Unlike audits or pen tests, posture assessments can provide definite guidance for improving cybersecurity maturity. This guidance often seeks to maximize the return on investment (ROI) of security protocols. These assessments can use a combination of methods like ethical hacking, security scanning and risk assessments to define your security posture and to:

- Identify and address the value of company data
- Define threat exposure and risks
- Evaluate whether appropriate security methods are in place
- Recommend a concrete plan for strengthening defenses

Conducting posture assessments can be a wise move in a variety of circumstances: you can conduct them to optimize ROI, get started with a new strategy, prepare for organizational changes or address security gaps. While you may not need to perform them regularly, they're an excellent option for companies of all sizes.

Partnering with a reliable third-party IT agency is the key to effective security testing tools and methods. At Ascendant, we work with you to bring cybersecurity tests to your technology infrastructure. Our services cover the whole circle, from managed IT to security and cloud computing. We can support you in your IT goals and focus on customizing your IT management strategy. With our cybersecurity consultations, we can employ the appropriate solutions for your business. Let us act as an extension of your existing IT team or support you completely. To learn more about our capabilities, get in touch with one of our professionals today; we look forward to partnering with you!
Definition: Egress Filtering

Egress filtering is a network security measure used to monitor and control the outbound traffic from a network. Its primary function is to ensure that only authorized data leaves the network while blocking any unauthorized or potentially harmful data from being transmitted out. This security measure helps in preventing data breaches, malware exfiltration, and unauthorized access to sensitive information.

Understanding Egress Filtering

Egress filtering plays a crucial role in modern cybersecurity strategies. As networks become more complex and interconnected, the need to monitor both inbound and outbound traffic becomes essential. While many organizations focus heavily on filtering and securing inbound traffic, they often overlook the importance of controlling what leaves their network. This is where egress filtering comes into play.

How Egress Filtering Works

Egress filtering works by analyzing outbound packets at the network's perimeter, typically at the router, firewall, or gateway. The filtering process involves inspecting the headers and content of each packet to determine whether it meets the organization's security policies. If a packet violates these policies, it is blocked from leaving the network. The process can be broken down into the following steps:

- Packet Inspection: Each outbound packet is inspected for its source IP address, destination IP address, protocol type, and other attributes.
- Policy Enforcement: The packet is compared against predefined security policies. These policies define what types of traffic are allowed to leave the network. For example, policies may permit only HTTP and HTTPS traffic while blocking other protocols.
- Decision Making: Based on the inspection and policy comparison, a decision is made to either allow the packet to exit the network or block it.
- Logging and Alerting: If a packet is blocked, the event is typically logged, and alerts may be generated to notify administrators of potential security incidents.

Importance of Egress Filtering

The importance of egress filtering cannot be overstated in the context of comprehensive network security. While inbound filtering is essential for preventing unauthorized access, egress filtering helps protect the integrity of the network by ensuring that sensitive data does not leave the network unauthorized.

Preventing Data Exfiltration

One of the primary benefits of egress filtering is the prevention of data exfiltration. In the event of a network breach, attackers often attempt to transfer stolen data out of the network. Egress filtering can block these attempts by identifying suspicious outbound traffic and stopping it before it reaches its destination.

Mitigating the Spread of Malware

Egress filtering is also effective in mitigating the spread of malware within a network. Once malware infiltrates a network, it often tries to communicate with external command-and-control servers or exfiltrate data. By monitoring and controlling outbound traffic, egress filtering can detect and block such communications, thereby containing the spread of malware.

Supporting Regulatory Compliance

Many regulatory frameworks and industry standards require organizations to implement robust security measures to protect sensitive data. Egress filtering helps organizations comply with these regulations by providing a mechanism to control and monitor outbound traffic, ensuring that sensitive information does not leave the network without authorization.
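To make the inspect, enforce, decide, and log flow described above concrete, here is a small Python sketch of an egress policy check. It is illustrative only and not from the original article: the allowed ports and the blocked destination are invented, and a real deployment would enforce this in a firewall or gateway rather than in application code.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Hypothetical egress policy: destination ports that may leave the network,
# plus an explicit blocklist of destination addresses.
ALLOWED_DEST_PORTS = {80, 443, 53}
BLOCKED_DEST_IPS = {"203.0.113.50"}  # documentation-range address used as an example


def egress_decision(src_ip: str, dst_ip: str, dst_port: int, protocol: str) -> bool:
    """Return True to allow the outbound packet, False to block it (and log why)."""
    if dst_ip in BLOCKED_DEST_IPS:
        logging.warning("blocked egress %s -> %s (blocklisted destination)", src_ip, dst_ip)
        return False
    if dst_port not in ALLOWED_DEST_PORTS:
        logging.warning("blocked egress %s -> %s:%d/%s (port not allowed)",
                        src_ip, dst_ip, dst_port, protocol)
        return False
    return True


print(egress_decision("10.0.0.5", "198.51.100.7", 443, "tcp"))   # True: HTTPS allowed
print(egress_decision("10.0.0.5", "198.51.100.7", 6667, "tcp"))  # False: unusual port
```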
Implementing Egress Filtering

Implementing egress filtering involves several key steps, including the establishment of security policies, configuration of network devices, and ongoing monitoring and maintenance.

Establishing Security Policies

The first step in implementing egress filtering is to establish clear and comprehensive security policies. These policies should define what types of outbound traffic are permitted and which are not. For example, an organization may allow outbound traffic only on specific ports or to specific IP addresses while blocking all other traffic.

Configuring Network Devices

Once policies are established, the next step is to configure network devices such as firewalls, routers, and gateways to enforce these policies. This typically involves setting up rules that inspect outbound traffic and either allow or block it based on the established policies.

Monitoring and Maintenance

Egress filtering is not a one-time setup but requires ongoing monitoring and maintenance. Network administrators must regularly review logs, update security policies, and adjust configurations as necessary to respond to new threats and changes in the network environment.

Challenges of Egress Filtering

While egress filtering is an essential component of network security, it comes with its own set of challenges.

Complexity and Performance Impact

Implementing egress filtering can be complex, especially in large or highly dynamic networks. The need to inspect every outbound packet can also introduce performance overhead, potentially slowing down network traffic. Organizations must carefully balance security needs with performance considerations.

False Positives

Egress filtering can sometimes lead to false positives, where legitimate traffic is incorrectly blocked. This can disrupt business operations and lead to frustration among users. To minimize false positives, organizations must fine-tune their filtering policies and ensure that legitimate traffic is properly accounted for.

Evasion Techniques

Attackers may employ evasion techniques to bypass egress filtering, such as encrypting outbound traffic or using non-standard protocols. To counter these techniques, organizations must stay up to date with the latest threats and continually refine their filtering mechanisms.

Best Practices for Egress Filtering

To maximize the effectiveness of egress filtering, organizations should follow these best practices:

- Regularly Update Security Policies: As threats evolve, so should your security policies. Regularly review and update your egress filtering rules to ensure they remain effective.
- Use Layered Security: Egress filtering should be part of a multi-layered security strategy that includes firewalls, intrusion detection/prevention systems, and endpoint protection.
- Monitor Logs and Alerts: Continuously monitor logs and alerts generated by egress filtering to identify and respond to potential security incidents in real time.
- Educate Employees: Ensure that employees understand the importance of egress filtering and adhere to security best practices, such as avoiding the use of unauthorized applications that may generate suspicious outbound traffic.
- Test and Audit Regularly: Regularly test and audit your egress filtering setup to identify any weaknesses or areas for improvement.

Frequently Asked Questions Related to Egress Filtering

What is egress filtering in network security?

Egress filtering is a network security measure that monitors and controls outbound traffic from a network.
It ensures that only authorized data can exit the network, thereby preventing unauthorized data transmission, data breaches, and malware exfiltration.

Why is egress filtering important?

Egress filtering is important because it helps prevent data exfiltration, mitigate the spread of malware, and ensure compliance with regulatory standards. By controlling outbound traffic, organizations can protect sensitive information from leaving the network unauthorized.

How does egress filtering work?

Egress filtering works by inspecting outbound packets at the network perimeter, such as at routers or firewalls. These packets are checked against predefined security policies. If a packet violates these policies, it is blocked from exiting the network.

What are the challenges of implementing egress filtering?

Challenges of egress filtering include the complexity of setup, potential performance impact, false positives where legitimate traffic is blocked, and evasion techniques used by attackers to bypass the filtering mechanisms.

What are some best practices for egress filtering?

Best practices for egress filtering include regularly updating security policies, using a multi-layered security approach, monitoring logs and alerts, educating employees on security practices, and conducting regular tests and audits of the egress filtering setup.
With constant pressure to increase efficiencies and reduce costs, many organisations are turning to robotic process automation (RPA) as a solution. According to Gartner, spending on RPA technology will reach $2.4 billion by 2022. This automation takes over mundane, repetitive tasks to free the human workforce to focus on other higher-level activities.

What is robotic process automation (RPA)?

RPA is technology that can automate business processes that are rules-based, structured and repetitive. A company can use RPA tools to communicate with other digital systems, capture data, retrieve information, process a transaction and more. Think of RPA "robots" as software that is programmed to complete specific business processes. Financial firms were the first to adopt RPA, but there are now companies in many industries, including healthcare, retail and manufacturing, who use RPA technology.

Robotic process automation reduces labour costs as well as preventing human error. In one example, a large consumer and commercial bank used 85 software bots to run 13 processes that handled 1.5 million requests in a year. This allowed the bank to add capacity that equalled 230 full-time employees at only about 30 per cent of the cost of recruiting more staff.

While RPA has produced impressive results for many companies, there are many examples where RPA implementations have failed. Often, companies underestimate the cost and time involved in installing RPA solutions, and they discover it's more complex than first thought to scale RPA technology successfully.

The next phase of RPA technology will combine artificial intelligence (AI) and machine learning to make it more powerful. Imagine RPA being the arms and legs of a bot and artificial intelligence being the brains. AI gets more intelligent over time by assessing the data that RPA can provide. Instead of just completing a programmed action, RPA, with the help of AI, would be able to determine what action to take based on the data.

10 Amazing Examples of Robotic Process Automation in Practice

RPA helps companies from numerous industries complete a wide variety of tasks. When I work with businesses helping them with their digital transformation and to improve performance, I see hundreds of great RPA examples. Here are just 10 of them:

1. Call centre operations

Many of the customer requests received by call centres can be supported with RPA technology; common customer queries and solutions can be provided to agents via a dashboard. When an issue gets escalated to human customer service agents, RPA can help consolidate all the information about a customer on a single screen, so the agents have all the information they need from multiple systems to provide exemplary service.

2. Data migration/entry and forms processing

Employees are often required to pull relevant information from legacy systems in order to have the data available for newer systems. RPA can support this manual process and complete it without introducing human error. When paper forms need to be transferred to digital, an RPA solution can read the forms and then get the data into the system, freeing up humans to do other things.

3. Claims administration

In healthcare and insurance, RPA is used to input and process claims. RPA tools can do this faster and with fewer errors than humans. The tech can also identify exceptions that don't comply, ultimately saving unnecessary payments.
4. Onboarding employees

RPA provides the perfect solution to ensure that every employee is onboarded according to the established process and that they receive all the information required to comply with company guidelines.

5. Help desk

As the first line of response to users' technical problems, RPA can help diminish the workload of the human help desk by taking care of straightforward, repetitive issues. These level-one tech support issues are simple but time-consuming. In addition, regular diagnostic tests of a company's computer systems completed by bots will help the human IT staff stay ahead of issues.

6. Support the sales process

Any sales division would tell you that time which should be spent building relationships is instead used on administrative tasks such as updating the customer relationship management (CRM) system, setting up the client in the billing system, and inputting data into sales metrics and monitoring systems. Robotic process automation can be used to streamline each of these activities.

7. Scheduling systems

Online scheduling of patients for healthcare appointments can be enhanced with RPA technology. Bots can gather all patient details such as insurance information, appointment requests, location preferences and more to make appointment scheduling more efficient.

8. Credit card applications

Today, bots are behind the scenes processing the majority of credit card applications. They can be programmed to easily handle all aspects of the process, from gathering information and documents, to doing credit and background checks, and ultimately deciding if the applicant is worthy of receiving credit and issuing the actual card.

9. Expense management

Most companies require their employees to input details on expense reports, such as business name, date and amounts, that an RPA bot can automatically extract from submitted receipts (a small sketch of this kind of extraction follows below).

10. Pulling data from multiple websites to find the best deal

Whether you're looking to travel or purchase a vehicle, you want to get the best deal, and RPA tech can help make it happen by scraping data off websites, comparing it and showing you the best deal.

These ten examples will hopefully give you a flavour of how RPA is used in practice today and show the enormous potential of this technology for any business.
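As a toy illustration of example 9, the Python sketch below pulls a vendor name, date and amount out of receipt text with regular expressions. The receipt text and patterns are invented for the example; production RPA tools combine OCR with far more robust extraction.

```python
import re

receipt_text = """ACME Office Supplies
Date: 2023-04-12
Total: GBP 42.50"""


def extract_expense(text: str) -> dict:
    """Pull the fields an expense-report bot would need from raw receipt text."""
    vendor = text.splitlines()[0].strip()
    date_match = re.search(r"Date:\s*(\d{4}-\d{2}-\d{2})", text)
    amount_match = re.search(r"Total:\s*([A-Z]{3})\s*([\d.]+)", text)
    return {
        "vendor": vendor,
        "date": date_match.group(1) if date_match else None,
        "currency": amount_match.group(1) if amount_match else None,
        "amount": float(amount_match.group(2)) if amount_match else None,
    }


print(extract_expense(receipt_text))
# {'vendor': 'ACME Office Supplies', 'date': '2023-04-12', 'currency': 'GBP', 'amount': 42.5}
```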
The digital divide was an issue during the earlier days of telephone services, long before the term was coined, remained so through the early days of broadband, and is still a problem today. Last week, the Federal Communications Commission said that it is seeking comments on ways to update its Lifeline program in order to bring broadband to more poor people. Yesterday, Wired discussed the effort and mentioned some providers, including Comcast, CenturyLink and Cox, that offer low-cost broadband access.

There is more news on that front than the announcement by the FCC, which, of course, is limited to the United States. One of the highest-profile private sector initiatives is Project Loon from Google. The initiative, which has one of the most comically poorly chosen names in recent memory, began in 2011. The goal is to provide Internet access to remote areas via a network of balloons floating in the stratosphere.

Bloomberg offered a very interesting update on Project Loon based on information provided by Google at its I/O developers conference last week in San Francisco. Important advances have been made. Engineers have developed a way of launching the balloons more easily and with fewer personnel, a big step considering where the balloons are deployed. They also found a way to send signals from balloon to balloon. Enabling signals to be sent between balloons greatly reduces the need for ground stations and makes the overall success of the project more likely.

The story says that the company hopes to achieve "a few days of continuous service" in tests by the end of the year. It's a tough project: so far, during trials in Australia, Chile, New Zealand, Brazil, and other countries, Google has succeeded only in providing intermittent access before the wind carries a balloon off. If it can overcome the remaining challenges, Cassidy is hoping to roll out the service more widely by the end of 2016 and is looking at underserved Internet markets such as Africa, Latin America, and Southeast Asia as the best places to start.

Facebook also is addressing the worldwide digital divide with Internet.org, which provides access to Facebook and a number of other sites via 2G and 3G networks. Mobile data and broadband are not necessary. So far, according to TheNextWeb, Internet.org is available in more than 10 countries, including Zambia (the first launch, last July), Tanzania, Kenya, Colombia, India, Indonesia and Malawi. Sites include AccuWeather, BBC, BabyCenter, Malaria No More and Urdupoint Cooking. The approach is controversial, however. Critics say that it violates the spirit of net neutrality and gives Facebook inappropriate control over the online experiences of users. Facebook Chairman and CEO Mark Zuckerberg explains and defends the Internet.org project in this video.

There is nothing new under the sun, including the inequality in resources between rich and poor. Access to the Internet is essential, following only necessities such as food, water and shelter. Hopefully, the FCC, Facebook, Google and others will be successful in breaking down some of those barriers.

Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at firstname.lastname@example.org and via Twitter at @DailyMusicBrk.
Misconfigured IoT Devices

There are billions of IoT devices online: more connected things than people in the world. A forecast from International Data Corporation (IDC) estimates that there will be more than 41 billion connected IoT devices in 2025. But this rise of IoT devices also comes with scores of new security challenges.

The default or zero credentials threat

IoT devices include everything from smart watches, thermostats, bulbs, refrigerators, smart TVs, baby monitors and alarms to medical equipment, food sensors, traffic routing and air-conditioning, to name a few. One of the IoT challenges is weak authentication and the use of default credentials in these devices, which usually ship as embedded systems with no configuration required.

According to a 2020 Deloitte whitepaper, Internet of Things (IoT) | The rise of the connected world, about 70% of devices are configured to use the factory-set default usernames and passwords. Many users never change these default passwords, and many device manufacturers publish their default credentials online. According to Symantec's Threat Landscape Trends – Q2 2020 report, '123456', 'Default', 'admin', 'user' and 'root' are among the top 10 passwords used in IoT attacks ('123456' being the most commonly used).

Avast researchers discovered that 600,000 Chinese-manufactured GPS trackers had '123456' as the default password, and these devices were exposing the real-time GPS coordinates of children. Another example involves baby monitors that had the default password '123' printed on the back of the device, allowing hackers to spy on users. And if that is not enough, you can also easily find websites that let you look up IoT device default passwords by brand and model. These loose password practices are but one reason why IoT devices are exceptionally vulnerable to malware and a host of other cyber attacks.

The threat of default credentials is significant and is beginning to be recognized as such. For instance, in July 2020 the U.K. government proposed a new consumer protection law banning single, universal passwords for devices.

The vulnerabilities threat

In addition, there are plenty of devices with no passwords or authentication at all. Both hackers and security researchers use search engines for the Internet of Things, like Shodan or Censys, to detect vulnerable "things" connected to the Internet, such as security cameras, webcams, SCADA systems and air-conditioning units. Many articles relate stories of hackers or researchers gaining access to complete strangers' webcams, posing dangers to both privacy and security, as cameras could be compromised for spying or even blackmail.

With search engines like Shodan or Censys, enterprises can search by IP, server, type, ports, banner and more. This capability allows cybersecurity analysts to detect CVE (Common Vulnerabilities and Exposures) vulnerabilities. For instance, BlueKeep (CVE-2019-0708) is a software vulnerability that affects older versions of Microsoft Windows (such as Windows XP, Windows 7 and Windows Server 2008 R2). It attacks the operating system's Remote Desktop Protocol (RDP) and allows for the possibility of remote code execution. Vulnerable systems could be infected with cryptocurrency miners or, in some instances, ransomware. Approximately 1 million systems were vulnerable to the BlueKeep vulnerability in May 2019.

According to the SANS Internet Storm Center, in November 2020 over 245,000 systems were still left unpatched and therefore still running the vulnerable Windows RDP service. That represents 25% of the original number, a rather high figure more than 18 months after the disclosure of the vulnerability and in the middle of a ransomware spree.

How CybelAngel helps prevent IoT attacks

Traditional vulnerability management solutions are incapable of guaranteeing the safety of IoT assets at scale. Hence, businesses worldwide rely on CybelAngel to prevent harmful attacks by detecting and securing vulnerable IoT devices before they are breached. CybelAngel's Asset Discovery and Monitoring solution eliminates shadow risk by alerting you to vulnerable assets through a white-hat, risk-based approach that allows your organization to prevent attacks against valuable data… or persons. To get a demo of our Asset Discovery & Monitoring solution, just CLICK HERE.
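As a rough illustration of the kind of search described above, the sketch below uses the official Shodan Python library to look for exposed RDP services. The query string and result handling are illustrative assumptions, an API key is required, and this kind of reconnaissance should only be pointed at assets you own or are authorised to assess.

```python
import shodan  # pip install shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

# Look for hosts exposing RDP (port 3389) to the Internet.
results = api.search('port:3389 "Remote Desktop"')

print(f"Exposed RDP endpoints reported: {results['total']}")
for match in results["matches"][:5]:          # show a handful of results
    print(match["ip_str"], match.get("org", "unknown organisation"))
```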
Artificial intelligence seems to be overturning every part of life. How about this one: AI and its country cousin, machine learning, working together to develop new drugs. To see how AI can help and what some of the risks might be, Federal Drive with Tom Temin spoke with the Food and Drug Administration's Associate Director for Policy Analysis within the Center for Drug Evaluation and Research, Dr. Tala Fakhouri.

Tom Temin: I imagine this is something of great interest to CDER, where you work, and to FDA writ large. I imagine the drug companies, the manufacturers and developers, have got to be looking at AI. Fair to say?

Tala Fakhouri: That is fair to say. In fact, we've received over 175 submissions for drug approval that included the use of AI and machine learning in drug development. And the use has really traversed the spectrum of drug development, from drug discovery all the way to clinical research, to manufacturing and to post-market safety surveillance.

Tom Temin: Yeah, that was really my next question. Where in the lifecycle of a drug does all this apply? Because at the development stage they develop new molecules, essentially. How could AI help in that stage? Let's concentrate there for a moment.

Tala Fakhouri: Right. Artificial intelligence and machine learning, for example, can be used to predict how specific proteins will fold, to predict certain targets for molecules that are already on the market, or to discover new uses for existing molecules. This is something that we call drug repurposing. These uses are very exciting, and we think they may contribute to the development of safer drugs faster. However, a lot of the application of AI in that early phase of drug discovery is outside of what FDA regulates. But we still see submissions that include information about the use of AI in that early stage of drug development.

Tom Temin: And do the same worries apply for AI here as apply everywhere else? That is, did they use sufficient and correct data such that the output is reliable? Will that molecule do what they hope it will do? I mean, you worry about the data and the algorithms.

Tala Fakhouri: One way that we evaluate the use of AI and machine learning in drug development: let's say we got an application where AI is used in clinical research to predict outcomes for patients, to predict how they will respond to a treatment. The way that we would review this application would take into consideration the benefits and the risks of using this technology. Specifically, we emphasize the ethical use of AI. We emphasize issues related to transparency. We need to know, for example, the data that was used to develop these models. Is that data high quality? Does it address issues related to bias, which may then lead to bias in the algorithm itself? We also look at the model's performance to make sure that it is predicting, or performing, in a way that is consistent with how the sponsor or the specific researcher intended.

Tom Temin: Got it. And let's move on to the topic of how AI could apply to clinical testing, because that's in some ways one of the longest parts of drug development. You might be able to come up with the new drug in six months, but then you've got to spend five years testing it. And that could be really controversial, I imagine, because tests take as long as they take, and developments of after effects or cures take as long as they take. Can AI speed that up in a way that you can rely on? Am I asking the right question?

Tala Fakhouri: You are asking the right question. AI can be used in clinical research. In fact, for us on the FDA side, the majority of AI uses in drug submissions are in the clinical research part of the spectrum. AI can be used for outcome prediction. This is one of the strengths of AI and machine learning, its predictive power, so it can take information about the patients, for example their lab values and their demographics, and predict how they would respond to a specific drug and whether they will respond to a specific dose. This is wonderful, because you could do things like dose optimization using this technology, and it's pretty fast. So we do expect it to expedite certain aspects of clinical research. We also know that AI, for example, is used for something known as patient selection and stratification. Finding the patients that would be able to respond to the drug is very important, and AI can be used to do that. There are also new applications of AI that are very interesting to the FDA, for example the creation of something known as a digital twin. So, for example, you would have a single-arm trial where everyone is taking the treatment, and then you would simulate what would happen to a specific patient had they taken the placebo instead. So this is another application that we expect to see.

Tom Temin: We're speaking with Dr. Tala Fakhouri. She is associate director for policy analysis at the Center for Drug Evaluation and Research in the Food and Drug Administration. And for the FDA, what is it that you need to be able to keep up with this? You hinted earlier that there might be an extension of regulatory oversight that you would need. And would that have to come from Congress, for example?

Tala Fakhouri: For us on the FDA side, the evidentiary standards needed to support drug approval remain the same regardless of the technology that you're using. It's very important to emphasize that: the paradigm that we currently use has not changed. We are actively monitoring advances in AI and machine learning, and we continuously engage with experts, whether through expert workshops or, recently in May, we published two discussion papers, one focusing on the entire drug development landscape and the other more specifically targeting the use of AI in drug manufacturing. In both documents, we raise questions to help engage with the community and with stakeholders, and we hope to receive a lot of good feedback. The purpose of these discussion papers is really to understand whether there are areas or gaps where additional regulatory clarity is needed. But I can tell you, as of now, with the 175 submissions that we've received, our evidentiary standards are the same. The paradigm that we're using has not changed, because there isn't a need to provide additional clarity as of now.

Tom Temin: But it sounds like you have the potential, maybe, for some additional rulemaking based on what these submissions say and where those gaps might be.

Tala Fakhouri: After we receive the comments on the docket for the two discussion papers, we plan to carefully and thoughtfully analyze all of the feedback that we've received. We plan to conduct public workshops next year to address the community's needs in terms of additional regulatory clarity. And if there is a need to provide future guidance, of course we'll be happy to do that, because we want to make sure that this technology is used in a responsible way and used to develop new, safe, effective medications for the public.

Tom Temin: And what about the requirements that FDA would have in terms of your own people and their knowledge to keep up with developments in AI and algorithms and how this is all being used? Because there are many forms of AI, many sources of AI, and they've got to keep up with that no less than the drug industry.

Tala Fakhouri: Right. Internally within the FDA, within CDER, we are doing a lot of work to bolster our workforce and make sure that folks are trained in the use of these technologies. You can take classes, you can attend seminars, but we are also hiring experts who can help us better understand the use of this technology in practice.

Tom Temin: A final question. With AI, do you anticipate, just from your general sense of what's going on in the world, that this has the potential to lower the cost of drug development and deployment? In the ideal world, the latest cancer drug would cost as little as an aspirin; probably that's unlikely. Could this drive cost out of the entire lifecycle, do you think?

Tala Fakhouri: Costing of drugs is outside of the domain of what I work on within the agency. But one can expect that if you have drugs being developed faster, this may reduce costs on all ends.

Tom Temin is host of the Federal Drive and has been providing insight on federal technology and management issues for more than 30 years.
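As a loose, self-contained illustration of the outcome-prediction use case described in the interview, the sketch below fits a simple classifier to synthetic patient data. It is not an FDA method or a real submission; the features, data and model choice are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "patients": three illustrative features (e.g. age, a lab value, dose).
X = rng.normal(size=(200, 3))
# Synthetic ground truth: response driven mostly by the lab value and the dose.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

new_patient = np.array([[0.2, 1.1, -0.3]])
prob = model.predict_proba(new_patient)[0, 1]
print(f"Predicted probability of responding to treatment: {prob:.2f}")
```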
And what is more interesting? The hack uses nothing more than guesswork, by querying multiple e-commerce sites. In a new research paper entitled "Does The Online Card Payment Landscape Unwittingly Facilitate Fraud?", researchers describe a technique, dubbed the Distributed Guessing Attack, that can circumvent all the security features put in place to protect online payments from fraud.

The issue lies in the Visa payment system, where an attacker can guess and attempt all possible combinations of expiration dates and CVV numbers across hundreds of websites, because:

- Online payment systems do not detect multiple incorrect payment requests if they are performed across multiple sites. They also allow a maximum of 20 attempts per card on each site.
- Websites do not run consistent checks, and they vary in which pieces of card information they request.

Newcastle University PhD candidate Mohammed Ali says neither weakness alone is too severe, but when used together and exploited properly, a cyber criminal can recover a credit card's security information in just 6 seconds, presenting "a serious risk to the whole payment system."

Here's how the attack works:

The attack is nothing but a very clever brute force attack that works against some of the most popular e-commerce sites. Instead of brute-forcing just one retailer's website, which could trigger a fraud detection system due to incorrect guesses or lock the card, the attacker spreads the guesses for the card's CVV number across multiple sites, with each attempt narrowing the possible combinations until a valid expiration date and CVV number are determined.

The video demonstration shows that it takes only 6 seconds for a specially designed tool to reveal a card's secure code. Once a valid 16-digit number is obtained, the hackers use web bots to brute force the three-digit card verification value (CVV) and the expiration date across hundreds of retailers at once. The CVV takes a maximum of 1,000 guesses to crack, and the expiry date takes no more than 60 attempts. The bots then work to obtain the billing address, if required.

"These experiments have also shown that it is possible to run multiple bots at the same time on hundreds of payment sites without triggering any alarms in the payment system," the researchers explain in the paper. "Combining that knowledge with the fact that an online payment request typically gets authorized within two seconds makes the attack viable and scalable in real time. As an illustration, with the website bot configured cleverly to run on 30 sites, an attacker can obtain the correct information within four seconds."

The attack works against Visa card customers, as the company does not detect multiple attempts to use a card across its network, while MasterCard detects the brute force attack after fewer than 10 attempts, even when the guesses are spread across multiple websites.

How to Protect Yourself

The team investigated the payment websites of the Alexa top 400 online merchants and found that the current payment platform facilitates the distributed guessing attack. The researchers contacted the 36 biggest websites against which they ran their distributed card number-guessing attack and notified them of their findings. As a result of the disclosure, eight sites have already changed their security systems to thwart the attacks. However, the other 28 websites made no changes despite the disclosure.

For Visa, the best way to thwart the distributed card number-guessing attack is to adopt an approach similar to MasterCard's and lock a card when someone tries to guess card details multiple times, even when the attempts are spread across multiple websites. For customers: avoid using Visa credit or debit cards for making online payments, always keep an eye on your statements, and keep the spending limit on your Visa card as low as possible.
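As a back-of-the-envelope illustration of why the numbers quoted above make the attack practical, the sketch below estimates how the guess space for the expiry date and CVV shrinks when requests are spread over several sites. The 2-second authorisation time is the figure quoted in the paper; the parallelism figures are assumptions for illustration only.

```python
expiry_guesses = 60        # expiry dates to try, per the article
cvv_guesses = 1000         # all three-digit CVV combinations
seconds_per_request = 2    # typical online authorisation time quoted above

total_guesses = expiry_guesses + cvv_guesses   # fields are guessed one after the other

for parallel_sites in (1, 30, 100):
    guesses_per_site = total_guesses / parallel_sites
    sequential_time = guesses_per_site * seconds_per_request
    print(f"{parallel_sites:>3} sites: ~{guesses_per_site:.0f} guesses each, "
          f"~{sequential_time:.0f} s if each site is queried one request at a time")

# The paper's quoted figure of roughly four seconds on 30 sites additionally assumes
# the requests sent to each site are fired concurrently rather than one by one.
```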
The SIPOC diagram is perhaps one of the most underutilized tools for people involved in process mapping. Here are 5 reasons why you should add this powerful tool to your process management toolbox.

What is a SIPOC Diagram?

A SIPOC diagram is a tool that can be used to map out a process at a high level. It stands for Suppliers, Inputs, Process, Outputs, and Customers. To learn more, check out this article "What is a SIPOC Diagram, an Introduction". Here are the top 5 benefits of a SIPOC diagram.

1. Helps to Define a Process

A SIPOC diagram provides an overall look at a process, focusing on suppliers, inputs, key activities, outputs, and customers. Create a SIPOC diagram as a starting point when designing a new process or when trying to improve an existing one.

2. Helps to Identify Problems

Another benefit is that it can help to identify problems within a process. Start by mapping the process and identifying suppliers, inputs, outputs, and customers. Use this diagram in problem-solving sessions with stakeholders to help find any potential issues. For example, delays in the process could be caused by an issue with one of the suppliers.

3. Helps to Improve Communication

A SIPOC diagram can also help to improve communication within an organization. It gets people on the same page by clearly mapping out the process and identifying the various stakeholders. This helps to ensure that all team members are working towards the same goal, which can be especially valuable in large organizations where many different departments and individuals are involved in a process.

4. Helps to Reduce Waste

Another benefit is that it can help to reduce waste within a process. The SIPOC provides an excellent high-level view of the process, which can make it easier to spot areas where there may be unnecessary steps or duplicate information. The SIPOC is often used in Lean Six Sigma projects to streamline the process and make it more efficient.

5. Can Be Used for Multiple Purposes

Finally, a SIPOC diagram can be used for multiple purposes. In addition to process improvement, it can be used in a number of areas, including training new employees, project management, understanding customer requirements, and documenting an existing process.

Want to learn more about process diagrams? Check out this article on Process Diagrams: Why they are important & examples to get you started.

One of the problems with process mapping is that it often gets right into the weeds. Details are important, but so is a high-level view that provides context and aids in communication. Consider using this tool as part of your next process mapping exercise.
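To make the idea concrete, here is a minimal sketch of a SIPOC captured as structured data, using a hypothetical order-fulfilment process. The suppliers, inputs, steps, outputs and customers listed are illustrative assumptions, not part of the article above.

```python
# A SIPOC is just five ordered lists, so a small mapping is enough to capture one.
sipoc = {
    "Suppliers": ["Warehouse", "Payment gateway"],
    "Inputs":    ["Customer order", "Payment authorisation"],
    "Process":   ["Receive order", "Pick items", "Pack", "Ship", "Confirm delivery"],
    "Outputs":   ["Shipped parcel", "Shipping confirmation email"],
    "Customers": ["End customer", "Customer service team"],
}

for element, items in sipoc.items():
    print(f"{element:<10}: {', '.join(items)}")
```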
Before you develop a bot, understand the bot and its capabilities to help you and your business. This understanding brings clarity and efficiency to the bot development.

Chatbots are artificial intelligence systems that users interact with through a text or voice interface. These interactions can be straightforward, like asking a bot for the weather report or tracing a missing entry in your bank account. Interactions can be structured, with users choosing options from a list of items presented, or unstructured, with a freestyle flow similar to a conversation involving a human agent. Whatever the type of user interaction, good design helps in building an efficient bot. A good design allows answering most of the user queries, anticipates all the conversation flows, and expects the unexpected.

From the version 7.2 release onwards, the platform has an option to help you design your conversations using the Storyboard feature. It is an intuitive conversation designer that simplifies and streamlines the bot blueprinting process, without the need for external flow charts, tracking, and versioning tools. See here for more.

Platform Recommendations: The following steps are considered while designing a bot:

- Understand the User Needs: To set the scope of the bot. The business sponsors, business analysts, and product owners play an important role in identifying user requirements by gathering market requirements and assessing internal needs.
- Set the Chatbot Goals: This helps you create a well-defined use case. It involves converting the identified scope into a use case. It is advisable to involve the bot developer in this phase.
- Design a Chatbot Conversation: To define chatbot behavior in every possible scenario of its interaction with the user. Simulating conversations goes a long way in identifying every possible scenario.

Once the bot capabilities and ideal use case are well-defined, the bot developer can begin the process of configuring bot tasks, defining intents and entities, and building the conversational dialog.

Things to keep in mind while designing a chatbot: try to answer the following questions (some, if not all):

- Who is the target audience? Technical help bots targeted at tech-savvy customers need a different design compared to help bots for a layperson, such as a bank customer. Hence, assessing the target audience is always important.
- What bot persona will resonate the most with this group? This will help define how the bot talks and acts in every situation.
- What is the purpose of the bot? The goal, i.e. the customer query that the bot needs to address, defines the endpoint of any conversation.
- What pain points will the bot solve? The purpose and scope of the bot determine what the bot addresses and when a human agent needs to take over.
- What benefits will the bot provide for us or our customers? The main benefit of using a bot is time saved. Users need not waste time waiting for a human agent to become available to answer their query, and you, as the business owner, need not worry about not being there to cater to all customer needs.
- What tasks do I want my bot to perform? Simulating user conversations helps identify the tasks that the bot needs to handle.
- What channels will the bot live in? This will, to some extent, drive the way the bot is presented; the options available to the chatbot are limited by the channel or medium through which it is used.
- What languages should my bot speak? When catering to a multilingual community, language support is imperative, and building the dictionary at the same time is useful.
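As a loose illustration of the structured interaction style mentioned earlier, where the user picks from a presented list rather than typing freely, here is a minimal sketch. The menu items and replies are invented for the example and are not taken from the platform documentation.

```python
# The bot presents fixed options and only has to handle those choices.
menu = {
    "1": "Check account balance",
    "2": "Report a lost card",
    "3": "Talk to a human agent",
}

def handle(choice: str) -> str:
    if choice not in menu:
        return "Sorry, please pick one of the listed options."
    if choice == "3":
        return "Transferring you to an agent now."
    return f"Okay, let's help you with: {menu[choice].lower()}."

print("How can I help you today?")
for key, label in menu.items():
    print(f"  {key}. {label}")

print(handle("2"))   # -> Okay, let's help you with: report a lost card.
```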
Like every other industry, healthcare is experiencing a period of rapid change. An ageing population, coupled with rising healthcare costs, has created significant challenges for the sector. Telehealth, the delivery of healthcare services remotely, has emerged as a valuable solution to address these issues.

The Rise of Telehealth

Telehealth is changing how patients access care. By connecting patients with providers, it offers increased accessibility, efficiency, and cost-effectiveness. A study by McKinsey found that virtual visits increased by 1,000% between February and March 2020 alone, highlighting the rapid adoption of telehealth during the COVID-19 pandemic.

IoT: A Foundation for Telehealth

At the core of successful telehealth lies the Internet of Things (IoT). By connecting a vast network of medical devices, sensors, and wearables, IoT enables real-time data collection and analysis, improving healthcare delivery. Some important applications of IoT in telehealth include:

- Remote Patient Monitoring (RPM): IoT devices continuously track vital signs, medication adherence, and other health metrics, enabling early detection of potential health issues and proactive interventions.
- Chronic Disease Management: IoT devices provide essential data for managing chronic conditions like diabetes, asthma, and hypertension. By tracking blood glucose levels, peak flow rates, or blood pressure, patients and providers can make informed decisions to optimize care.
- Elderly Care: IoT solutions enhance safety and independence for seniors with features like fall detection and remote medical alert systems.

Advancing Healthcare Delivery with Telehealth and IoT

The synergy between telehealth and IoT is driving a healthcare revolution. By combining remote consultations with real-time patient data, healthcare providers can deliver more personalized, proactive, and efficient care. For example, a patient with chronic heart failure can use IoT-enabled devices to monitor blood pressure, heart rate, and oxygen levels. This data is transmitted securely to healthcare providers, who can then remotely adjust medication, provide lifestyle guidance, and schedule necessary in-person visits.

The Business Case for Telehealth and IoT

The business case for telehealth and IoT is compelling. Studies have shown that telehealth can reduce healthcare costs, improve patient satisfaction, and increase access to care for underserved populations. By investing in telehealth and IoT solutions, healthcare organizations can:

- Enhance patient experience and satisfaction
- Improve population health management
- Reduce readmissions and emergency department visits
- Increase revenue through new care models

Bridgera: Optimizing Healthcare with IoT-Powered Telehealth

Bridgera is at the forefront of developing cutting-edge telehealth solutions. We utilize the combined power of IoT, analytics, mobile technology, and cloud computing to deliver our secure and scalable platform, Bridgera myHealth. myHealth enables healthcare providers to:

- Build secure telehealth monitoring systems: myHealth prioritizes patient privacy with HIPAA compliance, ensuring secure data storage, end-to-end encryption, and advanced authentication protocols.
- Manage diverse IoT devices: Seamlessly connect and manage a wide range of medical devices and sensors for efficient remote data collection and analysis.
- Gain AI-powered insights: Leverage AI for anomaly detection, personalized care plans, and data-driven decision-making, leading to improved patient outcomes.
- Provide real-time patient monitoring with alerts: Track patient vitals and health status in real time, allowing for prompt intervention when there are concerning changes (a simple sketch of this idea appears after this article).
- Offer personalized care via telehealth communication tools: Communicate with patients through video, voice, or text for timely, personalized care plans, improving patient engagement.

By partnering with Bridgera, healthcare organizations can deliver exceptional patient care, improve operational efficiency, and achieve better health outcomes: exactly what a truly dedicated healthcare organisation wants. Contact us today to learn how we can help you transform your healthcare delivery.

About Bridgera: Bridgera effortlessly combines innovation and expertise to deliver cutting-edge solutions using connected intelligence. We engineer experiences that go beyond expectations, equipping our clients with the tools they need to excel in an increasingly interconnected world. Since our establishment in 2015, Bridgera, headquartered in Raleigh, NC, has specialized in crafting and managing tailored SaaS solutions for web, mobile, and IoT applications across North America.

About Author: Gayatri Sriaadhibhatla is a seasoned writer with a diverse portfolio spanning multiple industries. Her passion for technology and a keen interest in emerging IoT trends drive her writing pursuits. Always eager to expand her knowledge, she is dedicated to delivering insightful content that informs the audience.
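Following on from the real-time monitoring and alerting bullet above, here is a minimal sketch of the threshold-checking idea on the backend. The field names, thresholds, and alert format are illustrative assumptions, not details of Bridgera myHealth.

```python
# One incoming reading from a hypothetical remote-monitoring device.
reading = {"patient_id": "P-1023", "heart_rate": 118, "spo2": 91}

# Illustrative acceptable ranges; real care plans would set these per patient.
THRESHOLDS = {
    "heart_rate": (50, 110),   # beats per minute
    "spo2": (92, 100),         # blood oxygen saturation, %
}

alerts = []
for metric, (low, high) in THRESHOLDS.items():
    value = reading.get(metric)
    if value is not None and not (low <= value <= high):
        alerts.append(f"{metric}={value} outside {low}-{high}")

if alerts:
    print(f"ALERT for {reading['patient_id']}: " + "; ".join(alerts))
else:
    print("All readings within range.")
```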
Comparison between ISO 27001 and PCI-DSS

ISO 27001 and PCI-DSS are two of the most widely used international standards for information security. ISO 27001 is a comprehensive standard that provides a framework for organizations to manage risk and protect information assets, while PCI-DSS is a specific standard that provides a set of requirements for protecting credit card data. Both standards are designed to protect sensitive data, but ISO 27001 is more comprehensive and covers a wider range of areas, while PCI-DSS is more specific and focused on protecting credit card data.

What is ISO 27001?

ISO 27001 is an international standard that outlines the requirements for an information security management system (ISMS). It provides a framework for organizations to develop, implement, and maintain a comprehensive information security program. The standard focuses on protecting the confidentiality, integrity, and availability of data and information systems. It also covers topics such as risk assessment, incident response, and security controls. The standard provides a set of best practices for an organization to follow in order to ensure the security of its information assets. ISO 27001 is commonly used by organizations to demonstrate their commitment to information security and to demonstrate compliance with applicable laws and regulations.

What is PCI-DSS?

The Payment Card Industry Data Security Standard (PCI-DSS) is a set of security requirements designed to ensure that all companies that process, store, or transmit credit card information maintain a secure environment. The standard was created by the Payment Card Industry Security Standards Council (PCI SSC) and is managed by the major credit card companies such as Visa, MasterCard, American Express, and Discover. PCI-DSS is designed to protect cardholder data and prevent fraud by requiring companies to implement strong data security measures. These measures include encryption, firewalls, access control, network segmentation, and regular security assessments. Companies must adhere to the PCI-DSS requirements in order to remain compliant and avoid penalties.

The Key Similarities Between ISO 27001 and PCI-DSS

1. Both standards focus on the security of sensitive data.
2. Both standards require a risk assessment to be conducted to identify vulnerabilities and threats.
3. Both standards require organizations to implement measures to protect against identified risks.
4. Both standards require organizations to regularly review and update their security policies and procedures.
5. Both standards require organizations to maintain comprehensive documentation of their security processes and procedures.
6. Both standards require organizations to monitor and audit their security posture on a regular basis.
7. Both standards require organizations to train their personnel on security best practices.
8. Both standards require organizations to have incident response plans in place.

The Key Differences Between ISO 27001 and PCI-DSS

1. ISO 27001 is an information security standard, while PCI-DSS is a payment card industry security standard.
2. ISO 27001 is focused on protecting the confidentiality, integrity, and availability of information, while PCI-DSS focuses on protecting cardholder data.
3. ISO 27001 is applicable to any organization, while PCI-DSS is only applicable to organizations that process, store or transmit payment card data.
4. ISO 27001 requires organizations to have a comprehensive information security management system, while PCI-DSS requires organizations to have specific security controls in place.
5. ISO 27001 requires organizations to carry out regular risk assessments and audits, while PCI-DSS requires organizations to carry out quarterly security scans.
Improve Your Password Security

Users who neglect cyber advice and reuse the same passwords on multiple websites face increased cyber risk and should rethink their actions to better protect their online accounts. Since 90% of cyber-attacks require human interaction to be successful, a people-centric approach to security is essential for organisations. May 5th is World Password Day, and to help internet users and enterprises, here are some top tips on password management and creation that can be leveraged to increase cyber security.

Passwords are one of the first critical barriers between a person, a threat actor and a successful cyber attack. One of the most common mistakes that people make is reusing the same ID/email address and password across multiple sites and devices. Password reuse is exacerbated by the increasing volume and success rates threat actors are reaping with advanced credential phishing campaigns, which use fake websites resembling the login page of a legitimate online service to steal usernames and passwords.

Consumers are well advised to use different passwords, especially on critical financial and data-driven accounts. Be sure to turn on multi-factor authentication (MFA) for as many accounts as possible. If MFA is not an option for the account, use a password manager. A password manager creates randomized passwords that are safely stored, encrypted, and accessible across all personal devices, and it reduces the burden of trying to remember complicated login credentials across multiple websites.

If you use a passphrase as part of your password, make sure you never use common words or phrases, names or dates associated with you or direct family members. It is also best to change all personal passwords twice a year and business passwords every three months.

Because cyber attacks almost always require human interaction to be successful, it remains important for businesses to implement a people-centric approach to security. Ensure that both your remote and in-office employees receive training and education on basic cybersecurity best practices, including how to identify a credential phishing attempt and how to securely manage passwords.

Additional Password Management & Creation Tips

Use multi-factor authentication (MFA) for as many accounts as possible. The basic concept is to use two forms of 'evidence' that validate an identity before access is granted, increasing account protection. For example, when you sign into your account, you will receive an alert on your phone requesting confirmation in order to log in. This approach frustrates the automated systems threat actors use to guess passwords or to plug in stolen passwords.

Use a secure password management application that can recall multiple passwords and automatically input them when needed. Using a password management application removes the need to remember and juggle multiple passwords, which makes users more inclined to use longer, more secure passwords.

When it comes to password creation, avoid common words, phrases, names, and dates associated with you or direct family members. Threat actors can easily cross-reference any data captured about you to arrive at the correct combination and break into your accounts. You should also change personal passwords twice a year and avoid reusing passwords across accounts.

For business passwords, change your critical passwords every three months and put an automated system policy in place that sets a deadline for refreshing passwords. That policy can determine password requirements and prevent recent passwords from being reused.
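For readers who want to script this rather than rely on a password manager's generator, here is a minimal sketch using Python's standard library. The length and character set are illustrative choices, not a recommendation from the article.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```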
Rising concerns over carbon emissions and climate change have caused alarm around the world. Such concerns have led climate change authorities in different countries to call for the sale and adoption of fully battery-electric vehicles by mid-century. To achieve a net-zero state, all vehicles, including heavy-goods vehicles and passenger vehicles, are required to be fossil-fuel free. The goal is to achieve net-zero carbon emissions globally by 2040 and to restrict the global temperature rise to 1.5 degrees.

According to Our World in Data, road transport contributes almost 11.9% of CO2 emissions because of its use of fossil fuels. 60% of this emission is due to cars, buses, and motorcycles, while 40% is due to road freight such as lorries and trucks. Thus, governments are actively working with authorities in other countries to join forces with societies, companies, and frontliners of climate change to inspire environmentally friendly actions. All these initiatives are providing traction to the adoption of electric vehicles.

Owing to these efforts, AllTheResearch concluded that the global electric vehicles market is expected to reach USD 1,323.0 Bn by 2027, exhibiting a flourishing CAGR of 34.1% through 2027. Sales of all-electric vehicles increased by 150% last year, representing nearly 28% of automotive sales in the global market. These numbers suggest that the electric vehicles market has reached a turning point, as 7% of vehicles on the road have been electrified.

Key challenges related to electric vehicles

Despite the immense benefits to the environment, electric vehicles are struggling to take off in the market owing to challenges such as the availability of EV charging stations, charging cars with an app, weak data transmission, and optimizing energy usage. Additionally, a few other challenges include:

Read this whitepaper to understand:

Author Name: Nitish P.
In today's digital age, cybersecurity is critical. It encompasses the strategies and measures put in place to safeguard not only our computers and networks but also the invaluable data they house. This all-encompassing shield defends against unauthorized access and threats lurking in the vast digital landscape. In essence, cybersecurity is the guardian of our digital realm, preserving the confidentiality, integrity and availability of our data. It is the defensive frontline protecting against a myriad of cyber adversaries who seek to compromise the very essence of our digital existence, making it an inseparable component of both personal and organizational data and analytics protection.

Why is cybersecurity important?

The ongoing pressures arising from geopolitical tensions underscore the critical role of cybersecurity. It serves as the guardian of supply chains, physical infrastructure and external networks, including vital investment partnerships. Organizations that prioritize cyber resilience are better equipped to face the challenges of this new era, preserving the integrity and continuity of their operations in an increasingly interconnected world. Our latest research, State of Cybersecurity 2023, reveals a striking truth: 97% of organizations have experienced a surge in cyber threats amid the geopolitical unrest. More than half of organizations prioritize fortifying third-party and external network defenses, acknowledging these as the areas most susceptible to attack. These findings underscore the critical role of cybersecurity in safeguarding organizational integrity and resilience amidst the complexities of the modern world.

How does cybersecurity work?

Cybersecurity is a multifaceted system employing technologies and protocols to safeguard digital assets. It deploys stringent access controls to limit access to authorized users. Firewalls and Intrusion Detection Systems (IDS) monitor network traffic, using predefined rules and anomaly detection to thwart threats. Encryption converts data into an unreadable format, ensuring confidentiality. Endpoint security, including antivirus and intrusion prevention, guards against malware and unauthorized access. Security Information and Event Management (SIEM) tools enable real-time threat detection. Penetration testing identifies vulnerabilities, while security policies and training instill best practices and awareness. Incident response plans guide reactions to breaches. Continuous monitoring of network traffic and system logs identifies threats, and patch management keeps systems up to date, closing vulnerabilities. Cybersecurity adapts to evolving threats, protecting digital assets in a digitized world.

What are the types of cybersecurity?

The following cybersecurity domains work together to create a comprehensive defense against evolving cyber threats, securing digital assets and maintaining the integrity of information systems.

Network security: Protecting interconnected systems using firewalls, intrusion detection systems and virtual private networks.

Endpoint security: Safeguarding individual devices with antivirus, endpoint detection and response and mobile device management.

Cloud security: Maintaining data security in cloud platforms through encryption and access controls.

Application security: Securing software through coding practices, vulnerability assessments and web application firewalls.

IAM and data security: Managing user access and protecting sensitive data with encryption and data loss prevention solutions.
What Is an Incident Command System?

Benefits of an Incident Command System

An ICS safeguards organizations against the risks and damages posed by cyberattacks, minimizing the impact of potential incidents. As the cyber threat landscape continues to evolve, an ICS is a vital asset that offers robust protection in the event of a security compromise. By fostering a secure environment, the ICS promotes a shared understanding of roles, actions, and protocols, ensuring swift and coordinated responses. Cyberattacks aim to breach network security and steal confidential data, and several risks necessitate an ICS, including:

Functions of an Incident Command System

The planning component involves determining the technical basis for the plan's operations, coordinating the various functions, and processing incident information. It gathers and analyzes relevant data to develop comprehensive response strategies, facilitating a well-coordinated response system.

5. Finance and Administration

Elements of Effective Incident Command Systems

Clear Chain of Command

A Manageable Span of Control

Information and Intelligence Management (IIM)

Incident Management vs. Incident Command System

Incident management detects and resolves any information technology (IT) events that can disrupt an organization's critical operations and coordinates actions to mitigate an incident. It involves planning, organizing, and evaluating various response efforts and is broader than an ICS. Incident management encompasses the overall management and coordination of incident responses, proactively preparing organizations for cyberattacks. While an ICS is also a proactive method of defending against cyber threats, it is much more specific than incident management. An ICS is a fixed structure designed to facilitate effective incident management and provides a clear set of emergency management protocols.
The US is infamous for a glaring lack of comprehensive regulations when it comes to consumer data protection. In Europe, public outcry and increasingly advanced cybercrime had already resulted in the development of the General Data Protection Regulation (GDPR), touted as the world's most complete data security law. The GDPR has sweeping control over the processing of personal data by public and private entities. What's more, it has a much broader definition of personal data: aside from names and addresses, the GDPR treats any data that could identify a person, now or in the future, as personal data.

Only citizens of the European Union benefit from the comprehensive data protection provided by the GDPR. The regulation applies to public and private entities outside the EU, provided they offer goods or services to people living in its jurisdiction. Non-compliance is often met with steep penalties of up to 20 million euros or 4% of the company's global revenue, whichever is higher. There are only six specific grounds for lawful data processing, detailed in Article 6. Apart from penalties, the GDPR empowers data subjects to take legal action or receive compensation for damages.

In stark contrast to the EU's well-executed data protection regulation, data protection in the US is governed by multiple laws typically segmented into specific industries. While states like Utah are making serious progress in data protection regulation, many entities in the US remain largely unregulated and free to process consumer data as they please. Utah will soon pass legislation to give consumers access to their personal information and control over how companies handle it. This could take effect as soon as the last day of 2023. Utah will join three other states that have already passed similar statutes: Colorado, Virginia, and California. The new law would give consumers full rights to access, transfer, and delete their data and to opt out of the sale of their personal information for personalized advertising.

While these laws show tremendous progress in the right direction, US data protection laws are still a long way from the efficiency and comprehensiveness of the GDPR. For instance, only the attorney general can enforce a statute against non-compliant entities, something that the GDPR leaves entirely to the consumer. It does not help that the remaining data protection laws in the US are largely segmented into industries, such as the Health Insurance Portability and Accountability Act (HIPAA), which was passed in 1996 to protect sensitive patient data from disclosure without the knowledge and express consent of the patient.

The body with the broadest privacy authority in the US is the Federal Trade Commission (FTC). However, its jurisdiction is limited to companies practicing interstate commerce, which does not include financial institutions and network carriers. The FTC also adopts a different approach when regulating data protection. Unlike the two-tier fine system of the GDPR, the FTC seeks settlements from large corporations to deter misconduct. In 2019, the body settled with Facebook for $5 billion after the corporation violated an FTC order. While settlements of such magnitude are enough to deter large corporations, the FTC's framework still has plenty of gaps through which small businesses and entities, particularly those that do not operate in interstate commerce, can fall and remain unregulated.

Compared to other countries outside the EU, the US is still far from meaningful data protection regulation. Canada, for example, passed the Personal Information Protection and Electronic Documents Act (PIPEDA) in 2000, and despite it limiting its jurisdiction to private commercial enterprises, the EU considers it adequate. South Africa, South Korea, Israel, and Argentina have similar data protection regulations.

Data protection will only grow in importance, and more governments will pass legislation to protect their citizens' personal information. Comprehensive protection merely entails giving individuals control over the handling, sharing, and selling of their personal information. While a chunk of the world is well ahead of the US in data protection, solutions are not beyond reach. By expanding the jurisdiction of the FTC or allowing states to create their own data protection statutes, the US can attempt to replicate the level of protection currently provided by the GDPR. Nevertheless, it is too hopeful to expect uniform regulation akin to the GDPR at the federal level in the US. Data protection has yet to fully take hold in the majority of the United States.
Threat Posed By Satellite Systems

The satellite communications that ships, planes and the military use to connect to the Internet are vulnerable to hackers who, in the worst-case scenario, could carry out "cyber-physical attacks", turning satellite antennas into weapons. New research has found that a number of popular satellite communication systems are vulnerable to these attacks, which could also leak information and allow connected devices to be hacked. The attacks, which are merely a nuisance for the aviation sector, could pose a safety risk for military and maritime users, the research claims.

The attack works by connecting to the satellite antenna from the ground, through the Internet, and then using security weaknesses in the software that operates the antenna to seize control. At the very least, the attack offers the ability to disrupt, intercept or modify all communications passed through the antenna, allowing an attacker to, for instance, eavesdrop on emails sent through an in-flight WiFi system, or attempt to launch further hacking attacks against devices connected to the satellite network.

In some situations, the safety risk is higher still. In the case of the military, for instance, the attack also exposes the location of the satellite antenna, since such antennas usually need an attached GPS device to function, reports theguardian.com. The hackers could not actually affect any systems that control airplanes. Military and maritime spheres are vulnerable because these are remote vulnerabilities: anyone on the Internet can hack into a connected, vulnerable SATCOM device.

Ruben Santamarta, a researcher for the information security firm IOActive who carried out the study, said: "If you can pinpoint the location of a military base, that's a safety risk, but not for a plane or a ship", whose locations are generally public. Both military and maritime users are also at risk of what Santamarta described as "cyber-physical attacks": repositioning the antenna and setting its output as high as it will go, to launch a "high-intensity radio frequency (HIRF) attack".

"We're basically turning Satcom devices into radio frequency weapons," Santamarta said. "It's pretty much the same principle behind the microwave oven." A HIRF attack can cause physical damage to electrical systems.
<urn:uuid:502f7166-27c9-4dbd-b332-403c935fc8c5>
CC-MAIN-2024-38
https://www.cybersecurityintelligence.com/blog/threat-posed-by-satellite-systems-3678.html
2024-09-10T01:38:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00251.warc.gz
en
0.941222
487
2.953125
3
Among the jobs of an operating system is to maintain the disk file system, including the directory structure, file names and, for each file, the file size and the pertinent date information for when the file was created, modified and potentially accessed. I have been editing some videos lately and, after producing an MP4 using ffmpeg, I used PowerShell to set the file create and modified times to the date that the video was recorded. This makes it easier to sort and allows viewing software to display timelines – good things. MP4 files also have an internal date that is set by cameras to note the datetime of when the video was recorded. I was surprised to learn that Windows Explorer displays the MP4 date rather than the file date, and this can be problematic if the MP4 file does not contain valid date data. This post describes the issue in detail and provides steps for adjusting all the dates to the same datetime. Notice the use of the term "datetime" rather than "date" and "time". In Windows (and probably per an ISO standard), computers store file dates and times as one field, which on Windows NTFS is a 64-bit signed count of 100ns periods since January 1, 1601. The point is that it can split a second into ten million 100-nanosecond intervals, and 64 bits is a big enough space that it can accurately store both date and time for a very long time. As users, we never see this, and as programmers, we usually don't need to worry about the detail of the field; it's a 64-bit number that represents the file's date and time in one go – "datetime". Since it's one field, comparisons of before, after and same become very easy and can be done in one operation, and since it's signed, you can "subtract" datetimes to figure out how long it was between two times. Windows stores three datetime fields for every file on disk: Created, Modified and Accessed. MP4 stores its own datetime of when the file was created – normally stored by a video camera, assuming you remembered to set the clock! On Windows NT systems, the Create date is not the date that the file was first created; it is the date that this specific file was created. So if you copy the file from one location to a new location, that new file will have a fresh Create date equal to "now", and if you then edit the image/video with a program, this will update the "modified" date and all concept of when the file was created will be lost. Image and video cameras work to solve this by storing the date that the picture was taken or video recorded directly inside the image/video file. As a side note, the technique in PowerShell to adjust the file create date back to what it is supposed to be is:

$fdate = "1941-12-07 07:00"
$dname = Get-ChildItem ("Pearl*.jpg")
foreach ($name in $dname) {
    # Set both the Created and Modified timestamps back to the date the picture was taken
    Write-Output $name.FullName, $fdate
    $name.CreationTime = $fdate
    $name.LastWriteTime = $fdate
}

In my recent case, the files were MP4 – videos of the kids in their younger days. I had imported these from an analog camera to the computer and, after processing through a few different tools, the last step was using ffmpeg to convert to MP4. This worked great! Then I set the CreationTime to the historic datetime of the video being recorded and … thought I was done. View with Windows Explorer and … the DATE DID NOT TAKE! Explorer is showing that the video's datetime is 12/31/1969 7:00pm. That isn't right! Compare the GUI view to the command prompt view. Never trust a GUI! The command prompt has the "correct" datetime. Ask the Explorer GUI to show file details, and we have a match! It is listed as the "Media Created" date.
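As an aside (mine, not the original author's), the 64-bit NTFS "datetime" described above is easy to decode yourself. The following is a minimal Python sketch that converts a raw FILETIME value – a count of 100-nanosecond ticks since January 1, 1601 UTC – into a readable date; the sample tick value is invented purely for illustration.

from datetime import datetime, timedelta, timezone

WINDOWS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(ticks):
    # 1 tick = 100 ns = 0.1 microseconds, so divide by 10 to get microseconds
    return WINDOWS_EPOCH + timedelta(microseconds=ticks / 10)

# Because a datetime is one signed 64-bit quantity, "subtracting" two of them just works.
sample_ticks = 132_000_000_000_000_000  # hypothetical FILETIME value
dt = filetime_to_datetime(sample_ticks)
print(dt.isoformat())
print(dt - WINDOWS_EPOCH)  # elapsed time between the two datetimes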
And use EXIFTOOL to show the innards of the data, and we have a winner! The dates inside the MP4 file are all zeros! Actually, this makes sense. I captured the video content from one tool, plumbed it through a few others, and out the other side, ffmpeg produced the MP4 file. By the time it got there, the only record of the datetime that the video was recorded was in the name of the file. The MP4 file has zeros as its internal record of when the file was created. Sometimes things try to do too much. The job of an operating system is to maintain the file structure on disk. Esoteric concepts like what is inside the files – material for video editing applications – are, IMHO, an application responsibility. BUT – Windows Explorer is trying to help out and is displaying the MP4 internal date rather than the Created, Modified, Accessed times from the file on disk. Explorer is trying a bit to be a video display application, something that really isn't its job. Then again, it can also show thumbnails for images and … people like that. Is it something that SHOULD be in an operating system? The lines blur, and we can all have a nice debate on whose job it is to manage the files and whose job it is to display the image/video content. Back to reality – Explorer is not showing the date that I want it to show, and the solution is to either (a) configure Explorer to show a different date field as the date or (b) modify the MP4 files to have datetime information embedded. "B" is the better solution. I went looking on the internets and found a fine post on StackOverflow (link). Bingo! I am not the first person to have this problem. In the post, Edward Brey even provides Visual Basic source code to modify the MP4 datetimes to be any date you want. Awesome. It took a bit to compile that up; I ran it and everything looked good, but I wasn't done. In running the VB code to adjust the MP4 datetime, the file modified date was also modified, so I ran my PowerShell script one more time to modify the file datetimes and … by magic, the MP4 datetime is now incorrect. For some files, off by 5 hours; for others, off by 4. A bit of study concluded that the 4 vs. 5 comes down to whether daylight saving time was in effect on the date of the file, set against the create time in the file. When PowerShell (.NET) is adjusting the CreationTime, it is also adjusting the MP4 internal create date! What? Why? - I did not ask PowerShell to adjust the datetimes that are in the file contents, only the file "date". But the embedded date in the MP4 file did change! - And we're back to the job of operating systems vs. the job of applications, but this could be a very long diversion. And it's worse. The MP4 datetime was "correct" per me before; after setting the file create date, the MP4 datetime was incorrect by a period of time equal to the number of time zones away from GMT on the date that the file was recorded. I studied this for about a day and concluded that MP4 internal datetimes are the datetime that the file was created/recorded/modified, and that time shall always be ZULU! The VB code didn't know whether the time I gave it was local time or GMT/UTC/Zulu, so it went with what it had. When the file CreationTime was set from PowerShell, the .NET runtime decided to help out and adjust the MP4 datetime to match the create date on the file itself. Okay, problem now understood and dates all adjusted.
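For readers who want to see the "all zeros" finding for themselves, here is a rough Python sketch of my own (not from the original post, and no substitute for exiftool) that scans an MP4 for the movie-header ('mvhd') box and decodes its creation time. Per the ISO base media file format, that field counts seconds since January 1, 1904 UTC, so an all-zero value decodes to 1904 and Explorer ends up rendering a nonsense date. The scan below only handles version-0 mvhd boxes, and the file name is hypothetical.

import struct
from datetime import datetime, timedelta, timezone

MP4_EPOCH = datetime(1904, 1, 1, tzinfo=timezone.utc)  # mvhd times are seconds since 1904, UTC

def read_mvhd_creation_time(path):
    # Naively locate the 'mvhd' box and return its creation time (version 0 boxes only)
    data = open(path, "rb").read()
    i = data.find(b"mvhd")
    if i < 0:
        return None
    if data[i + 4] != 0:
        return None  # version 1 uses 64-bit fields; not handled in this sketch
    # After the box type: 1 byte version, 3 bytes flags, then 32-bit creation/modification times
    creation, = struct.unpack(">I", data[i + 8 : i + 12])
    return MP4_EPOCH + timedelta(seconds=creation)

print(read_mvhd_creation_time("kids_video.mp4"))  # an all-zero date prints as 1904-01-01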
What is "odd" is that if PowerShell and .NET adjust the internal date of the MP4 file when they conclude it is "wrong", why don't they also do that when the internal datetime is all zeros? Consistency here would be a win. Please add your comments below. If you want the compiled version of the program to adjust the MP4 internal dates, drop a line and I'll send it your way – email joe at this domain. Originally posted Dec 22, 2018
<urn:uuid:21bb9107-7245-4c3c-9045-5549d0e9ad2d>
CC-MAIN-2024-38
https://www.joenord.com/explorer-shows-mp4-internal-date-rather-than-date-of-file-on-disk/
2024-09-12T12:51:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00051.warc.gz
en
0.955534
1,764
2.828125
3
Google HeAR AI Model Will Diagnose Tuberculosis Through Cough Analysis Google HeAR is an AI model designed to detect diseases such as tuberculosis (TB) and chronic obstructive pulmonary disease (COPD) by analyzing cough sounds. The Health Acoustic Representations (HeAR) model, introduced this year for bio-acoustic analysis, is capable of recognizing sound patterns and generating important health information. Trained on a vast dataset of 300 million audio samples, including coughs, breaths, and other body sounds, Google HeAR can identify subtle acoustic biomarkers that may indicate diseases like TB and COPD. AI Disease Detection in Remote Areas What sets the Google HeAR model apart is its ability to detect diseases early in areas lacking healthcare facilities. Tuberculosis, for instance, is a curable disease that often goes undiagnosed due to the absence of reliable and affordable diagnostic tools. With its low cost and user-friendly method of detecting TB and other respiratory conditions, Google's HeAR could lead to more timely and effective treatment. In another development, Google is also partnering with India-based healthcare company Swaasa to integrate HeAR into their AI tool, which assesses lung health through cough sound analysis. "Every missed case of tuberculosis is a tragedy; every late diagnosis, a heartbreak. Acoustic biomarkers offer the potential to rewrite this narrative. I am deeply grateful for the role HeAR can play in this transformative journey," said Sujay Kakarmath, product manager at Google Research working on HeAR. Eliminating Tuberculosis by 2030 The search giant is also collaborating with organizations like the Stop TB Partnership to bring together experts and affected communities to put an end to tuberculosis by 2030. "Solutions like HeAR will enable AI-powered acoustic analysis to advance tuberculosis screening and detection, offering a potentially low-impact, accessible tool for those who need it most," said Zhi Zhen Qin, digital health specialist with the Stop TB Partnership, highlighting the partnership's important role. However, despite its promising potential, the Google HeAR model will be accessed through smartphones, raising questions about the collection of sensitive data from these devices. The AI model will also need extensive testing and validation to guarantee its accuracy and reliability across different populations and environments.
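Google has not published HeAR's internal pipeline here, but the idea of acoustic biomarkers can be sketched in a few lines of Python. The snippet below is purely illustrative and is not HeAR's method: it uses the librosa library to turn a cough recording into a compact MFCC-based feature vector, the kind of representation a health-acoustics classifier might consume; the file name and any downstream screening model are assumptions.

import librosa
import numpy as np

def cough_features(path, sr=16000, n_mfcc=13):
    # Load the clip and summarise it as per-coefficient mean and standard deviation of MFCCs
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

features = cough_features("cough_sample.wav")  # hypothetical recording
# A trained screening model (not shown) could map such embeddings to a TB risk score.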
<urn:uuid:343a55d6-7f07-4c0b-b1d8-acc679c6cb3c>
CC-MAIN-2024-38
https://insidetelecom.com/google-hear-ai-model-will-diagnose-tuberculosis-through-cough/
2024-09-16T03:41:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.29/warc/CC-MAIN-20240916012328-20240916042328-00651.warc.gz
en
0.929625
522
2.671875
3
- Koumoutsakos and the Citadel team, joined by researchers from ETH Zurich in Switzerland, illustrated their methodology by employing thousands of virtual machines on Google Cloud to create a replicated supercomputing platform.
- The research presents a promising breakthrough with potential alternatives for researchers and organizations requiring extensive compute power.
Harvard University researchers utilized Google LLC's public cloud infrastructure to replicate a supercomputer that was used for a heart disease study. They assert that this original utilization of cloud computing resources can aid fellow researchers grappling with limited access to robust supercomputers in completing their studies. According to Harvard professor Petros Koumoutsakos, the study aimed to simulate a novel therapy designed to dissolve blood clots and tumor cells within the human circulatory system, as reported by Reuters. His team needed substantial computing power, usually accessible only through supercomputers. According to Koumoutsakos, the research team successfully secured sufficient supercomputer time for a single thorough simulation. Yet they faced a challenge, as they couldn't repeat the process to improve or optimize the test's elements. It's a prevalent issue for scientific research teams. Only a few supercomputers available to scientists in the United States are powerful enough to do the billions of computations required for a study like Koumoutsakos'. As a result, a notable waiting list is in place for individuals seeking access to these machines. To overcome this obstacle, Koumoutsakos and his team collaborated with Citadel Enterprise Americas LLC, exploring the possibility of replicating a supercomputer within the public cloud environment. This approach eliminates resource-access waiting times. However, the public cloud doesn't offer a straightforward solution, since platforms like Google Cloud aren't inherently tailored to manage researchers' tasks. Instead, cloud instances are optimized for numerous smaller computing tasks, like serving web pages, streaming videos, hosting applications, and facilitating database access. Conversely, the cloud is renowned for its reliability and resilience, offering uninterrupted availability and eliminating waiting lists for access. Koumoutsakos and the Citadel team, joined by researchers from ETH Zurich in Switzerland, illustrated their methodology by employing thousands of virtual machines on Google Cloud to create a replicated supercomputing platform. By employing "extensively tuned code," they harnessed the capabilities of distributed cloud resources to attain a notable efficiency of 80% relative to dedicated supercomputer facilities. Bill Magro, Google Cloud's Chief Technologist for High-Performance Computing, highlighted that the cloud has the distinctive potential to address technical, scientific, and engineering computing challenges. He elaborated that adapting cloud infrastructure to mimic supercomputer functionality necessitates alterations across software, networking, and the physical configuration of hardware components. Magro added, "Google Cloud's high-performance computing technologies and solutions are purpose-built to both simplify and scale the largest, most complex workloads, enabling researchers to dramatically accelerate time to discovery and impact." The research presents a promising breakthrough with potential alternatives for researchers and organizations requiring extensive compute power.
However, industry insiders find it unsurprising that Koumoutsakos and his team accomplished this, noted Holger Mueller, an analyst at Constellation Research Inc. He highlights that Google's cloud platform has historically been exceptionally configurable due to the demands of Google's internal workloads, which have consistently required such high levels of configurability. "An example is Google's translation models, which have been running for many years," Mueller said. "They need high-end instances and a fast network, which are the hallmarks of supercomputers, and this is exactly what Google Cloud provides too." Mueller added that it's improbable that the limited number of supercomputer providers would be overly concerned about the emergence of the public cloud as a competitor in the high-performance computing sector, considering that cloud platforms are equally sought after and in high demand. "Just about every cloud platform is seeing capacity constraints now with the interest in AI workloads, and it will remain that way for the foreseeable future."
<urn:uuid:5ede184d-7beb-41b6-aa95-98d4a9b9ed01>
CC-MAIN-2024-38
https://www.itclouddemand.com/news/cloud-news/harvard-researchers-harness-google-cloud-to-clone-a-supercomputer/
2024-09-17T08:47:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00551.warc.gz
en
0.927444
851
2.75
3
A solar probe Johns Hopkins University Applied Physics Laboratory designed and built for NASA has arrived at Lockheed Martin's Astrotech Space Operations facility in Florida to begin pre-launch tests and preparations. APL said Thursday the Parker Solar Probe is scheduled to lift off July 31 from the Kennedy Space Center to study the sun's outer atmosphere over a seven-year mission. The spacecraft, named for astrophysicist Eugene Parker, will work to provide data for researchers to forecast major eruptions on the sun as well as space weather events that may affect ground-based technology, satellites or astronauts in space. Parker Solar Probe will undergo comprehensive tests, final assembly and mating to a Delta IV Heavy rocket's third stage at the Astrotech facility. The launch preparation phase will also involve the installation of a thermal protection system designed to protect the spacecraft from the extreme temperatures in the sun's corona.
<urn:uuid:9066bb32-0097-4345-9ec1-af155b3294f7>
CC-MAIN-2024-38
https://executivebiz.com/2018/04/nasas-solar-probe-to-undergo-final-assembly-tests-at-astrotech-facility-in-florida/
2024-09-19T22:04:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00351.warc.gz
en
0.909131
196
2.71875
3
While organisations have endeavoured to adapt to the huge changes brought on by the coronavirus pandemic, there has been an increase in cyber attackers looking to exploit the situation for their own gain. These were the findings of a recent Mimecast report, which found that email-borne impersonation fraud attacks increased by 30 per cent in the first 100 days of the COVID-19 pandemic. Also known as 'phishing', this method involves infiltrating systems by replicating known authentication processes and tricking users into handing over their login credentials. It is often conducted on a large scale, with attackers choosing targets indiscriminately. For example, attackers recently launched a high-profile phishing campaign against the German government's private sector task force commissioned to obtain medical equipment for healthcare providers treating COVID-19 patients. With social distancing in place, many organisations are beginning to return employees to their former workplace environments. However, many will still need to keep employees spread across remote and office environments. During this time, it is imperative that all remote workers are aware not only of how a phishing attack works, but also of the impact that phishing can have on business resilience overall. Here are some core priorities for organisations to observe to stay secure in the future. How phishing works The anatomy of an effective phishing attack is rooted more in social engineering than technology. Phishing messages try to trick individuals into taking an action, such as clicking on a link or providing personal information, by dangling the prospect of financial gain or loss, threatening work disruption, or playing on personal panic. However, phishing messages typically have tell-tale signs that can – and should – give users pause. Attempts to obfuscate the sender, poor spelling and grammar, and malicious attachments are a few of the classic signs that the message is not genuine. Be aware of 'pretexting' Attackers often attempt to impersonate a known person or entity to obtain private information or to carry out an action. This is also known as pretexting, and is commonly executed by crafting a fraudulent email or text message to prompt an action that is not part of the standard process. One example is calling the service desk and pretending to be a valid user to get a password reset. Another ruse attackers frequently take advantage of is an out-of-band wire transfer or an invoice payment for a critical vendor. Small companies have traditionally been the targets, but larger companies are increasingly being targeted. Organisations must understand that pretexting is considered fraud and is often not covered by cyber insurance policies. Therefore, it's critical that organisations design effective business processes with oversight – so there are no single points of approval or execution – and stick to them. While it may be tempting to bypass processes, such as accounts payable or IT procurement, businesses can't afford to let their guard down – especially when large numbers of workers are logging on remotely, as is the case for so many today. The roles of change, uncertainty and user isolation Phishing attack messages that have the highest response rates are often related to time-bound events, such as open enrolment periods or satisfaction surveys. Some other common phishing message themes include unpaid invoices, confirming personal information and problems with logins.
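To make those tell-tale signs concrete, here is a small illustrative Python sketch (not part of the original article, and no substitute for a proper email security gateway) that parses a raw email and flags a few classic indicators: a display name that doesn't match the sender's domain, a mismatched Reply-To address, and urgency wording in the subject line.

from email import message_from_string
from email.utils import parseaddr

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "invoice", "password"}

def phishing_indicators(raw_email):
    # Return a list of simple red flags found in a raw email message
    msg = message_from_string(raw_email)
    flags = []
    display_name, from_addr = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    # Display name claims one organisation while the address uses another domain (crude heuristic)
    if display_name and from_domain and not any(w.lower() in from_domain for w in display_name.split()):
        flags.append("display name '%s' does not match sender domain '%s'" % (display_name, from_domain))
    # Replies silently routed to a different address
    if reply_to and reply_to.lower() != from_addr.lower():
        flags.append("Reply-To (%s) differs from From (%s)" % (reply_to, from_addr))
    # Pressure language in the subject line
    subject = (msg.get("Subject") or "").lower()
    if any(word in subject for word in URGENCY_WORDS):
        flags.append("urgency wording in subject")
    return flags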
Before acting, think about what is being asked. For example, phishing attacks may take advantage of the fact that many workers are currently anticipating updates from their employers about returning to the workplace. The email may ask users to log in to a new system designed to allocate socially distant spaces within the workspace upon their return. This tactic exploits the user's often unconscious confirmation bias, not only impersonating their employer but also taking advantage of their expectations around returning to work and acknowledgement of social distancing. If staff are unsure whether a message is malicious, encourage them to ask a colleague or the IT team to analyse it (including the full Simple Mail Transfer Protocol (SMTP) information). Employee education is key Phishing is often discussed within the cybersecurity space, but the conversations typically lack intent and rigour. The common compliance measure usually involves in-person or virtual annual training, along with some other method of education, such as hanging posters around the workplace. This approach pre-dates highly connected computing environments and doesn't address the urgency of the current threat landscape or the patterns of working experienced by so many in 2020. Organisations must conduct security awareness education with the same decisiveness and gravity that other industries apply to safety training. For example, it's not uncommon for drivers in the commercial trucking and transport sector to take monthly training modules, or for managers to participate in quarterly safety meetings. Maintaining business resilience in the 'new normal' The need for organisations to be proactive about cyber hygiene is higher than ever. As organisations gradually transition into the new normal, bad actors will continue to take advantage of the situation. By looking out for pretexting, paying attention to the signs, and emphasising regular training, companies will be better positioned to fend off a renewed surge in phishing attacks. In particular, organisations must take the time now to invest time and resources into regularly training and educating staff on information security awareness. Resilience can be built into the DNA of new working imperatives by spreading ongoing awareness of critical cyber threats amongst all users. A data-compromising cyberattack could potentially be just around the corner, so organisations must establish plans and capabilities that reduce risk and prevent data loss, leakage or offline systems from disrupting business continuity. The opinions expressed in this article belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.
<urn:uuid:bf20c885-5567-48e9-8620-be7078ccc3b7>
CC-MAIN-2024-38
https://informationsecuritybuzz.com/how-to-counter-phishing-vulnerabilities-when-returning-to-work/
2024-09-21T04:08:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00251.warc.gz
en
0.956347
1,170
3.078125
3
An Application Platform as a Service is a type of cloud computing that provides users with on-demand access to preconfigured application environments. With it, you can quickly and easily deploy applications without worrying about the underlying infrastructure. That makes it an excellent choice for businesses wanting to get up and running quickly without investing in their own IT infrastructure. This post provides complete details about aPaaS, including its definition, benefits, working, and associated risks. What is aPaaS? It is a term used for software platforms that provide a cloud-based environment where users can develop, deploy and manage application services without the burden of handling the infrastructure and platform layers. It is characterized by two features:
- Low-code development tools
- Rapid application development (RAD)
How does it work? It is a cloud computing platform that provides web developers with on-demand access to compute resources, such as memory, storage, and CPU cycles. It helps them deploy their applications without managing the underlying infrastructure. Application platforms typically include an operating system, a database management system, middleware, and programming languages. In addition, they offer services such as debugging and testing tools, backup and disaster recovery capabilities, load balancing, and scalability features. Some platforms also offer pricing models that allow customers to pay for only the resources they use. Reduced time to market Applications can be deployed more quickly with aPaaS than if they were developed and deployed on traditional infrastructure. Developers can focus on developing applications rather than configuring and managing the underlying infrastructure. That can lead to faster development times and improved application quality. Scalability and flexibility Applications can be scaled up or down as needed and easily adapted to changes in demand. This can often be less expensive than building and maintaining an in-house application development platform. You pay only for what you use with aPaaS, so there are no upfront costs or long-term contracts. This can save you a lot of money in the long run. Risks:
- One is that you may be locked into a provider, so switching can be difficult if you're unhappy with their services.
- Your applications will be running on someone else's infrastructure, so your data may not be as secure as you might expect.
- Your applications may not be compatible with the Application Platform as a Service provider you choose.
- There is always the risk that the Application Platform as a Service provider could go out of business, leaving your applications stranded without a platform to run on.
- aPaaS providers often charge by the hour, so you can spend a lot of money if you use the platform heavily for a long time.
The most popular providers are:
- Microsoft Azure
- Google App Engine
- Amazon Web Services (AWS) Elastic Beanstalk
- Apache Stratos
- Magento Commerce Cloud
aPaaS vs PaaS PaaS offers a complete application development and operations environment, including everything from the operating system to middleware and application frameworks. aPaaS, on the other hand, focuses exclusively on the application-platform layer of the cloud computing stack, which includes providing tools and services for developing and hosting web applications. So, in summary, PaaS offers a more comprehensive platform for developing and running applications, while aPaaS aims to provide tools and services for developing web applications.
Difference between aPaaS, iPaaS and SaaS
aPaaS: Enables rapid application development and provides a cloud environment for deploying, designing, and maintaining business applications. This platform offers high productivity, a quick way to build apps, and high control, and it automates the application life cycle. Examples: ServiceNow, Mendix, OutSystems, Salesforce.
iPaaS: A cloud-based, multi-tenant integration platform hosted by a vendor. With this platform, applications in the cloud or on-premise can exchange data with each other for integration. Examples: Zapier, Segment, Jitterbit, IBM App Connect.
SaaS: Vendors host SaaS applications and provide them ready to use to meet organizational needs; customers access the applications over the internet. Examples: Google Apps, Dropbox, Concur, Cisco WebEx.
What are the common features of aPaaS?
Elasticity: Scale up or down as needed.
Autoscaling: Automatically add or remove resources based on load.
Multi-tenancy: Serve multiple customers/tenants on a single platform.
API management: Control access and usage of APIs.
Platform services: Run common functions like storage, computing, and networking.
What are low-code development tools? Low-code development tools allow business users to create custom applications without learning to code. These tools typically use a visual interface, so you can drag and drop different components to create your application. Low-code development tools are a great option for businesses that need custom applications but don't have the resources or time to learn to code. What is rapid application development? Rapid application development (RAD) is a software development methodology that emphasizes speed and agility. RAD tools and techniques allow developers to create applications more quickly and efficiently than traditional methods. aPaaS is an excellent solution for businesses that need to deploy applications quickly with minimal management. It offers scalability, flexibility, and cost savings, and most providers offer a pay-as-you-go pricing model. However, some risks are associated with using it, including being locked into a provider and data security risks. Before choosing a provider, be sure to do your research and compare prices and features.
<urn:uuid:012652ad-8f72-4d6f-80ae-4789c889a63d>
CC-MAIN-2024-38
https://www.erp-information.com/apaas
2024-09-21T02:53:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00251.warc.gz
en
0.931881
1,194
2.546875
3
All you need to know about interworks.cloud and the cloud industry When jumping into the Cloud, or at least, when considering doing so, you might want to investigate what it is about. Geeks, though, have a tendency to use abbreviations, and if you are unfamiliar with them it is a bit frustrating to find your way around the information. At the same time, new technology needs to be called new names. So here is a basic Cloud glossary for you: Cloud Computing term #1: IaaS IaaS stands for Infrastructure as a Service and, if you could visualize what it is, think of it as the most basic level of cloud computing. IaaS handles all the equipment, from the servers used to comprise a cloud (private or public) to the networking components essential for the network to… work. Cloud Computing term #2: Private Cloud Some companies decide to set up their own cloud with infrastructure owned by the company alone. It allows the company to benefit from the cloud features, yet they keep everything on their premises and establish exclusive use. Cloud Computing term #3: Public Cloud It is the exact opposite of the private cloud. Public is the cloud network that is handled by a third party, not the company, and it also handles the data of other clients. Companies utilizing the public cloud may not have their data "near" them, but they save time and cost on issues concerning maintenance, and they pay for the services they enjoy per use. Cloud Computing term #4: Hybrid Cloud As the name denotes, hybrid is a combination of the private and the public cloud. Many companies need to keep some of their critical data in a private cloud and, at the same time, enjoy the scalability in use and resources of the public cloud. Therefore they deploy both into a combined solution that meets their needs. Cloud Computing term #5: SaaS The abbreviation stands for Software as a Service and it refers to web-hosted applications – software which is ready to use. Cloud Computing term #6: PaaS It stands for Platform as a Service and it is a category of cloud computing services that provides a platform in which the customer creates the software using tools or libraries from the provider, and they control the software deployment and the configuration settings. You can find different kinds of platform vendors around, yet the choice is between those who offer the most integrated services as business tools, together with the deployment environment. Cloud Computing term #7: Cloud bursting Cloud bursting is another handy way to combine the private and public cloud. An application is run in the private cloud, but if the internal server becomes overloaded, the application then bursts into the public cloud and uses its resources, avoiding potentially devastating downtime and lost revenue. Cloud Computing term #8: Elastic computing Depending on a business's needs in the cloud, elastic computing is the ability to increase and decrease computing resources (storage, memory) so that you can always meet the needs during peak usage and scale back in less busy periods. Cloud Computing term #9: Vendor lock-in This term refers to the common situation of not being able to easily switch providers when handling your cloud, because of protocols or data structures used by your current provider.
As a company you can choose your service provider according to your preferences and the flexibility you need in case you need to move your data elsewhere. Cloud Computing term #10: Pay per use pricing model This is the pricing model which is based on the consumption of service by the user. A provider using this pay-per-use pricing model will charge for example per gigabyte of data stored in the system.
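As a toy illustration of the pay-per-use model (the rates and usage figures below are invented for the example, not interworks.cloud pricing), a metered bill is simply consumption multiplied by a unit price, summed across services. A short Python sketch:

# Hypothetical pay-per-use bill: charge only for what was actually consumed
RATES = {
    "storage_gb_month": 0.02,   # made-up price per GB stored per month
    "compute_vm_hours": 0.05,   # made-up price per VM-hour
    "egress_gb": 0.08,          # made-up price per GB transferred out
}

def monthly_bill(usage):
    # Sum metered usage times the unit rate for each service
    return sum(RATES[item] * amount for item, amount in usage.items())

print(monthly_bill({"storage_gb_month": 500, "compute_vm_hours": 720, "egress_gb": 40}))
# 500*0.02 + 720*0.05 + 40*0.08 = 49.2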
<urn:uuid:44cc71be-b4d4-4157-a7c3-c97f2dec6297>
CC-MAIN-2024-38
https://globalstaging.interworks.cloud/the-10-cloud-computing-terms/
2024-09-10T03:19:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00351.warc.gz
en
0.94799
782
2.796875
3
By Oliver Cronk, of Scott Logic The tech industry has driven incredibly rapid innovation by taking advantage of increasingly cheap and more powerful computing – but at what unintended cost? What collateral damage has been created in our era of "move fast and break things"? Sadly, it's now becoming apparent we have overlooked the broader impacts of our technological solutions. As software proliferates through every facet of life and the scale of it increases, we need to think more about where this leads us from people, planet and financial perspectives. Sustainable Information Technology is even more important when you consider that digitalisation (going paperless, telecommuting etc.) is often touted as a path to decarbonisation and sustainability. This is more than just a "do good" or "feel good" thing – there are many benefits to pushing towards sustainability and regenerative technology approaches, including financial advantages. This is the first in our latest series of blogs on sustainable technology that will explore these issues and, where possible, offer pragmatic suggestions and thought-provoking questions to ask yourself, your suppliers, and your technology teams. How we got here In the early days of computing (1950s to 1980s), memory and processing power were extremely scarce and expensive resources. Programming required ingenious techniques to optimise every byte and cycle in order to accomplish anything useful within the tight constraints. Computing was a highly specialised dark art practised by only a handful of knowledgeable people. Moore's law (which has been running out of steam recently) has made computer chips increasingly cheap and powerful – so efficiency hasn't been as important a priority. "Hardware is cheap" Thanks to Moore's law, the more recent breakneck speed of improvement in computing has led to the mantra "hardware is cheap". Efficient applications haven't been a priority – instead, the priority has been speed to market and programmer productivity. Something called the Jevons paradox has come into play – the cheaper we make something (in this case through more efficient hardware), the more of it we use. Today, AI and cloud make massive compute power available at the click of a button. As the costs have come down, it's been very tempting (in many cases unknowingly) to apply brute force rather than carefully crafting solutions. Developer productivity shouldn't be demonised – it's been super important – but we need to find smarter ways to balance speed to market without being wasteful. Tech business models driven by growth Technology platforms are commercially driven to grow aggressively, and their primary means of growth is to encourage increased adoption. This presents a challenge, as their commercial model is in conflict with attempts to reduce their footprint and impact. Sadly, in some cases this has led to greenwashing (misleading or untrue claims about the positive impact that a service has on the environment), including suggesting that their platforms are always greener than alternatives. Whilst economies of scale and centralisation do have benefits, they are not always a panacea, and you should evaluate the performance of your current platforms. This is particularly the case if your current infrastructure operates in parts of the world with cleaner electricity (say Scotland or the Nordics) than the major cloud provider locations (often cities like London or US locations with higher demand for electricity).
Ubiquitous cheaper computing is a double-edged sword We are now uncovering the pitfalls of this brute-force approach. Bloated, wasteful applications contribute to growing energy consumption and carbon emissions from data centres. They strain local resources for power and cooling. Materials and energy used in the manufacturing and supply chain (aka embodied carbon from hardware) are almost completely hidden and unknown. We have made using computers and building systems far easier by abstracting away layers of complexity, and this is a good thing, democratising access to computing. Unfortunately, however, these layers (such as end-user tools, low or no code, spreadsheets and more recently GenAI) can also add inefficiency and create a lack of transparency regarding what is going on under the hood. Software has real-world impacts, and the cloud is not ephemeral. As the old joke about the Cloud states, "it's someone else's computer" (often massive racks of them, in fact), and it exists somewhere out of sight and out of mind. Cost vs Quality and the role of Architects Technologists (in particular more forward-looking/strategic Architects) already know that we need to go beyond evaluating systems on benchmarks of speed and cost of delivery. Yet the champions of quality attributes and non-functional requirements are so often overruled in an era where cost and time pressures have a tendency to drive out software quality. Sadly, this results in unintended consequences. The classic Scope, Cost, Time pyramid – but often it's the observable functional quality that is prioritised. For that I'll use a somewhat surreal version of an iceberg – as so much technical (and, effectively, sustainability) debt – a topic for a future blog – is hidden below the water line. Every engineering decision (or indecision) has ethical and sustainability consequences, often invisible from within our isolated bubbles (for example, we don't feel or see the impact of electronic waste, but it does exist; it just ends up somewhere else). Just as the industry has had to raise its game on topics such as security, privacy and compliance, we desperately need to raise our game holistically on sustainability. Why not just wait for regulation? While compliance requirements eventually nudge laggards, early adopters reap benefits on multiple fronts. Sustainable practices like streamlining processes, right-sizing resources, and eliminating waste can significantly trim expenses. And sustainability-focused companies (that are genuine and don't just greenwash) attract top talent and brand affinity. The incentives are there for organisations to get ahead of the curve on environmental practices rather than delay until mandated. Beyond regulatory obligation, optimising for sustainability is an opportunity to reduce costs and create value. The time to start is now, as the longer we put this off, the more technical/environmental debt we accumulate.
Of course, carbon or environmental pricing/taxation would provide more of a stick, but there are already clear benefits from being a leader rather than a laggard – for example: - More cost-efficient – through measuring and optimising your assets - Managing risks and increasing resilience by being on top of your architecture - More attractive supplier – through demonstrable and transparent actions - More attractive employer – many are now looking for their employer to walk the walk on environmental action and, if they haven’t already, will start to see through greenwash Making Progress Visible – you can’t manage what you can’t measure To enable more conscientious computing, we must start by making impacts visible. As the old saying goes, “you can’t manage what you can’t measure”. Ideally, we need standard global frameworks for efficiency and utilisation, assessing lifecycle product/system carbon footprints, and other aspects that can help expose the true costs of our systems. Visibility into Data centres: where software = physical impact Transparency of the carbon footprint of data centres – beyond just energy consumption (to include water and e-waste) – would connect developers to the real-world impacts of their cloud usage. Every part of the software development and operations lifecycle needs visibility so that we can start to optimise (or at the very least make pragmatic trade-offs). Many of these things are being actively worked on by the likes of Green Software Foundation and the Sustainable Digital Infrastructure Alliance, but they are still very much in their infancy. In the meantime, we should work with what data and proven research are available, learn from others and do our best to fill gaps pragmatically. Of course, end-user devices are also where software has real-world impact – but this will get picked up in a separate article. Beyond measurement – taking action Once we understand the size of the problem, we can prioritise the areas that look the most compelling to address (based on current size or projected growth in usage). You can start by implementing the high-impact, low-effort actions, and progress to weighing up the changes that will require investment (will the effort pay back?). Then you can start tying technology strategy, architecture principles and policies back to your corporate sustainability goals (where these exist). If Environmental, Social and Governance (ESG) isn’t a priority at an organisation-wide level (increasingly rare but not unheard of), look for other areas such as cost savings, marketing, customer and employee retention as drivers and levers for change. In other articles, we will talk about practical actions and decisions you can make, such as: - How we strive for BOTH developer and machine productivity - Making sustainable infrastructure and cloud provider choices - Sustainable design, development and DevOps choices - Carbon aware computing and time and location shifting None of these is a silver bullet that should be applied dogmatically – you will need to carefully consider pragmatic trade-offs. Raising awareness and inspiring action Before all of this, we have to raise awareness of the issue across the technology industry, our organisations and the sector we work in. This blog series (and other supporting material) is part of that, from a Scott Logic point of view. As much as we are a business, we have a social mission. 
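Returning to the measurement point above with a concrete aside: the Green Software Foundation's Software Carbon Intensity (SCI) specification expresses a system's emissions rate as operational energy multiplied by grid carbon intensity, plus embodied hardware carbon, divided by a functional unit (per user, per API call and so on). The Python sketch below is my illustration with invented numbers, not Scott Logic tooling, but it shows how simple the arithmetic is once the inputs are measured.

def software_carbon_intensity(energy_kwh, grid_gco2_per_kwh, embodied_gco2, functional_units):
    # SCI = ((E * I) + M) / R, per the Green Software Foundation specification
    operational = energy_kwh * grid_gco2_per_kwh   # E * I: operational emissions in gCO2e
    return (operational + embodied_gco2) / functional_units

# Hypothetical service: 120 kWh consumed, 250 gCO2e/kWh grid intensity,
# 30,000 gCO2e of amortised embodied carbon, 1,000,000 API calls served in the period
sci = software_carbon_intensity(120, 250, 30_000, 1_000_000)
print(round(sci, 3), "gCO2e per API call")  # 0.06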
Being an active part of the sustainable software ecosystem, in particular open source communities, is a significant part of our social mission. Education more broadly plays a role too. Environmental science concepts (or at the very least awareness of Greenhouse Gas (GHG) protocols and the concepts explained in the Green Software Foundation certification) integrated into the computer science curriculum could seed the next generation of technologists with sustainability thinking. We also need to educate everyone on the impacts of their technology usage – “Fast Tech” is starting to get mainstream attention, which is encouraging. The Path Forwards With focus and initiative across stakeholders, we can build an ecosystem that values conscientious computing. One where technologists have both the desire and tools to create solutions that uplift society’s sustainable use of digital. The challenges ahead are enormous, but so is the opportunity for positive impact and financial cost savings. Our systems can either contribute to humanity’s burdens or help shoulder them. The choice comes down to thousands of small decisions we make every day as architects and engineers. Do we reach for the quick and easy path, or do the difficult, nuanced work of considering the trade-offs we need to make? Whilst it’s unlikely we can build perfect, zero-impact systems (at least in the medium term), that should not get in the way of making progress. “Perfection (and fear of hypocrisy) is the enemy of progress when it comes to tech sustainability” Recently at a People, Planet, Pint event in Bristol, the comedian Stuart Goldsmith said that our fear of hypocrisy [on the environment] often stops us from taking action. All of us are waking up to the true impacts and costs of our actions and past behaviour and fear that we need to be perfect (across all parts of our lives) before we can really make an impact. The reality is that as important as collective individual actions are, the actions we take at work can make a huge difference. Whilst this topic can feel overwhelming at times, this shouldn’t stop us from taking pragmatic action – particularly when this can have huge effects (imagine if you could easily reduce the energy consumption of your organisation’s tech by just 0.5-1%). Future innovation is going to require elevating both technical and ethical standards. It means creating human-centric and planet-centric systems, not merely human-usable ones. We have the potential to build a future where technology brings out the best in humanity. But we must commit to holding ourselves and our industry to higher standards. The world needs technology pragmatists willing to ask tough questions in pursuit of progress. Together, through conscientious computing, I am confident we can #ArchitectTomorrow and build that world! If you’d like a friendly chat about this topic, our door is open – whether the discussion is to raise awareness, lead to cross-industry/open source collaboration, or something more in-depth. Please do get in touch: email@example.com or connect with me on LinkedIn. This blog originally appeared at https://blog.scottlogic.com/
<urn:uuid:ed1c48df-6fda-43a9-9525-bc34942f5771>
CC-MAIN-2024-38
https://www.architectureandgovernance.com/applications-technology/conscientious-computing-facing-into-big-tech-challenges/
2024-09-14T23:10:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00851.warc.gz
en
0.942568
2,599
2.703125
3
Records Information Management (RIM) refers to the systematic handling of both digital and physical records throughout their entire lifecycle. Effective RIM practices help organizations manage critical information to support strategic planning and informed decision-making. At Armstrong Archives LLC, we provide secure and reliable record storage, document management, scanning, and secure document destruction services. Our comprehensive RIM solutions help you safeguard and manage critical business information. Understanding Records Information Management The management of records has transformed remarkably over time, adapting to technological advancements and changing organizational needs. What is RIMS Records and information management involves the organized control of an organization’s records throughout their lifecycle. It includes the processes of creating, storing, retrieving, and securely disposing of records in both physical and digital formats. Evolution of RIM Practices RIM management practices have evolved from basic manual record-keeping to advanced digital systems. Early methods relied on physical storage, while modern practices leverage technology to save space, enhance efficiency, and ensure compliance. The Importance of RIM in Modern Business RIM plays a critical role in ensuring compliance and enhancing organizational efficiency. Here’s how RIM supports these goals: Ensuring Compliance with Laws and Regulations RIM helps organizations adhere to legal and regulatory standards by maintaining accurate and accessible records. RIM compliance is crucial to avoid legal penalties and to demonstrate transparency and accountability. Proper RIM practices ensure records are stored and disposed of in accordance with relevant laws and standards. Enhancing Organizational Efficiency and Information Retrieval RIM streamlines the management of information, making it easier for employees to retrieve necessary records quickly. Efficient records management reduces time spent searching for documents, thereby increasing productivity. It also supports better decision-making by providing accurate and timely information. Key Components of Records Information Management Effective records and information management is built on several key components that collectively ensure comprehensive and compliant record-keeping practices. These components include well-defined policies and procedures, robust data security measures, and adherence to compliance and legal requirements. RIM Policies and Procedures RIM policies and procedures provide the framework for how records are managed within an organization. These guidelines cover the entire lifecycle of records, from creation and classification to storage, retrieval, and eventual disposal. For example, in the healthcare industry, policies must address how patient records are handled to comply with the Health Insurance Portability and Accountability Act (HIPAA). Similarly, financial institutions must adhere to the Sarbanes-Oxley Act, which mandates the retention and management of financial records. RIM Data Security Data security is paramount in managing records as it protects sensitive information from unauthorized access, breaches, and loss. Organizations must implement strong encryption, access controls, and regular security audits to safeguard both digital and physical records. For instance, law firms dealing with confidential client information must ensure their records are securely stored and accessible only to authorized personnel. 
Failure to protect these records can lead to severe legal and reputational consequences. RIM Compliance and Legal Considerations Compliance with legal and regulatory requirements is a cornerstone of effective RIM. Organizations must be aware of and adhere to various laws and regulations relevant to their industry. For example, United States federal agencies must follow retention regulations set by the National Archives and Records Administration (NARA) for federal records. The federal laws mentioned earlier govern how organizations in many industries must treat their records to stay within the law. Ensuring compliance involves regular audits and updates to RIM practices to align with evolving legal standards. Stages of Record Management Effective record management involves several distinct stages, each crucial for ensuring the proper handling and preservation of records. Creation or Receipt The first stage involves creating or receiving a record. It can include documents generated internally, such as reports and memos, or those received from external sources, like invoices and legal contracts. Ensuring accurate and complete documentation at this stage is vital for future reference and compliance. Once records are created or received, they must be classified. Classification involves organizing records into categories based on their content, function, or other relevant criteria. This stage facilitates easy retrieval and management, ensuring that records are stored in a logical and systematic manner. Use and Maintenance During the use and maintenance stage, records are actively used and updated as needed. This includes accessing records for decision-making, auditing, and other business processes. Regular maintenance ensures that records remain accurate, relevant, and up-to-date. Proper storage of records is essential for their preservation and security. Records can be stored physically in secure locations or digitally in encrypted and access-controlled databases. Effective storage solutions protect records from damage, loss, and unauthorized access. The final stage of the records management process is disposal, when information has either become irrelevant or is no longer needed. Disposal must be conducted in accordance with established retention schedules and legal requirements. This can involve securely shredding physical documents or permanently deleting digital files to ensure sensitive information is irretrievable. Record retention schedules vary based on factors such as industry regulations, federal and state laws, type of record, operational needs, and regulatory agency requirements. Types of Records in Records Management Records and information management encompasses a variety of record types, each serving distinct functions within an organization. Let's understand the following categories of records to enable effective management: Administrative records include documents related to an organization's general operations. They can encompass policy documents, procedures, correspondence, and minutes of meetings. Administrative records play a crucial role in ensuring smooth day-to-day operations and support administrative functions such as human resources, facilities management, and internal communications. Financial records document an organization's financial activities and transactions. This category includes invoices, receipts, budgets, financial statements, and tax documents. Proper management of financial records is essential for regulatory compliance, financial auditing, and strategic planning.
These records provide a clear picture of the organization’s financial health and are often subject to strict retention and security requirements. Operational records pertain to the core functions and activities specific to an organization’s mission. In a manufacturing company, for instance, operational records might include production schedules, maintenance logs, and quality control reports. In contrast, a healthcare provider would manage patient records, treatment plans, and clinical trial data. These records are critical for ensuring efficient and effective service delivery, quality control, and operational continuity. What is the meaning of records information management? Records Information Management (RIM) refers to the systematic control and administration of an organization’s physical and digital documents throughout their lifecycle. It includes the creation, classification, storage, retrieval, and disposal of records to ensure efficiency, compliance, and strategic decision-making. What are the 5 stages of record management? Effective record and information management entails the following stages: Creation or Receipt: Records such as invoices, emails, and reports are generated or received. Classification: Records are categorized based on content, function, or other criteria for organized storage and retrieval. Use and Maintenance: Records are actively utilized and updated as needed for operational purposes. Storage: Records are stored securely, either physically or digitally, to preserve their integrity and accessibility. Disposal: Records that are no longer needed are securely disposed of according to retention schedules and legal requirements. What are the three main types of records in records management? In records management, the three main classifications are: Administrative Records: Documents related to organizational operations and management, such as policies, procedures, and correspondence. Financial Records: Documents concerning financial transactions and activities, including invoices, budgets, and financial statements. Operational Records: These are records specific to an organization’s core functions and activities, such as production schedules, patient records in healthcare, or case files in legal settings. Best Practices in Records Information Management Implementing best RIM practices ensures efficient and compliant record handling. Strategies for Effective RIM Effective RIM strategies include developing clear policies, conducting regular audits, and providing employee training. These practices ensure consistent record handling, compliance with regulations, and staff awareness of proper procedures. Technologies that Enhance RIM Practices Technologies such as electronic document management systems (EDMS), cloud storage solutions, and automated retention scheduling tools streamline RIM processes. These technologies improve accessibility, enhance security, and ensure adherence to retention schedules and legal requirements. Partner with Armstrong Archives for Expert Record Management Effective records management is essential for maintaining organizational efficiency and ensuring compliance with regulations. Armstrong Archives, based in Dallas-Fort Worth, offers unparalleled expertise in document management services. Our team is dedicated to helping businesses streamline their records retention processes in an increasingly digital world. Contact us today to manage your records securely and efficiently.
Microsoft Unveils Python in Excel to Enhance Data Analysis Capabilities With Python in Excel, users can now write Python code in cells using a new PY function. This Python code will execute securely in the Microsoft Cloud and outputs will be returned back to the Excel worksheet. Users have access to a wide range of popular Python data science libraries including pandas for data manipulation, Matplotlib for visualization, scikit-learn for machine learning, and statsmodels for statistical modeling. This allows users to perform advanced analysis like visualizing data with charts, applying machine learning algorithms, running predictive analytics, and more - all within Excel using Python. Python in Excel combines Python's powerful data analysis and visualization libraries with Excel's features you know and love. You can manipulate and explore data in Excel using Python plots and libraries, and then use Excel's formulas, charts, and PivotTables to further refine your insights. Some examples of new analysis enabled by Python in Excel: - Creating visualizations like bar charts, line plots, heatmaps, and other advanced charts using Matplotlib and Seaborn libraries. - Applying machine learning techniques like regression analysis, clustering, random forests, and neural networks using scikit-learn. - Running time series forecasting and predictive analytics using statsmodels and other libraries. - Cleaning and transforming data efficiently using pandas and regular expressions. In a statement, Python creator Guido van Rossum said he is excited about the tight integration between Python and Excel, which he expects will open up new use cases for both communities. "I am thrilled to announce the integration of Anaconda Distribution for Python into Microsoft Excel – a major breakthrough that will transform the workflow of millions of Excel users around the world,” said Anaconda CEO and co-founder Peter Wang. Users can easily collaborate on Python-enabled Excel workbooks by sharing with colleagues. The Python code will run for anyone opening the workbook, without requiring Python or any additional setup on their end. The Python in Excel public preview is now available for Windows Excel beta testers. The feature reflects Microsoft's deep commitment to integrating Python across its products and making it more accessible to Excel's millions of daily users. Get started with Python in Excel Python in Excel is rolling out to Public Preview for those in the Microsoft 365 Insiders program Beta Channel. This feature will roll out to Excel for Windows first, starting with build 16818, and then to the other platforms at a later date. To use Python in Excel, join the Microsoft 365 Insider Program. Choose the Beta Channel Insider level to get the latest builds of the Excel application. Once you’ve installed the latest Insider build of Excel, open a blank workbook, and take the following steps. - Select Formulas in the ribbon. - Select Insert Python. - Select the Try Preview button in the dialog that appears. Microsoft plans to make Python in Excel generally available later but will restrict some functionality without a paid license after the preview period. The public preview enables users to start exploring the new data science capabilities Python brings to Excel.
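To make the workflow concrete, here is a small sketch of the kind of Python you might place in a PY cell. The sample data is constructed in code purely for illustration; in a real workbook the DataFrame would typically be pulled from a worksheet range (for example via the preview's xl() helper), and the column names here are assumptions.

# Illustrative only: in Excel the data would normally come from a worksheet
# range rather than being built inline like this.
import pandas as pd
import statsmodels.api as sm

sales = pd.DataFrame({
    "ad_spend": [10, 15, 22, 30, 41, 55],
    "revenue":  [101, 126, 158, 210, 264, 331],
})

# Simple linear regression: revenue as a function of ad spend.
X = sm.add_constant(sales["ad_spend"])
model = sm.OLS(sales["revenue"], X).fit()

# The final expression in a PY cell is what gets returned to the worksheet.
model.params

The same pattern applies to the other libraries mentioned above: a pandas clean-up step, a Matplotlib chart, or a scikit-learn model can all be authored in a cell and have their result spilled back into the grid.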
Working with Local Folders To work with items on the local PC, use the left pane of the FTP Client. You can browse disk drives on your computer or local network, create new folders, and do other directory management tasks within the FTP Client. Commands on the File and Edit menus, and most buttons on the toolbar, apply to folders and files in the currently active pane. New folders can be added from the File menu, context menu (right-click), or toolbar. Use the context menu to create shortcuts to folders. To view the local directory structure, use the Go to a different folder list box in the upper left corner of the main window. To see how the current folder fits in the hierarchy on your computer, click the down arrow in the list box. The Tools menu and toolbar provide access to the Up One Level command. Also, you can navigate directly to a folder using the Go to command. To navigate directly to a local folder, choose Go to from the Tools menu, type the name of the directory you want to open, and select Local Computer to indicate that the folder is available on your PC. You can enter UNC (Universal Naming Convention) names for directory paths. There is a 47-character limit for UNC names, and each name can contain any character, either uppercase or lowercase, except the following: ? " / | < > * : The syntax of a UNC name is as follows: \\servername\sharename\path
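The character and length rules quoted above translate directly into a small validation routine. The sketch below is not part of the FTP Client itself; it simply applies the documented 47-character limit and the list of disallowed characters to a candidate UNC name, with made-up server and share names as examples.

# Characters the documentation disallows in UNC names.
FORBIDDEN = set('?"/|<>*:')
MAX_UNC_LENGTH = 47  # limit stated in the documentation

def is_valid_unc(name: str) -> bool:
    """Rough check of a UNC name against the documented rules."""
    if not name.startswith("\\\\"):
        return False
    if len(name) > MAX_UNC_LENGTH:
        return False
    # Backslash separators are allowed; only explicitly forbidden characters are rejected.
    return not any(ch in FORBIDDEN for ch in name)

print(is_valid_unc(r"\\fileserver\projects\reports"))  # True
print(is_valid_unc(r"\\fileserver\bad|name"))          # False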
Today, as I watched CNN, I was saddened to see that a Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack had brought down Internet services for more than 40 million users on the East Coast of the United States. DoS attacks are launched across the Internet to take websites and other services off the air, often utilizing bots (botnets) running on infected home and business computers. In fact, we may sometimes be complicit and participating in the attack without our knowledge. The astonishing thing is that DoS attacks are nothing new. They have been around since the early days of the Internet. Case in point: RFC 2827, which is also referred to today as BCP 38 (Best Current Practice 38). The Abstract of RFC 2827 reads: "Recent occurrences of various Denial of Service (DoS) attacks which have employed forged source addresses have proven to be a troublesome issue for Internet Service Providers and the Internet community overall. This paper discusses a simple, effective, and straightforward method for using ingress traffic filtering to prohibit DoS attacks which use forged IP addresses to be propagated from 'behind' an Internet Service Provider's (ISP) aggregation point." Note: BCP 38 is RFC 2827: Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing. Paul Ferguson and Daniel Senie wrote the abstract above in May 2000 as part of RFC 2827. As an employee at Cisco Systems, where Ferguson was also employed at the time, I had reviewed RFC 2827 as part of a process to include its features in the Cisco IOS Software. Yes, that was 16 years ago! Over the past 16 years, cyber criminals have sharpened their tools to the point that today it is really easy and very low cost to launch DDoS attacks with impunity, helped by the fact that one can hide behind forged source IP addresses. Just last year, in February 2015, the Internet Society convened a roundtable bringing together network operators, vendors, leading security experts, and researchers in this area to discuss the problem of source IP address spoofing, with a goal to better understand the challenges of addressing it, help solve the problem, and identify paths to improve the situation going forward. It seems that BCP 38 can help make a dent in the arrogance of those who disrupt our lives at will. The logic is that without spoofed IP addresses, there will be no Distributed Denial of Service (DDoS) attacks. I am not stating that BCP 38, which essentially blocks (or drops) packets with forged source IP addresses from entering the Internet, will solve all DoS attacks. In fact, we know that there is a small chance that non-fraudulent packets will also be dropped and that network operators may have to handle those exceptions manually. But those efforts are a small price to pay for our protection. Some may argue that if it is that easy, why didn't people do it? I ask myself the same question every time there is one of these flamboyant attacks. Maybe some people don't know they can do something about it. Maybe they don't know they should do something about it. Or maybe they just don't know. That is one of the reasons Jighi is proudly sponsoring the Ivoire Cyber Security Conference on November 9-10 in an attempt to educate and build a community of concerned cyber citizens. For those who think that BCP 38 is not perfect, I say: it gives us a way to stop an attack before it causes systems to go down.
It also gives us a way to track weaponized packets to their source, unlike today's situation where we are left to speculate. Today's attack, as reported by CNN, crippled Amazon, Reddit, Twitter, Netflix, and others. Whatever the reasons for our lack of action, the current cyberspace is not sustainable. Attackers have more and more sophisticated tools, and we are unable to come together to tackle this phenomenon. I say, let's give BCP 38 a chance. Let's eliminate the easy things with seemingly easy solutions: DoS attacks, and their even nastier cousins, Distributed Denial of Service attacks, are easy to do and hard to deal with because of forged IP addresses. That situation is made worse because the attack cannot be quickly stopped. Why? Because forged source IP addresses make it impossible to determine where the attack is actually coming from.
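The core idea of BCP 38 ingress filtering is easy to express in code: at the edge where a customer network attaches, drop any packet whose source address does not belong to the prefixes assigned to that customer. The sketch below is a simplified model of that check, not router configuration; the prefixes and addresses are made-up example values.

import ipaddress

# Prefixes legitimately assigned to the customer behind this edge interface
# (hypothetical example values).
ASSIGNED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def permit_ingress(source_ip: str) -> bool:
    """BCP 38 style check: allow only packets sourced from assigned address space."""
    src = ipaddress.ip_address(source_ip)
    return any(src in prefix for prefix in ASSIGNED_PREFIXES)

print(permit_ingress("203.0.113.45"))  # True  - legitimate customer source
print(permit_ingress("10.99.99.99"))   # False - spoofed or misconfigured source

On real routers the same logic is implemented with ingress access lists or unicast reverse path forwarding checks applied at the provider's aggregation point.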
People all around the world rely on internet services, whether at offices and businesses to speed up their workflow or at home to search for information, play games, and watch movies. Whatever the case might be, having an internet connection has become a necessity for most people. To start using these services, you purchase a package from a local ISP and subscribe to it; once your modem and router have been set up, you can start using the connection. Choosing the package is one of the most essential steps, as it determines your speed and overall bandwidth. However, one thing people often forget is that selecting the connection type is also important. There are mainly two connection types. One is a cabled setup using Ethernet wires that have to be installed on both ends of your network. Alternatively, you can go for a wireless connection, which requires no additional cabling. While using Wi-Fi on your devices can be slower than using Ethernet cables, people still prefer it because of how clean it looks. However, many computer systems do not come with Wi-Fi capabilities, and many others have weak built-in Wi-Fi that makes it harder to catch signals. This is where Wi-Fi adapters come in: they can be used on your devices both to give them Wi-Fi and to increase their connection range. Does a Wi-Fi Adapter Affect Internet Speed? One question that many people ask is whether a Wi-Fi adapter affects internet speed. The answer can be a little complicated, but the information below should make it clear. Simply using a Wi-Fi adapter will not affect the speed of your internet connection; you can add multiple adapters to your network and the speed should stay the same. What does affect your speed is how far the Wi-Fi adapter is from the router: the further away you move from the router, the slower your connection will be. Using a good-quality Wi-Fi adapter will give you much better range than a cheap one, but no adapter has unlimited range, which is why it is important to place your router carefully. This will ensure that you can get signals all around your home. Sometimes you can even install additional routers to extend the range of your connection, and changing the position of your router can also improve signals. Just make sure that the signals are not being blocked by walls or other objects; you can even try moving things between the devices to increase signal strength. Finally, ensuring that your adapter settings are configured properly is also essential for it to work well.
Industry 4.0 is the new revolution that has changed the manufacturing industry. Manufacturers have started using artificial intelligence and industrial Internet of Things technology to boost their production and, at the same time, keep a check on their production equipment. Machine learning techniques have ensured that far less human intervention is needed for production. This new initiative has brought smart working environments where companies can make optimum use of their available resources. The equipment also needs to be fitted with smart technologies in order to achieve the level of automation required by a manufacturing company. Predictive maintenance can offer innovative solutions that help companies reach these targets. Companies want the manufacturing unit to run at optimal speed with minimum downtime. Machines with moving parts require maintenance at regular intervals to prevent wear and tear. There are two traditional ways to avoid unwanted breakdowns. Fixed Maintenance: Companies schedule maintenance of their equipment at fixed time intervals regardless of the condition of the machine. This approach is precautionary and can sometimes induce unnecessary expense. Condition-Based Maintenance: This is a smarter way to maintain a piece of equipment, as maintenance is done only when a machine requires it, and it is performed before a failure. Predictive maintenance is the best approach, as it can predict well in advance whether a machine will require maintenance before a system failure. Predictive maintenance, with condition monitoring and analysis of IoT data, can predict when a machine requires maintenance. It can improve technical support drastically by catching the errors that escape the human eye, like vibrations and sound emissions. New cutting-edge technologies like big data analytics can monitor large volumes of data in real time. The data is fed through low-power sensing devices installed directly on machines. These smart sensors consist of power nodes and microcontrollers, which offer many advantages over traditional condition-based maintenance.
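As a rough illustration of how sensor readings can feed predictive maintenance, the sketch below baselines a vibration signal and flags readings that drift well outside normal behavior. The window size, threshold, and simulated sensor values are hypothetical; production systems would use far richer models trained on historical failure data.

from statistics import mean, stdev

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Flag readings that deviate strongly from the recent baseline."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) / sigma > z_threshold:
            alerts.append((i, readings[i]))
    return alerts

# Simulated vibration amplitudes with one abnormal spike at index 45.
vibration = [1.0 + 0.02 * (i % 5) for i in range(60)]
vibration[45] = 2.5
print(flag_anomalies(vibration))

An alert like the one printed here would typically open a maintenance ticket well before the component actually fails.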
What is Air Cooling? Air cooling refers to a method used for managing the temperature in various systems, particularly in computing and electronics, by utilizing air as a cooling medium. This cooling process involves directing air over hot components or surfaces to transfer the heat and then expelling it away from the system, thereby regulating the temperature and preventing overheating. In the context of computer systems and data centers, air cooling is a critical aspect of ensuring the proper functioning and longevity of hardware. It involves the use of fans, heatsinks, and ventilation systems to dissipate heat generated by electronic components such as CPUs, GPUs, and power supply units. The efficiency of air cooling depends on factors like airflow management, ambient temperature, and the design of the cooling components. How Air Cooling Works Air cooling systems typically function by drawing cooler air from the environment and directing it over hot components. Heatsinks, which are usually made of metal with high thermal conductivity like aluminum or copper, are attached to heat-generating components. These heatsinks have fins or ridges that increase their surface area, enhancing their ability to dissipate heat into the air. Fans are then used to move the heated air away from the components and out of the system's casing, allowing cooler (data center ambient) air to take its place. The effectiveness of air cooling is influenced by several factors: - Airflow Dynamics: Proper channeling of air within the server is essential for effective cooling. This involves the strategic placement of fans and the design of the system's internal layout (i.e. not blocking the airflow where it is needed the most). - Ambient Temperature: The surrounding temperature plays a significant role in the cooling process. Cooler ambient temperatures generally enhance the cooling effect. - Component Layout: The arrangement of components within a system can affect heat distribution and the efficiency of air cooling. Air cooling is a widely used method due to its simplicity, cost-effectiveness, and ease of maintenance. However, it may be less effective in extremely high-temperature environments or in high-performance systems that generate substantial amounts of heat. High end systems for AI and HPC will typically contain a number of CPUs and GPUs, each generating a substantial amount of heat. Advantages and Limitations of Air Cooling Advantages of Air Cooling - Cost-Effectiveness: Air cooling systems are generally more affordable than other cooling methods, such as liquid cooling. The components (fans and heatsinks) are less expensive and easier to replace or upgrade. - Simplicity and Ease of Installation: Air cooling systems are straightforward to install and maintain. They do not require complex setups like liquid cooling systems, making them accessible for most data center operators. - Reliability: With fewer moving parts compared to liquid cooling systems, air cooling solutions often have a lower risk of failure. The absence of liquids reduces the risk of leaks that could damage electronic hardware. Limitations of Air Cooling - Limited Cooling Capacity: Air cooling may not be sufficient for extremely high-performance systems, such as overclocked CPUs or GPUs, where the heat output exceeds the cooling capacity of air. - Dependence on Ambient Temperature: The efficiency of air cooling is significantly affected by the surrounding temperature. In hot environments, its cooling effectiveness is reduced. 
- Noise Levels: Fans used in air cooling systems can produce noticeable noise, especially when running at high speeds. This can be a concern in environments where noise reduction is essential. - Space Requirements: Air cooling systems, particularly those with large heatsinks and multiple fans, can be bulky and require more space within the system's chassis. Despite these limitations, air cooling remains a popular choice for many applications due to its practical balance of cost, ease of use, and effectiveness for standard computing needs. Applications and Best Practices for Air Cooling Applications of Air Cooling Air cooling is utilized in a variety of contexts, from consumer electronics to industrial applications: - Personal Computers and Laptops: It is the most common cooling method in personal computing devices, managing the heat generated by CPUs, GPUs, and other internal components. - Data Centers and Servers: Air cooling is employed in data centers to maintain optimal temperatures for servers and networking equipment, often involving sophisticated airflow management systems. - Telecommunications Equipment: In telecom industry, air cooling is used to regulate the temperature of equipment in network towers and control rooms. - Industrial Machinery: Various industrial machines and electronic devices use air cooling to prevent overheating during operation. Best Practices for Air Cooling To maximize the efficiency of air cooling systems, certain best practices should be followed: - Effective Airflow Management: Ensuring a clear path for air to flow through the system is crucial. This involves strategic placement of components and cables to avoid obstructing airflow. Either within the server or external to the server. - Regular Maintenance: Dust and debris can accumulate on fans and heatsinks, reducing their effectiveness. Regular cleaning is essential to maintain optimal cooling performance. - Quality Components: Investing in high-quality fans and heatsinks can significantly improve cooling efficiency. Larger fans can move more air at lower speeds, reducing noise without compromising on cooling. - Environmental Control: Keeping the ambient environment cool can enhance the effectiveness of air cooling systems. This includes considerations for room temperature and humidity. - Monitoring Temperature: Regular and constant monitoring of system and chip temperatures can help in early detection of cooling issues, allowing for timely intervention and adjustments. By adhering to these best practices, air cooling systems can be optimized to provide effective and reliable cooling in various applications, ensuring the longevity and performance of the equipment. Frequently Asked Questions about Air Cooling - What are the main components of an air cooling system in computers? The primary components of an air cooling system in computers include fans, heatsinks, and air ducts. Fans are responsible for moving air in and out of the system, heatsinks are attached to heat-generating components and help dissipate heat, and air ducts direct airflow efficiently within the system. - How often should air cooling systems be maintained for optimal performance? Additionally, it's beneficial to use software tools that monitor the temperatures of CPUs and GPUs. These tools can alert users if there are any issues or if temperatures start to rise unexpectedly, indicating potential cooling inefficiencies. 
This kind of preventative maintenance, aided by technology, can help in addressing problems before they escalate, ensuring consistent performance and potentially extending the lifespan of the hardware. - Can air cooling be effective for high-performance gaming PCs? Air cooling can be effective for high-performance gaming PCs, especially when using high-quality fans and heatsinks and ensuring good airflow within the case. However, for extreme overclocking or in very high-temperature environments, more advanced cooling solutions, like liquid cooling, might be necessary. - What are the signs of inadequate air cooling in a computer system? Signs of inadequate air cooling in a computer system include elevated internal temperatures, frequent thermal throttling, system instability or crashes, and excessive fan noise. Monitoring hardware temperatures can help in identifying cooling issues early. - How does ambient temperature affect air cooling efficiency? Ambient temperature plays a significant role in air cooling efficiency. Cooler ambient temperatures enhance the cooling effect as the temperature difference between the components and the surrounding air is greater, facilitating more efficient heat transfer.
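One way to quantify the relationship between heat load, airflow, and temperature rise discussed above is the basic sensible-heat relation Q = m_dot x cp x deltaT. The sketch below rearranges it to estimate the airflow needed for a given heat load; the component wattage and allowed temperature rise are illustrative assumptions, and real sizing must also account for altitude, fan curves, and recirculation.

AIR_DENSITY = 1.2         # kg/m^3, near sea level
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def required_airflow_m3_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Airflow needed so exhaust air is delta_t_k warmer than the intake air."""
    return heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

# Hypothetical example: a 1.5 kW server with a 12 K allowed air temperature rise.
flow = required_airflow_m3_per_s(1500, 12)
print(f"{flow:.3f} m^3/s  (~{flow * 2118.88:.0f} CFM)")

The same formula also shows why ambient temperature matters: a hotter intake leaves less allowable temperature rise, so more airflow is needed to remove the same heat load.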
Cryptocurrency: The news of FTX going bankrupt has been in the headlines lately. The CEO, Sam Bankman-Fried, was arrested for allegedly channelling customer funds to FTX's sister company, leaving a $10 billion hole in its finances. That's a lot of money. It looks like cryptocurrencies and their exchanges are a big deal, but what are they exactly? Today, we're introducing cryptocurrency basics, providing a good starting point for understanding the latest news about digital assets. Cryptocurrency, or crypto, is a form of digital currency. As you might have guessed from the name, it uses "cryptography" for safer transactions. Transactions happen in the digital, virtual world, peer-to-peer. When you make or receive a payment, the record is registered on a blockchain. Cryptocurrencies are also stored in a digital wallet, not a physical one. Because crypto does not need intermediaries, like a bank or the government, it is often described as the currency of a "decentralized network". You might have heard of one of the first cryptocurrencies: Bitcoin. It was introduced in 2009 amid a backlash against traditional banking systems. The prices of cryptocurrencies fluctuate over a wide range, especially when speculators drive prices up. How do you get a cryptocurrency, then? One way is buying from brokers or exchanges, both of which offer ways to purchase various types of cryptocurrencies. Another is "mining" cryptocurrencies, in other words, creating new cryptocurrencies instead of buying them. You use your computer's power to solve complex mathematical puzzles. Afterwards, you must validate your transactions on a blockchain system and add them to a public database. When you hold cryptocurrencies, you do not own any fiat money. You can still make payments in cryptocurrency, but such a transaction is not a transfer of fiat funds; it is the transfer of a unit of digital assets from one party to another. Making transactions in cryptocurrency is not a niche hobby either. According to Bankrate, the total value of all existing cryptocurrencies is $919 billion as of 2022. Returning to the story of FTX, the company was the third-biggest cryptocurrency exchange by volume before its demise. With the company's failure, reports suggest that cryptocurrency is not inherently "decentralized" anymore, as there are leaders at exchange platforms who make decisions and move the market. Big intermediaries like FTX attracted more users and investors and expanded the cryptocurrency market. In the wake of FTX's collapse, some suggested that the government should intervene to settle the situation, as the company was "too big to fail". Despite the recent turmoil, cryptocurrencies remain popular to this day. Will they remain as popular in 2023?
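The "complex mathematical puzzles" mentioned above are, in Bitcoin-style proof-of-work mining, a brute-force search for a hash that starts with a required number of zeros. The toy sketch below shows the idea with an unrealistically easy difficulty; real networks use far harder targets, different block structures, and specialized hardware.

import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Find a nonce whose SHA-256 hash of (data + nonce) starts with enough zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice pays bob 1 unit")
print(nonce, digest)

Whoever finds a valid nonce first gets to append the block to the public ledger, which is why mining consumes so much computing power as difficulty rises.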
Once a file is opened for use it is referred to by its file number. Therefore, each data and index file must be assigned a unique number by the programmer. Data files, index files and index members all draw file numbers from the same pool; therefore, data files and indexes cannot have the same number assigned. An index member is an index stored in the same physical file with other indexes. The first additional index member in an index file is given a keyno one greater than the keyno of the host index. The second index member in an index file is given a keyno two greater than the host keyno, and so on. For example, if data file 0 has 4 total indexes (1 primary index and 3 members), the total consumed file numbers would be 5 and would be numbered 0-4 (the primary index would be file number 1). The maximum number of c-tree files that can be simultaneously opened is controlled by #define MAXFIL found in ctopt2.h, located in the ctree/include directory. MAXFIL also controls the largest file number that may be used. To adjust the define from its default value of 1024, 2048 for client libraries, place the following define in ctoptn.h to override the MAXFIL define. #define ctMAXFIL 150 /* set the max number of files to 150 */ Multi-user, non-server implementations using a dummy lock file have additional file numbering constraints. See the Multi-User Concepts chapter of this manual for further details.
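The numbering rule described above can be modeled in a few lines. The sketch below is not FairCom's API; it is a hypothetical illustration that assumes, as in the example in the text, that the host index is given the number immediately after its data file, with each additional member taking the next number from the same pool.

def assign_file_numbers(data_file_no: int, index_members: int) -> dict:
    """Model c-tree style numbering: data file, host index, then index members."""
    numbers = {"data_file": data_file_no, "host_index": data_file_no + 1}
    # Each additional member's keyno is one greater than the previous index's keyno.
    for m in range(1, index_members + 1):
        numbers[f"member_{m}"] = data_file_no + 1 + m
    return numbers

# Data file 0 with a primary index and 3 members consumes file numbers 0 through 4.
print(assign_file_numbers(0, 3))

In real code the programmer chooses the data file and host index numbers explicitly; the point of the sketch is only that members consume consecutive numbers from the same shared pool, which must stay below MAXFIL.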
by David Bloxberg, Senior Global Marketing Manager, VIPRE Security Group Cybersecurity Awareness Month, established twenty years ago, continues to address the growing threats in the digital landscape. Launched by the U.S. Department of Homeland Security and the National Cyber Security Alliance in October 2004, this annual event aims to educate organizations and individuals about cybersecurity essentials. Over the past two decades, the initiative has expanded its scope and impact, becoming a global movement highlighting cybersecurity’s critical role in our daily lives. Experts emphasize that security awareness training, including continued training year to year, is essential for any organizational plan. While Cyber Security Awareness Month serves as a crucial reminder of the importance of training, more is needed. Continuous education and vigilance are necessary to maintain robust cybersecurity defenses throughout the year. The Impact and Evolution of Cyber Security Awareness Looking back over the past 20 years, we see a period defined by evolving cyber threats. The early 2000s saw the rise of the internet and the dawn of e-commerce, which brought unprecedented opportunities alongside new security challenges. As technology evolved, so did the sophistication of cyber attacks. The threat landscape has continually shifted from simple viruses and malware to complex ransomware and state-sponsored hacking, necessitating a proactive and informed approach to cybersecurity. Cyber Security Awareness Month has played a pivotal role in fostering a culture of security awareness. It has highlighted the importance of basic cyber hygiene practices such as using strong passwords, recognizing phishing attempts, and updating software. Moreover, it has brought attention to the need for advanced measures like multi-factor authentication, encryption, and incident response planning. Reflecting on the evolution of technology and cybersecurity is essential as we look back on 20 years of Cyber Security Awareness Month observances. This reflection allows us to appreciate progress, recognize ongoing challenges, and prepare for future threats. By understanding the past, we can better equip ourselves to protect our digital world in the future. This milestone is a celebration and a call to action for continued vigilance and innovation in cybersecurity. Evolution of Computing Machines and Related Security Risks Early 2000s: Dominance of Desktop PCs In the early 2000s, desktop PCs reigned supreme, featuring single-core processors and limited storage and memory capabilities. Internet connectivity was sparse, with many households relying on dial-up connections. The primary security risks during this period stemmed from burgeoning internet usage and limited cyber security awareness among users. Worms, like the infamous ILOVEYOU launched in 2000, spread rapidly due to the lack of sophisticated antivirus software and user knowledge. Email was a common vector for these threats, with users often unknowingly opening malicious attachments. The limited processing power and memory of desktop PCs made them vulnerable to performance degradation and system crashes caused by malware. Additionally, the rudimentary state of firewalls and intrusion detection systems exposed many systems to network-based attacks. This period highlighted the critical need for improved cyber security awareness to protect users and their devices from growing threats. The ILOVEYOU worm was one of the first major incidents. 
It targeted Microsoft Outlook users, infecting at least 10% of internet-connected systems within hours and causing up to $15 billion in damages. In response, Microsoft released an Outlook update to combat the worm’s effects, including blocking unsafe attachments and warning users if a program attempted to send emails on their behalf. Mid-2000s to Early 2010s: Rise of Portable Computing The mid-2000s to early 2010s witnessed significant advancements in computing technology. Multi-core processors became mainstream, offering enhanced performance and efficiency. Laptops and portable computing devices gained popularity, providing users with mobility and flexibility. Internet access expanded, with faster broadband connections becoming widely available. This era also saw the early adoption of cloud computing, revolutionizing data storage and processing. The security landscape evolved in response to these changes, driven by increased cyber security awareness. The increased use of portable devices introduced new risks, such as theft and physical loss. Mobile computing also meant that sensitive data was often stored on devices that could easily be misplaced or stolen. The rise of broadband internet brought about faster and more complex cyber-attacks. Phishing attacks became more sophisticated, and social engineering tactics improved, making it easier for cybercriminals to deceive users. The early stages of cloud computing introduced concerns about data privacy and security. As businesses began migrating their data to the cloud, questions arose regarding the security measures employed by cloud service providers. Data breaches became a significant concern, highlighting the need for robust encryption and secure authentication mechanisms. This period underscored the importance of cyber security awareness in mitigating new risks and protecting sensitive information. 2010s to Present: The Age of Ubiquitous Connectivity From the 2010s to the present, the proliferation of smartphones and tablets has transformed the computing landscape. IoT (Internet of Things) devices have further expanded the digital ecosystem, connecting everything from household appliances to industrial machinery. Significant advancements in cloud computing have enabled seamless data storage and processing. Developing AI and machine learning technologies has introduced new possibilities and challenges. At the same time, the growth of edge computing and 5G networks has further accelerated connectivity and data transfer speeds. The security risks in this period are multifaceted and complex. The widespread use of smartphones and tablets has made mobile security a critical concern. Mobile malware, app-based threats, and vulnerabilities in mobile operating systems pose significant risks to users’ Personally Identifiable Information (PII) and Protected Health Information (PHI). IoT devices, often designed with minimal security features, have become prime targets for cyber attacks. Compromised IoT devices can launch large-scale attacks, such as Distributed Denial of Service (DDoS) attacks. While offering numerous benefits, cloud computing has also introduced new security challenges. Data breaches and cloud misconfigurations have exposed sensitive information, emphasizing the need for comprehensive security measures, including data encryption, access controls, and continuous monitoring. AI and machine learning technologies have both defensive and offensive implications. 
While they enhance threat detection and response capabilities, cybercriminals are also using them to develop more sophisticated and targeted attacks. The emergence of 5G networks and edge computing has improved data processing and connectivity and expanded the attack surface. Securing interconnected systems demands a holistic approach, integrating device security, network security, and strong cybersecurity policies. Changes in Cybersecurity Practices Over the Last 20 Years The Early 2000s: Basic Foundations of Cyber Security Awareness In the early 2000s, cyber security awareness was in its infancy, with essential tools forming the primary defense against digital threats. Antivirus software, firewalls, and simple encryption methods were the cornerstones of cybersecurity practices. Antivirus software relied heavily on signature-based detection techniques to identify and eliminate known viruses and malware. Firewalls acted as gatekeepers, filtering network traffic to prevent unauthorized access. Encryption methods, though essential, were used to protect sensitive data during transmission and storage. Despite these measures, cyber security awareness training among the general public and organizations was limited, even though there were significant conversations and efforts before Cyber Security Awareness Month was established to bolster organizational security. For example, the November 2000 Information Security Awareness Program Proposal by Michael E. Whitman, Ph.D. for Kennesaw State University outlines a training program designed to enhance security knowledge, compliance, and accountability among employees, students, and faculty through targeted training and engagement efforts. Dr. Whitman, who holds a Ph.D. and certifications in CISM and CISSP, serves as the Executive Director of the Institute for Cybersecurity Workforce Development and is a Professor of Information Security at Kennesaw State University in Georgia. The Mid-2000s to Early 2010s: Evolution and Increased Cyber Security Awareness The mid-2000s to early 2010s marked a period of significant evolution in cybersecurity practices, driven by growing cyber security awareness. The emergence of more sophisticated malware, such as spyware, rootkits, and advanced worms, required more robust security measures. Intrusion detection systems (IDS) were developed to track network traffic, detect suspicious activity and potential threats, and provide deeper insights into network behavior and anomalies. Increased cyber security awareness led to a greater focus on network security, with organizations investing in advanced network protection solutions. The idea of defense in depth, which promotes using multiple layers of security controls, gained popularity. Security information and event management (SIEM) systems began analyzing data from various sources, improving threat detection and response capabilities. This period also saw the recognition of cybersecurity as a critical profession. The demand for skilled cybersecurity professionals grew, and specialized cybersecurity training programs and certifications were established. As a result, organizations were able to develop more comprehensive security strategies and improve their overall security posture, driven by enhanced cyber security awareness. 2010s to Present: Advanced Cybersecurity Practices and Cyber Security Awareness From the 2010s to the present, advancements in cybersecurity practices have been closely linked to increasing cyber security awareness.
Advanced threat detection solutions, such as Endpoint Detection and Response (EDR), have significantly improved cybersecurity measures against sophisticated attacks. These solutions offer real-time monitoring, threat hunting, and automated response capabilities, significantly enhancing organizations’ ability to mitigate threats promptly. The adoption of zero-trust security models reflects heightened cyber security awareness. These models assume threats can exist inside and outside the network perimeter, leading to strict access controls and continuous verification of user identities, minimizing lateral movement within networks. Today, multi-factor authentication (MFA) is a standard security practice that adds a layer of protection beyond traditional passwords. MFA mandates that users provide multiple verification forms, such as passwords, fingerprints, or a one-time code, making it harder for attackers to compromise accounts. Ransomware and advanced persistent threats (APTs) have driven increased cyber security awareness. Ransomware attacks encrypt critical data and demand ransom payments for its release, while APTs involve prolonged, targeted attacks by sophisticated adversaries. These threats have highlighted the need for comprehensive backup strategies, incident response plans, and ongoing threat intelligence. Regulatory and compliance requirements have furthered cyber security awareness. Regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have imposed stringent data protection and privacy obligations on organizations. Compliance with these regulations requires robust security measures, including data encryption, access controls, and regular security assessments. So, the evolution of cybersecurity practices over the past 20 years has been profoundly influenced by growing cyber security awareness. From essential antivirus software and firewalls in the early 2000s to advanced threat detection, zero-trust models, and regulatory compliance in the present day, the field of cybersecurity has continually adapted to address emerging challenges and protect against increasingly sophisticated threats. Impact of Technological Advances on Cybersecurity Increased Attack Surface Technological advances have significantly increased the attack surface for cyber threats. With the proliferation of devices and endpoints, including smartphones, tablets, IoT devices, and wearable technology, there are more entry points for potential cyber-attacks. Each device connected to a network presents a potential vulnerability, making securing interconnected systems increasingly complex. As more systems become integrated, ensuring that every link in the chain is secure becomes a daunting challenge, requiring comprehensive strategies to protect against a wide array of threats. Cyber security awareness among users and organizations is crucial to effectively managing this expanded attack surface. Advancements in Defense Mechanisms Cybersecurity has evolved with advanced defense mechanisms in response to the growing attack surface. Artificial intelligence (AI) and machine learning have been game-changers in threat detection. These technologies can process large volumes of data in real time, detecting patterns and anomalies that may signal a cyber threat. AI-driven systems can evolve and learn from emerging threats, offering a flexible defense against constantly changing cyberattacks. Automation in cybersecurity processes has also enhanced protection. 
Automated systems can quickly respond to threats, reducing the time between detection and mitigation. This speed is crucial in preventing or minimizing the impact of attacks. Additionally, advancements in encryption and data protection techniques have strengthened defenses against unauthorized access and data breaches. Modern encryption methods ensure that sensitive information remains secure, even if intercepted by malicious actors. Cyber Security Awareness: Challenges and Opportunities Despite these advancements, balancing innovation with security remains a significant challenge. As technology continues to advance, new vulnerabilities inevitably emerge. Ensuring that security measures keep pace with technological innovation is a constant struggle. Cyber security awareness is pivotal in addressing these challenges, as informed users and organizations are better equipped to implement adequate security measures. Additionally, the cybersecurity skills gap poses a significant challenge. The swift advancement of cyber threats has surpassed the growth of a skilled cybersecurity workforce. Bridging this gap demands investment in cyber security awareness training, equipping professionals with the skills to tackle sophisticated threats. However, these challenges also present opportunities. The increasing importance of cybersecurity has led to greater awareness and investment in the field. Organizations recognize the need to prioritize cybersecurity, drive innovation, and develop more effective security solutions. Future trends and potential developments in cybersecurity include the continued integration of AI and machine learning, further automation of security processes, and the advancement of quantum computing, which promises to revolutionize encryption techniques. As we look to the future, the ongoing evolution of technology will shape the cybersecurity landscape. Continuous adaptation and innovation are essential. Leveraging advanced technologies and addressing the cyber security awareness skills gap can develop robust defenses. Cyber security awareness training and proactive strategies will be crucial for a secure digital future.
The explosion of connected devices, including Internet of Things (IoT), Internet of Medical Things (IoMT), and Operational Technology (OT), is transforming businesses and improving efficiencies. However, the rise of these connected devices introduces new challenges to governance, risk, and compliance. There's a good reason for this. These connected devices are not designed with security in mind, often run vulnerable and outdated operating systems, and cannot support endpoint security agents. The ability of connected devices like IoT to send and receive data and connect to the Internet means that they can be an attractive avenue for threat actors to infiltrate the network. Finally, greater interconnectedness means that rapid response and automated remediation are required to quickly shut down lateral movement of threats from device to device. Here are some considerations for governance, risk, and compliance of connected devices:
- Know what devices are actually on the network. You can't secure what you don't know about. Therefore, it is vital to gain visibility into every connected device on your network. This includes IT, IoT, IoMT, and OT devices, as well as ephemeral assets that may go offline at any time and then reappear in a new physical and network location. High-fidelity information is critical to truly understand and classify these devices: for example, make, model, operating system, location, applications, and more. Are there mission-critical devices? Are there vulnerable devices? Understand the risk profile for these devices, from manufacturing recalls, medical device advisories, and vulnerabilities to devices running outdated operating systems. The one caveat with the discovery process is that it must be passive so that it does not impact the operations of these sensitive IoT devices.
- Identify risks and baseline behaviors. Once you know what devices you have, you need to know what risks they bring to the network. Risks can range from vulnerabilities, exploits, and recalls to weak passwords and certificates. In addition, you also want to understand device behavior patterns. Mapping communications patterns and baselining device behavior is crucial to identifying anomalous behaviors, such as a rogue or infected device communicating with a bad domain. This can be accomplished using AI and machine learning, as devices have very deterministic communications patterns based on their functions.
- Bring devices with orphaned users into compliance. In the Ordr Rise of the Machine report, which profiled risks and adoption of about 12 million connected devices across more than 500 deployments, 55% of deployments were found to have devices with orphaned users. Devices with orphaned users are devices whose users have left the organization or changed roles within it. Devices with orphaned accounts retain the same rights as when they were associated with an active user, and therefore may be a gateway to privilege escalation and lateral movement. As part of a robust GRC strategy, security teams need to ensure that all devices are being utilized only by current users and by those with appropriate privileged access.
- Consider Zero Trust policies. With all devices accounted for, identified, and categorized, and risks and communications patterns understood, IT and security teams can architect the appropriate Zero Trust policies based on the appropriate level of trust and least privilege access.
The concept of "trust" for connected devices should be determined based on attributes such as what the device is, what operating system it is running, its compliance status, where it is connecting from, and whether it is behaving as expected. Both macro-segmentation and flow-based micro-segmentation policies are critical to enabling Zero Trust, ranging from a broad policy that prevents consumer IoT devices like Alexas from accessing the corporate network to monitoring and segmenting mini-cluster "cells" of IT, IoT, and Operational Technology devices that serve a specific function within manufacturing. Using the security guidelines above, organizations can embrace connected devices while ensuring that they address compliance requirements and govern risks appropriately.
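To make the baselining idea concrete, here is a minimal sketch that compares a device's observed destinations against its learned communication profile and flags anything outside it. The device classes, domains, and alerting logic are hypothetical; a real deployment would learn these profiles automatically with machine learning rather than hard-coding them.

# Learned (baseline) destinations per device class - hypothetical example values.
BASELINE = {
    "infusion_pump": {"ehr.hospital.local", "vendor-updates.example.com"},
    "ip_camera": {"nvr.corp.local"},
}

def check_flows(device_class: str, observed_destinations: set) -> set:
    """Return destinations that fall outside the device's baseline profile."""
    allowed = BASELINE.get(device_class, set())
    return observed_destinations - allowed

unexpected = check_flows("ip_camera", {"nvr.corp.local", "bad-domain.example.net"})
if unexpected:
    print("Anomalous communication detected:", unexpected)

An alert like this would feed the response and segmentation policies described above, for example by quarantining the device or tightening its allowed flows.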
In the realm of data storage, object storage has emerged as an efficient and scalable solution. This article aims to provide a comprehensive understanding of what object storage is, how it works, and its benefits. What is object storage? Object storage, also known as object-based storage, is a strategy that manages and manipulates data storage as distinct units, called objects. These objects are kept in a single repository and are not nested as files in a folder, unlike traditional file storage systems. Each object includes the data, associated metadata, and a globally unique identifier. How does object storage work? The functioning of object storage is quite different from traditional file or block storage. Instead of using a hierarchical structure, it uses a flat address space. When data is stored, it’s bundled with customizable metadata and a unique identifier. This identifier is used to access the object, making it easier to locate and retrieve data across various locations. Object storage vs file storage vs block storage In object storage, data gets treated as objects. Each object contains the data, metadata, and a unique identifier. The flat address space of object storage allows for easy scalability, making it suitable for managing large volumes of unstructured data. Moreover, the metadata attached to each object facilitates advanced data management and analytics. File storage, on the other hand, organizes data in a hierarchical structure of files and folders. This system works well for human-readable data and shared files. However, when dealing with large volumes of data, the hierarchical structure can become complex and difficult to manage. Block storage splits data into uniform-sized blocks. Each block carries a unique identifier, but unlike object storage, it does not carry metadata. Block storage performs excellently with structured data sets and is commonly used in Storage Area Network (SAN) environments where speed is a priority. The choice between object storage, file storage, and block storage depends largely on the specific data requirements and the intended application. Benefits of object storage Object storage offers several advantages over traditional storage methods: - Scalability: Object storage can scale out to store massive amounts of data, making it an excellent option for businesses expecting rapid data growth. - Metadata Management: With object storage, each object’s metadata can be customized, allowing for a high level of detail that can enhance data analysis and management. - Data Durability: Object storage systems often include built-in data protection mechanisms such as redundancy and erasure coding, enhancing the durability and reliability of your data. - Cost-Effective: Object storage is typically more cost-effective than traditional file or block storage, particularly for storing large amounts of data. Object storage use cases Object storage is versatile and can be used in various scenarios. It is particularly useful in cloud storage, where data is distributed across multiple servers. Other use cases include data archiving and backup, big data analytics, and web content hosting. The Power of Object Storage Object storage is a powerful solution in the data-centric world of today. It offers a scalable, reliable, and efficient method for managing and storing large amounts of data. Whether you are dealing with big data analytics or cloud storage, object storage can add significant value to your operations. 
Understanding its workings and benefits can help you make informed decisions about your data storage strategy.
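The flat, identifier-addressed model described above can be sketched in a few lines. The in-memory store below is purely illustrative (real object stores add durability, erasure coding, and distributed placement), but it shows the three parts of an object: the data, its metadata, and a globally unique identifier used for retrieval.

import uuid

class TinyObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: identifier -> (data, metadata)

    def put(self, data: bytes, metadata: dict) -> str:
        object_id = str(uuid.uuid4())  # globally unique identifier
        self._objects[object_id] = (data, metadata)
        return object_id

    def get(self, object_id: str):
        return self._objects[object_id]

store = TinyObjectStore()
oid = store.put(b"quarterly report", {"type": "pdf", "department": "finance"})
data, meta = store.get(oid)
print(oid, meta)

Because there is no folder hierarchy, scaling out is mostly a matter of spreading this identifier-to-object mapping across more nodes, and the customizable metadata is what enables search and analytics over the stored objects.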
It is widely known that the COVID-19 pandemic had a ripple effect on many daily operations across the globe; manufacturing problems caused shipping delays, shipping delays resulted in retail shortages, and retail shortages created a shortage of food items and raw materials for consumers. A study conducted by McKinsey & Company confirmed that 73 percent of supply-chain executives encountered issues in their supplier base, with 75 percent facing problems with distribution and production of materials. Likewise, 85 percent of respondents said they experienced trouble with inefficient digital technologies in their supply chains. The COVID-19 pandemic has created a long-lasting effect on the supply chain and logistics industry. Not only has this resulted in supplier and production problems, but it has also produced labor shortages, a lack of equipment availability, and increased inflation. The current supply chain climate can appear highly unpredictable, but the Internet of Things (IoT) can aid in mitigating the domino effect of global bottlenecks. Inefficiencies in supply chain digitization slow production and introduce errors across the bigger picture. Businesses face a range of logistics challenges now and in the coming years, but whatever hurdles you run into, there are IoT solutions to help mitigate them. The cold chain monitoring market is one that relies on temperature-controlled technology to deliver goods from point A to point B. Rapid advancements in IoT have allowed this industry to grow exponentially. According to Future Market Insights, this market is anticipated to grow to a value of $17.8 billion by 2031. IoT technology monitors and regulates the cargo for sensitive goods, such as pharmaceutical drugs and food products. This is accomplished through real-time sensor readings, and companies can track the pallets based on current conditions. Real-time monitoring can reduce complications in the supply chain process. The later a problem is noticed or addressed, the more time-consuming and expensive it becomes to resolve. Another advantage of IoT in cold chain monitoring is its impact on sustainability. Preventing larger-scale problems can have a large impact on the environment: a lower risk of damaged and spoiled goods results in less waste associated with tarnished cargo. Global logistics can seem tricky, especially if you have high-value shipments traveling across multiple modes of transport. Alleviating damage, theft, or loss is crucial when optimizing your supply chain, and gaining better visibility can help guarantee the safety of your parcels. Powerful IoT technology uses GPS location information to track any shipment anywhere in the world. If an incident were to occur, the robust systems notify your team instantly, so you are not left worrying about the whereabouts and conditions of your inventory. KORE offers a multitude of comprehensive solutions that best fit your supply chain management needs. From critical asset tracking and cold chain monitoring to multi-modal transportation and inventory management, we provide the peace of mind needed to preserve and protect your assets. To discover more about real-time monitoring and visibility services, download our datasheet, "Container Tracking Solutions". Stay up to date on all things IoT by signing up for email notifications.
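A simplified version of the real-time cold chain check described above might look like the sketch below: each incoming sensor reading is compared against the allowed temperature band for the cargo, and an alert is raised as soon as a reading falls outside it. The band and the readings are hypothetical example values.

# Allowed band for a hypothetical pharmaceutical shipment, in degrees Celsius.
MIN_TEMP_C, MAX_TEMP_C = 2.0, 8.0

def monitor(readings):
    """Yield (index, value) for every reading outside the allowed band."""
    for i, temp in enumerate(readings):
        if not (MIN_TEMP_C <= temp <= MAX_TEMP_C):
            yield i, temp

pallet_readings = [4.8, 5.1, 5.0, 9.3, 5.2]
for index, temp in monitor(pallet_readings):
    print(f"ALERT: reading {index} out of range at {temp} C")

In a deployed system the readings would stream from cellular or satellite-connected sensors on the pallet, and the alert would trigger a notification to the logistics team rather than a print statement.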
https://www.korewireless.com/news/how-iot-is-transforming-the-supply-chain
What is Secure Collaboration

Secure collaboration is a strategy to protect copyrighted digital content and media from being stolen, shared, and sold illegally. The strategy can include software, watermarks, and other technologies to limit unauthorized spread. Organizations use these technologies and systems to limit and prevent the illicit use of copyrighted material, including its illegal sharing, downloading, and distribution over telecommunication networks.

Fortra's secure collaboration solution encrypts and controls access to sensitive files wherever they go. Taking a Zero Trust approach to file sharing, collaboration with anyone – external or internal – is always quick and secure, with the option to revoke access instantly at any time.

Secure collaboration tools allow the user to maintain controls over files, such as:
- When and how a document or content can be accessed or viewed
- How long the document or content can be viewed or accessed
- Who can edit, save, or otherwise modify the content
- Whether a file can be shared or forwarded to a third party
- Which actions can be performed or allowed on the content, like copying, printing, or taking screenshots
- How many times such operations, like printing, are allowed

Secure Collaboration vs. Digital Rights Management

Digital rights management (DRM) grew out of the need to address the explosive growth in piracy after the advent of the internet, especially in the film and music industries. Digitization reduced the cost of reproducing copyrighted materials and enabled large-scale distribution of pirated digital assets.

As the world has become more decentralized, the variety of intellectual property and sensitive content that needs to be shared online has exploded. With this increase in collaborative needs, nearly everyone within an organization needs to collaborate externally, and DRM has evolved into secure collaboration, which takes a more modern and user-friendly approach. Secure collaboration tools put data and content controls into everyone's hands, typically without relying on an organization's IT team.

Fortra's Secure Collaboration protects your sensitive files everywhere they travel. In today's highly collaborative, cloud-based and mobile-centric work environment, Vera provides simple, flexible, transparent data security, enabling businesses of all sizes to protect any file, on any device, anywhere it travels.
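As a rough illustration of the per-action controls listed above, here is a small sketch of a default-deny file policy check. The field names, actions, and rules are assumptions made for the example; they are not Fortra's data model or API.

```python
# Illustrative default-deny policy check for a shared file (not a real product API).
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FilePolicy:
    owner: str
    viewers: set           # users allowed to view
    editors: set           # users allowed to edit/save
    expires: datetime      # access lapses after this time
    allow_print: bool = False
    max_prints: int = 0
    prints_used: int = 0
    revoked: bool = False  # flipping this revokes access instantly

    def can(self, user: str, action: str) -> bool:
        now = datetime.now(timezone.utc)
        if self.revoked or now > self.expires:
            return False
        if action == "view":
            return user in self.viewers | self.editors | {self.owner}
        if action == "edit":
            return user in self.editors | {self.owner}
        if action == "print":
            return user == self.owner and self.allow_print and self.prints_used < self.max_prints
        return False  # anything not explicitly allowed is denied

policy = FilePolicy(owner="alice", viewers={"bob"}, editors=set(),
                    expires=datetime(2030, 1, 1, tzinfo=timezone.utc),
                    allow_print=True, max_prints=2)
print(policy.can("bob", "view"))   # True
print(policy.can("bob", "edit"))   # False
policy.revoked = True
print(policy.can("bob", "view"))   # False once access is revoked
```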
https://www.fortra.com/solutions/data-security/secure-collaboration
The rapid evolution of wearable technology has been a fascinating journey, but the advancements we’re poised to see in the near future promise to redefine human-computer interactions and elevate user experiences to an unprecedented level. As current trends move beyond smart glasses, the integration of artificial intelligence (AI) into wearable devices is set to be the driving force for this next wave of innovation. This shift towards AI-driven wearables opens up a world of possibilities, from health monitoring to augmented reality, all while raising important questions about data privacy and security.

AI-Powered Earbuds: A Lighter Alternative to Smart Glasses

One of the most noteworthy developments in this sector is the advent of AI-powered earbuds. These devices offer more than just audio playback; they come equipped with cameras designed to capture images and record videos, enabling users to enjoy a seamlessly integrated augmented reality experience. What sets these earbuds apart is their lightweight design, weighing in at roughly 50 grams. This makes them a more convenient and potentially more affordable alternative to smart glasses, which often face production and economic challenges that limit their widespread adoption.

Moreover, these AI-powered earbuds epitomize the trend of making human-computer interactions more natural and intuitive. By directly translating visual and auditory experiences into actionable information, these devices can offer real-time augmented reality applications that are both practical and immersive. Imagine navigating a new city with turn-by-turn directions being whispered into your ears or receiving instant feedback on your golf swing—all without having to manipulate a screen or don unwieldy eyewear. This hands-free, heads-up mode of interaction represents an exciting leap forward in the usability and functionality of wearable technology.

Despite these promising features, the integration of AI into such small devices brings about its own set of dilemmas. Concerns about data privacy, for example, become particularly salient as these earbuds are capable of capturing and processing detailed personal information. Ensuring robust security measures will be critical for gaining user trust and encouraging widespread adoption. Beyond privacy, ethical considerations also come into play. Questions about the potential for misuse, such as illicit recording or unauthorized surveillance, must be addressed through stringent guidelines and informed public discourse.

The Rise of Smart Clothing

While the attention of the tech world often focuses on high-profile gadgets like earbuds and glasses, smart clothing represents a less-publicized but equally significant frontier in wearable technology. These high-tech textiles come embedded with sensors that can monitor a range of health metrics, from heart rate to body temperature, providing real-time data collection that is as unobtrusive as it is informative. This makes smart clothing particularly appealing for applications in health monitoring and sports performance optimization.

The potential for smart clothing is immense. In the healthcare sector, for example, such garments could offer continuous, real-time monitoring of patients’ vital signs, enabling more proactive and personalized treatments. Athletes and fitness enthusiasts, too, can benefit from this technology.
Smart clothing can deliver insights into body performance metrics that were previously difficult to measure outside of a lab setting, helping users to fine-tune their training regimens for optimal results. The everyday consumer is not left out either, as these textiles can offer simple alerts for things like hydration levels or postural alignment, contributing to general wellness.

Nevertheless, the widespread adoption of smart clothing faces challenges that need to be addressed. One of the primary concerns is durability—how well can these garments withstand the wear and tear of daily use, including regular washing and physical strain? Additionally, integrating sensors into fabric while maintaining comfort and style is no small feat. These hurdles, while significant, are not insurmountable. With continued research and development, the promise of smart clothing could very well be realized, providing valuable benefits across various sectors.

Data Privacy and Security: The Double-Edged Sword

As wearable technology advances, the issue of data privacy and security becomes increasingly paramount. With devices collecting more and more detailed personal information, the potential risks of data breaches and unauthorized access grow. Ensuring robust data protection measures is essential for earning user trust and facilitating broader acceptance of these innovations.

To address these concerns, companies must invest in advanced encryption techniques and secure data storage solutions. Regulatory bodies may also need to step in, formulating guidelines and standards that protect consumer data without stifling innovation. Transparency with consumers about how their data is collected, used, and protected is another crucial step in building trust and encouraging the adoption of wearable technologies.

However, privacy concerns extend beyond just data security. Ethical issues also come into play, particularly around the potential for surveillance and misuse of personal data. Consumers and regulators alike must grapple with questions about who has access to the data collected by these wearables and how it is used. Establishing clear ethical guidelines and ensuring that companies adhere to them will be key to navigating these complex issues.

Pros and Cons of AI-Powered Wearables

AI-powered wearables offer numerous advantages that promise to revolutionize the way we interact with technology. Among these benefits are personalized user experiences, enhanced functionality, and improved decision-making abilities, thanks to AI’s capability to learn from user interactions and analyze vast amounts of data. These strengths make AI-powered wearables appealing across a wide range of applications, from healthcare to everyday consumer use.

On the other hand, the rise of AI in wearables is not without its drawbacks. Ethical concerns are a significant issue, as the integration of AI into everyday life raises questions about the potential for misuse and the impact on human skills. For example, over-reliance on AI for decision-making could lead to a decrease in critical thinking skills or a diminished understanding of one’s health and well-being.

Dependency on AI-powered wearables also poses risks. As users become increasingly reliant on these devices for information and feedback, the potential for malfunction or inaccuracies becomes a significant concern. Ensuring that AI systems are reliable and robust is crucial to avoiding unintended consequences and maintaining user trust.
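The encryption-before-storage safeguard mentioned above can be illustrated with a short sketch. It uses the third-party Python package cryptography (its Fernet recipe); the device name and reading are invented for the example, and a real product would add proper key management on top of this.

```python
# Illustrative encryption of a wearable sensor reading before storage or sync.
# Requires the third-party "cryptography" package: pip install cryptography
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice this key lives in a secure key store
cipher = Fernet(key)

reading = {"device": "earbud-01", "heart_rate": 72, "ts": "2024-09-19T02:00:00Z"}
token = cipher.encrypt(json.dumps(reading).encode("utf-8"))  # what gets stored

# Only a holder of the key can recover the plaintext reading.
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
print(restored["heart_rate"])  # 72
```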
Conclusion: An Optimistic Yet Cautious Outlook

The rapid evolution of wearable technology has been a remarkable journey, and the advancements we’re on the brink of seeing promise to revolutionize how we interact with computers, pushing user experiences to unprecedented heights. As we progress beyond smart glasses, the integration of artificial intelligence (AI) into wearable devices is poised to be the cornerstone of this new wave of innovation. AI-driven wearables offer vast possibilities, from continuous health monitoring to immersive augmented reality experiences.

This pivot also raises crucial issues around data privacy and security, as the more integrated these devices become in our lives, the more critical it is to safeguard the personal information they collect. AI’s role in turning wearable tech from mere gadgets into indispensable tools for daily life underscores the importance of thoughtful regulation and innovation. Whether it’s predicting health issues before they become severe or enhancing daily productivity in ways previously unimagined, the future of wearables is set to deeply influence our personal and professional lives.
https://mobilecurated.com/devices-and-hardware/whats-next-for-wearable-tech-beyond-smart-glasses-to-ai-textiles/
On a Windows machine, there are multiple ways of taking a screenshot. However, there are plenty of users who are still unsure of how to do so. Today we will review a few ways Windows users can take a screenshot on their PC.

Snipping Tool

Since 2002, Microsoft has included the Snipping Tool in its Windows machines. The tool is simple to use and allows users to capture their screen! However, Microsoft hides this tool from plain sight, leaving it unnoticed by many new users.

1. Navigate to the Start Menu
2. Type "Snipping Tool"
3. Select "New" and begin capturing your screen!

Once you have selected "New", a white filter will appear over your screen. Click and drag your mouse across the area you wish to capture. Microsoft has confirmed that in a future update, the Snipping Tool will be getting a new home!

Snip & Sketch

Snip & Sketch, very similar to the Snipping Tool, has an improved user interface (UI) and a built-in shortcut to activate it on Windows 10 machines (Windows Key + Shift + S). However, it is important to note that Snip & Sketch is still in development, so bugs may still be present in the software. To launch Snip & Sketch, either use the shortcut (Windows Key + Shift + S) or search for it in the Start Menu (Figure 1). Using the shortcut will allow you to immediately begin taking a screenshot. Searching for the program in the Start Menu will bring you to a home screen for Snip & Sketch (Figure 2). If you choose to search for Snip & Sketch in the Start Menu, the steps are the same. Simply choose "New" and begin to capture the desired area of your screen.

Third-Party Apps

While the built-in screen capture tools are nice, there are some third-party applications that I find to be much more reliable and easier to use. For example, Greenshot is a free, lightweight program that is great for screenshots. In fact, it's how I built this guide! Download Greenshot from their website: https://getgreenshot.org

Once downloaded, navigate to your system tray (the " ^ " icon on the bottom right of your screen) and locate the Greenshot application. A shortcut is also available after installing Greenshot: simply press the "PrtScn" key on your keyboard to enable immediate screen capture. You may have to press and hold the Function key ("Fn") on some keyboards in order to activate "PrtScn". Once you have clicked on the Greenshot application in your system tray, a menu will appear with a list of options. Select "Capture region" to begin your screen capture.

If any of the above confuses you, please do not hesitate to reach out to us with questions! We are more than happy to help! We have techs standing by to assist you with the day-to-day operations of your machine! If you are curious as to how we can help your company grow, click here!

Microsoft is pulling the plug on its widely loved operating system (OS), Windows 7, in January 2020. Therefore, it is important that you know how to upgrade to its latest OS, Windows 10. Why should you upgrade? Well, when Windows 7 is discontinued, Microsoft will no longer offer support or updates for the OS, leaving you vulnerable to exploits and other security risks. Those of you who are worried that Windows 10 will have a steep learning curve, fret not! Windows 10 is very similar to its older brother Windows 7. I have to admit, there are a few differences between the two, such as Windows 10's revamped settings panel and the new Start Menu.
However, if you are worried that Windows 7 and Windows 10 would be polar opposites like Windows 7 and Windows 8, rest easy; nothing could ever be that bad again. This guide aims to ease you through upgrading your OS. Once you finish these steps, you'll be well on your way to Windows 10!

Checklist: Minimum Requirements for Windows 10

If you are worried your machine may fall short of this list, think about the following question: did you purchase your computer after 2014? If you answered yes, chances are your machine is more than capable of running Windows 10. For those of you running older machines, consider purchasing a new computer. Many of them come preloaded with Windows 10, and besides the upgraded OS, you will also be greeted with higher speeds, better resolution, and an overall better user experience.

- Processor: 1 GHz or faster CPU or System on a Chip (SoC)
- RAM: 1 GB for 32-bit or 2 GB for 64-bit
- Hard drive space: existing installations need 16 GB for 32-bit or 20 GB for 64-bit; a clean installation or new PC needs 32 GB or larger
- Graphics: DirectX 9 or later with a WDDM 1.0 driver
- Display resolution: 800 x 600
- Network: Wi-Fi or Ethernet connection

(A short script after this article shows one way to check a few of these figures programmatically.)

Before upgrading, it is also important that you have the latest version of Windows 7 installed on your machine. Be sure to check for updates and to download Windows 7 Service Pack 1 if you do not already have it installed.

Also, and it pains me to say this if you have waited this long to upgrade to Windows 10, it is no longer free. However, online retailer Newegg will make the upgrade expense less painful by offering both the Home edition and Pro edition of Windows 10 at a discounted price! The cost of a new Windows 10 license goes for $109.99 (Home) and $149.99 (Pro) on Newegg.

Furthermore, make sure you have an up-to-date backup or system image of your computer before upgrading. Microsoft has certainly improved its upgrade process, but it is better to be safe than sorry. Follow this guide on Bleeping Computer to learn how to create a backup of your Windows 7 computer!

One last thing, I promise. Before you start, it is recommended that any antivirus software you have installed be removed, as this can sometimes interfere with the upgrade. Make sure that all peripherals are also unplugged from the computer while it is updating, as these too can sometimes cause errors.

Upgrading

1. Download and install the Media Creation Tool
2. Make sure you have your Windows 10 license activation code ready
3. Double-click on MediaCreationTool.exe to begin the Windows 10 upgrade setup
4. Accept the licensing agreement
5. Select the "Upgrade this PC now" option; the necessary files will begin downloading. Once finished, click Next
6. Enter your valid Windows 10 product key (Home or Pro) and select Next
7. Accept the licensing agreement
8. Select the "Keep personal files and apps" option
9. Select Install

Are you wondering what Hammett Technologies can do to continue to help your company grow? Our team of trained professionals works diligently to ensure that your network continues to run smoothly and securely! Click here to learn more about what we do!
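Hedged illustration only: the snippet below is one way to compare a PC's RAM, disk, and CPU speed against the 64-bit figures in the checklist above. It relies on the third-party psutil package (pip install psutil) plus the standard library, and the thresholds are simply copied from the table; it is not an official compatibility check (Microsoft's own upgrade tooling performs the authoritative test).

```python
# Rough self-check against the Windows 10 minimum requirements listed above.
# Not an official compatibility checker; thresholds mirror the 64-bit figures.
import shutil
import psutil  # third-party: pip install psutil

GB = 1024 ** 3

ram_ok  = psutil.virtual_memory().total >= 2 * GB        # 2 GB for 64-bit
disk_ok = shutil.disk_usage("C:\\").total >= 32 * GB     # 32 GB or larger
freq    = psutil.cpu_freq()                              # MHz; may be None on some systems
cpu_ok  = freq is not None and freq.max >= 1000          # 1 GHz or faster

print("RAM: ", "OK" if ram_ok else "below minimum")
print("Disk:", "OK" if disk_ok else "below minimum")
print("CPU: ", "OK" if cpu_ok else "unknown or below minimum")
```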
Chances are, if you have had a Windows PC, you have noticed that Microsoft loves to pre-load it with software you will never touch. This bloatware takes up unnecessary space and is a general annoyance to look at! Today, we will go over three in-house methods and one third-party software method to uninstall programs from your computer!

Using Control Panel

The tried and true method of removal is an oldie but a goodie! To get to the Control Panel, all you must do is click on the Start Menu, type "control panel", and press Enter. You should be greeted with a window in either the icon view or the category view; you can switch between both views depending on your preference using the "View by:" option in the upper right-hand corner of the window.

Now that we have cleared up any confusion, look for the button labeled "Programs and Features" and select it. Or, if you are sorting by "Category", find the button labeled "Programs" (1). Directly beneath it you will see "Uninstall a program" (2), which, if selected, will take you to the same place. Once there, you will be greeted with a full list of all the programs currently installed on the PC. Clicking on one of the programs will reveal a list of options along the top bar, one of which will allow you to uninstall the selected program! If you are curious as to what "Powerful Uninstall" is, we will touch on that when we reach the third-party uninstall application section.

Using Windows Settings

A quicker, and just as good, method for removing a program from your computer is using the relatively new Windows Settings. To view Windows Settings, open your Start Menu the same way we did before, but this time locate and select the gear! Once selected, you should be greeted with "Windows Settings"! Now look for the "Apps" button and select it. Once selected, you will once again be shown a full list of all installed programs on your computer! Simply click on one and the uninstall option should appear!

Using the Start Menu

The quickest, yet most restrictive, way of removing a program is through the Start Menu. This method only works for programs that you have pinned to your Start Menu. To use this method, first open your Start Menu and right-click on any application (in the left-hand column or the tiles on the right)!

Third-Party Software

There is plenty of third-party software out there that can also assist you in uninstalling programs from your machine, especially those pesky programs that always seem to stick around no matter how hard you try to get rid of them. One of my favorite programs is IObit Uninstaller. Completely free and easy to use, the program works similarly to the first two methods shown earlier but ensures complete file deletion. If you are having any trouble understanding or following the guide, send us a message! We are more than happy to assist you in uninstalling a pesky program. If you are a business owner, click here to find out how we can help your business grow and stay secure!

Despite Microsoft's claims that Windows 10 would be the "last version of Windows", Microsoft has officially announced that it will be retiring Windows 10 on October 14, 2025. From then on, the Home and Pro versions of Windows 10 will no longer receive new updates or security fixes. That said, it may be possible that Microsoft will allow businesses to pay for extended support for Windows 10; when Windows 7 reached end of life, Microsoft offered businesses extended support for the Professional and Enterprise editions.

With Windows 10's retirement only four years away, Microsoft has begun teasing the newest version of its operating system (OS). While the rumor circulating is that this could be Windows 11, Microsoft has yet to officially announce a name for the new operating system.
However, that is subject to change as early as next week, June 24th, when Microsoft is set to showcase its new OS, revealing "the next generation of Windows." We will keep you updated as this story progresses.

Update (6/25/21): Well, it's official. Microsoft has announced Windows 11, a free upgrade for all licensed users of Windows 10. Therefore, if you have not yet upgraded to Windows 10, now may be the time to jump ship. The good news is that there is still a free way to upgrade to Windows 10 if you have not yet left Windows 7 or Windows 8. While I recommend watching the official announcement video, allow me to give you a brief overview of the changes that Microsoft is bringing with the latest version of Windows. For starters, the Start Menu and Taskbar have undergone visual and performance changes. Live Tiles are being replaced with Widgets, which now have their own predefined space on your computer. Along with this, Microsoft has also focused on performance increases, allowing Windows to respond more quickly to user input. This is just a portion of what Windows 11 will offer. For the full breakdown, click here to watch the Windows 11 team break down all the new features of their new operating system.

Windows 10 has become quite an impressive operating system over the years. This continued improvement has benefited all users across the board, but it is important to perform maintenance on the system. By checking your system for updates regularly, you can ensure that all your peripherals and programs continue to operate properly and keep your system running smoothly! The added benefit of regularly checking for updates is that you also make sure your system remains patched against the latest exploits and bugs, helping to keep you out of the reach of hackers and other cybercriminals.

Checking for Updates

The process of checking for an update is painless, even for a novice user!

1. Open the Start Menu
2. Locate the Settings gear in the Start Menu and click it.
3. Once in Windows Settings, locate Update & Security and select it.
4. You've made it! Select Check for Updates to make sure your system is up to date!

If you are still confused about how to update your Windows 10 computer, consider reaching out to us! Our team of trained professionals can handle any issue you encounter on your computer, whether big or small! We are more than happy to assist you with updating your machine to ensure you always have the latest and greatest build of Windows 10! Wondering what we can do to help your company grow? Click here to find out more!

It's finally time to say goodbye to our old friend. In a few months, January 14, 2020, to be exact, Windows 7 will officially no longer be receiving security patches and updates from Microsoft. Therefore, if you are one of the many still calling Windows 7 your home, it may be time to think about moving to Windows 10.

Why is this Important to Me?

Many of you are probably thinking, "Why should I worry about moving to a new operating system?" The answer is security. When Microsoft pulls the plug on extended support (January 14, 2020), Windows 7 will no longer receive any critical updates, the kind that fix security holes and exploits. This means that the longer you wait to move to Windows 10, the more at risk you are of an attack.

Why Not Move to Windows 8?

If you are looking for an operating system similar to Windows 7, you should look no further than Windows 10. While there are differences between them, Windows 10 is the closer match to Windows 7.
Windows 8, on the other hand, is, for lack of a better term, a mess. The desperate attempt to mix the mobile and PC platforms was a disaster and will ultimately leave you with a sour taste, wishing for anything else. The other reason to make the jump to Windows 10 and not 8 is that Windows 8 will also lose support soon. In January 2023, extended support for Windows 8 will end, and with it will come the same security risks as Windows 7.

As we said earlier, for those of you looking to fill the void left by your goodbye to Windows 7, Windows 10 is there. If you find yourself needing assistance in migrating yourself or your company to Windows 10, please give us a call! We will be more than happy to assist you in the transition to Windows 10! To learn more about what we can do to assist your company's growth, click here!

Is your PC slow to start up? This is a common issue for many users, and the fix is more straightforward than many imagine. When it comes to Windows, applications, for seemingly no reason, set themselves up to launch when your PC is booting. While there are specific programs that you would want to launch at startup, such as antivirus software, many programs that do launch at startup are not needed, and depending on the size of these programs, the speed at which your PC boots can be significantly affected. Microsoft is aware of this, however, and has offered a remedy for this issue for some time now. Windows offers the user the ability to customize which applications launch at startup, allowing them to enable or disable the programs that run when the computer is first started. To begin customizing your startup applications, you can either go through Task Manager or Settings. Task Manager will offer you a bit more information, but both offer the same end goal: making your PC boot faster.

Using Windows Settings to Disable Startup Applications

As I stated earlier, those of you who go through Windows Settings to customize your PC's startup applications will have a more basic experience but will ultimately achieve the same end goal: a quicker startup. In order to navigate to this menu, follow these steps:

1. Locate your Start Menu: this will be in the bottom left-hand corner of your screen
2. Locate and click the Settings gear
3. Upon clicking the gear, you will be taken to the "Windows Settings" page. From there, locate and select "Apps."
4. Locate and select "Startup."

If you have made it this far, take a second to accept the round of applause, because you have successfully navigated to the correct page! All right, that's enough celebrating. From here, you will be able to select which apps you wish to enable and disable at startup. You may notice that under the "On/Off" switch there is an "Impact Indicator." This is a measurement of the approximate impact the application has on startup. When deciding which applications to disable, look at the ones that have the most substantial impact on startup first, because they yield the most significant performance increases if disabled.

Using Task Manager to Disable Startup Applications

If you are looking for a little more information regarding your system's boot time and the applications running at startup, the Windows Task Manager is the best place to be! It allows you to quickly research applications you are unfamiliar with, making it easier to decide which apps can be disabled and which are better left alone. To get to the Task Manager, right-click on any empty space on the taskbar.
In the popup menu, click on "Task Manager" (third from the bottom). If the compact view appears, click on "More details." Once you have the Task Manager open, navigate to the Startup tab. From here you can see all the applications that launch when your computer starts. (A short script after this article shows one way to list these same entries programmatically.)

On the surface, Task Manager appears to be quite similar to Windows Settings. However, if you right-click on an application in Task Manager, you can gain further insight into what the application is. A right-click allows you to disable or enable an application, navigate to its file location, search online for more information about the program, and inspect the application's properties.

Adding a Boost to Startup

Now that you know how to disable startup applications, get to work! If your PC takes a long time to boot, the culprit may be a few high-impact applications launching when your computer first starts. However, make sure you research an application before disabling it. Some applications, like the "Sound Blaster Control Panel," I use for better audio control on my computer; for my convenience, I leave it enabled, even though it has a moderate impact on startup! Make sure you understand what you are disabling before you do so, or your PC may encounter slight errors when booting. If you have any questions, do not hesitate to reach out to us!
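For the curious, here is a hedged sketch of how one common set of per-user startup entries can be read with Python's standard-library winreg module. It only covers the current user's "Run" registry key; Task Manager aggregates several other startup sources that this sketch does not touch.

```python
# List per-user startup entries from the HKCU "Run" key (Windows only).
# This is one of several startup locations; Task Manager shows more of them.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    value_count = winreg.QueryInfoKey(key)[1]   # number of values under the key
    for i in range(value_count):
        name, command, _type = winreg.EnumValue(key, i)
        print(f"{name}: {command}")
```

Removing a value from this key (carefully, and only after researching the program) generally prevents it from launching at sign-in, though the graphical tools described above remain the safer, more reversible way to manage startup entries.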
https://www.hammett-tech.com/tag/windows/
Educational 'Technology' Across the Ages

Twenty-five years ago, when computer instruction in math was just breaking out into general use, Rand Corporation reported that instructional software was excellent for supervising math fact drills and teaching students basic procedures, but much less effective at teaching beginning algebra and everything beyond it. Rand's project found that elementary algebra is the first level at which a major part of math success is choice of strategy. Take, for example, this very simple quadratic equation:

6x² + 12x + 48 = 3(4x + 48)

This has at least six workable strategies for a solution: a fast in-the-head method, an excruciatingly slow process of resolving to normal format and grinding the answer out of the quadratic formula, and at least four others. (Two of these are worked through in the short sketch at the end of this article.) The best strategy for this problem might not be right, or even possible, for another.

In algebra, and even more so at higher levels, the ability to decide which strategies are available and probably best for each problem pretty much defines proficiency. But the Rand report found that in 1988, most educational software taught nothing about choice of strategy, though the same report demonstrated that a strategy-teaching program was possible, even then.

In the decade that followed, educational psychologists, notably Jane Healy, showed that modern childhood environments failed to stimulate executive function. It was another, bigger piece of the same problem Rand had found. Executive function is the part of the mind that plans, follows, assesses and re-plans a pathway through a complicated process. It's the difference between following a recipe and cooking from scratch, painting by the numbers and painting, or running a checklist and fixing a motor. It's essential for all applied math above the most basic level, as well as for critical thinking and everyday reasoning. Developing executive function by giving students a rich experience of choosing and using problem-solving strategies might be the single most important thing that required math courses do for students.

How far along are we in dealing with this problem, a quarter century after Rand found it?

The bad news: Most algebra-learning software out there is still just textbook/lecture material -- although with animation and graphics -- and the self-evaluation is mostly just checking an answer for right or wrong. That's what's easiest to write and program, and many companies and programmers have never gone further.

The good news: Some of the best software now teaches parts of strategy and executive function. Parents and teachers shopping for this kind of software can choose from three categories. I call them hinters, executors and steppers:

-- Hinters: These offer a strategy hint with each problem. They at least make students aware that there are strategies, and that your choice of them matters. Among the "hinters" I examined, I thought MathTutor offered the best suggestions.

-- Executors: These go a step further by asking the student to input a problem from a textbook, handout or other program, and then choose a strategy from a list. The software then follows that strategy to write a perfect show-your-work homework answer.
Online, Webmath has a pretty friendly interface, although it's put together from many different sources and therefore varies between detailed exposition of steps for some situations and simple, less-than-helpful demonstrations of others. A more consistent but sometimes less-thorough executor, available as a download or CD, is Bagatrix's Solved!. This program demonstrates how to attack algebra by a user-specified method for a user-specified goal.

-- Steppers: These not only enable strategy selection but also let students verify each step sequentially, encouraging them to try on their own rather than just copy perfect homework. Softmath's Algebrator is a delightfully well-designed example of this.

The good-bad news: There is still very little software that specifically teaches the most important aspect of developing executive function: how to choose strategies. Hinters are generally limited to one hint and to problems they have on hand, while executors and steppers make all the choices except the initial choice of strategy for the student, and offer little guidance for that first one. A good human math tutor sitting at the student's elbow could use any good hinter, executor or stepper to teach how to choose strategies, but right now, the student still needs that tutor for this crucial step.

Yet there's no reason this instructional software shouldn't already be common. A quarter of a century ago, Rand's demonstration program taught some strategy, although imperfectly. Today's commercial chess and backgammon teaching programs, running on cheapie tablets, do a better job of coaching on different strategies than math software does. For years, math teachers have written problem sets where a student must choose and justify a strategy; that's what my college calculus teacher did. All that's missing is the inspiration to envision the program, the will to carry it out, and then letting teachers know it's available. Designers and entrepreneurs, get busy!

This column was originally published on UBM's Educational IT site.
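To make the strategy point concrete, here is a small sketch that works the example equation from the top of this article two ways: the fast observation that the 12x terms cancel, and the slower grind through the quadratic formula. It is an illustration of strategy choice written for this article, not a reproduction of any product named above.

```python
# Two strategies for 6x^2 + 12x + 48 = 3(4x + 48), worked explicitly.
import math

# Strategy 1 (fast, in-the-head): expanding the right side gives 12x + 144,
# so the 12x terms cancel, leaving 6x^2 = 96, i.e. x^2 = 16.
fast_roots = sorted((-math.sqrt(16), math.sqrt(16)))

# Strategy 2 (slow but general): move everything to one side to get
# 6x^2 + 0x - 96 = 0, then grind through the quadratic formula.
a, b, c = 6, 0, -96
disc = b**2 - 4*a*c
formula_roots = sorted(((-b - math.sqrt(disc)) / (2*a), (-b + math.sqrt(disc)) / (2*a)))

print(fast_roots)     # [-4.0, 4.0]
print(formula_roots)  # [-4.0, 4.0] -- same answer, very different effort
```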
https://www.informationweek.com/it-sectors/problem-of-math-educational-software-needs-solution
Zero Trust (ZT) is an information security framework that adheres to a "never trust, always verify" principle with respect to network access, even by once-trusted users and devices. In this model, trust isn't automatically or implicitly granted; it is continually and rigorously evaluated.

When applying a zero trust model, a private network would seek to have absolute knowledge and control over the users and devices attempting access. This requires stringent policies and processes — as well as zero trust architecture (ZTA) — which NIST defines as a "cybersecurity plan that utilizes zero trust concepts and encompasses component relationships, workflow planning, and access policies." This starts with defining the environment to be protected, cataloguing relevant users, devices and workflows, and setting forth prevention policies while matching solutions to them, followed by granular network monitoring.

Technologies that contribute to achieving a zero trust-like standard include well-defined, robust identity and access management (IAM) resources. These would most certainly include, at a minimum, multi-factor authentication (MFA), as well as practices like micro-segmentation and granular perimeter enforcement.

Zero trust is an ambitious goal to meet, some say unrealistic, especially in light of many more workers accessing private networks remotely. At a minimum, zero trust introduces (or reintroduces) questions about the inverse relationship between security and usability. This may well be the case if the solutions that comprise the necessary architecture are legacy ones. If pursued recklessly, then, zero trust may set the enterprise backward in its long-term pursuit of convenience alongside security.

"Our company is transitioning to a zero-trust security model. There are many requirements and moving parts to achieving this vision, but it's a top priority for our IT and Infosec Leadership."
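The following toy sketch illustrates the "never trust, always verify" idea in code: every request is re-evaluated against identity, MFA status, and device posture, and anything not explicitly allowed is denied. The policy fields and the allow-list are invented for the example; they are not NIST's model or any vendor's product.

```python
# Toy per-request zero trust evaluation: deny by default, verify every time.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool
    device_compliant: bool   # e.g., managed, patched, disk-encrypted
    resource: str

# Least-privilege allow-list: (user, resource) pairs that are explicitly permitted.
ALLOWED = {("alice", "payroll-db"), ("bob", "wiki")}

def evaluate(req: AccessRequest) -> bool:
    # No implicit trust: each condition is re-checked on every single request.
    if not req.mfa_passed:
        return False
    if not req.device_compliant:
        return False
    return (req.user, req.resource) in ALLOWED

print(evaluate(AccessRequest("alice", True, True, "payroll-db")))   # True
print(evaluate(AccessRequest("alice", True, False, "payroll-db")))  # False: device posture failed
print(evaluate(AccessRequest("bob", True, True, "payroll-db")))     # False: not authorized
```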
https://www.hypr.com/security-encyclopedia/zero-trust
The dictionary defines emergency as "an unexpected and sudden event that must be dealt with urgently." It defines preparedness as "readiness for action." So, emergency preparedness is being ready for sudden, unexpected events. Unless there's a crystal ball handy to predict the future, how does one prepare for the unexpected? The key is to expect the unexpected and to focus on the symptoms vs. the causes.

The first task should be to identify the potential hazards that a site is reasonably expected to be exposed to based upon geographical location. A site in Chicago should consider ice storms and blizzards whereas a site in Miami should consider hurricanes and hail. A site in southern California should consider earthquakes and wildfires whereas a site in Kansas City should be concerned with severe winds and tornados. There are good reasons that site selection and due diligence are given the importance they are when considering where to build or purchase a critical facility.

CATEGORIZING THE HAZARDS

Next is to categorize the hazards or "events" into two categories. These would be events that typically occur with 24 hours or longer advance notice and those that could have less than an hour advance notice. Hurricanes can be predicted days before they reach a given location, whereas tornados may give little to no advance notice whatsoever.

Most disasters that occur with 24 hours of advance warning also tend to affect a large region. These events can incur widespread destruction and disruption of critical off-site services that can take days or weeks (or in extreme cases, months) to restore, such as the electric power grid, municipal water systems, communication systems, and roads and bridges. On the other hand, most disasters that occur with little or no warning typically affect a much smaller area. Tornados; severe thunderstorms; wind, hail, and localized flooding; and hazardous gas or material spills are good examples. As destructive as these events may be, the relatively small area affected allows for a much quicker response and recovery of affected critical services, typically within 24 to 48 hours. However, some events such as a major earthquake have the potential to occur without notice and affect wide areas.

So far, the hazards discussed are somewhat static in nature, in that hurricanes, earthquakes, tornados, and river floods can be assessed and planned for in advance and chances are the threat doesn't change much over time. Some hazards are more dynamic. The risks associated with disaster events such as transportation accidents releasing hazardous materials, construction related hazards (both on-site and off-site), and terrorist attack-related events should be re-evaluated on a periodic basis or when the likelihood of initiating conditions changes.

Trying to create unique procedures and processes for each and every situation is not necessary. This is where the symptoms must be treated, not the cause. The first step is to identify which systems, resources, and services are critical to sustaining operations, and then determine how these are exposed to risk by the events identified above. Some obvious criticalities are power (off-site utility and on-site generation), water (domestic "city" water and on-site storage and/or wells), and of course staff (on-site staff but also the fuel oil supplier's staff, etc.).

LONG-TERM VS. SHORT-TERM

Back to the two categories: long-term events with advance notice vs. short-term events with little or no warning. Let's discuss power as an example.
The first step should be to assess the reliability of the utility service. Is there one feeder or two? Are they from separate off-site substations or the same? Is the site close to the point of generation or way down the line? Are the feeders overhead or buried under ground? Regardless, most sites will provide some on-site generation to allow site power autonomy when the utility power eventually fails. For on-site power generation, ensure that at least a minimum amount of fuel oil is available to carry the load for the short-term event. This is frequently set at 24 hours with the expectation that events that last longer allow time to contact a fuel oil delivery service to have additional fuel oil provided when necessary. There is a shortcoming with this strategy. What happens when the event affects a larger region? Large scale competition ensues as everyone with a diesel generator starts calling for emergency fuel oil deliveries. What if the fuel delivery service doesn’t have power to pump the fuel into the tanker trucks? What if the roads are flooded between the fuel oil supplier and the site? What if the delivery truck driver stays home handling his own personal disaster? What if the fuel oil delivered is of poor quality? Part of good emergency planning is to eliminate as many “what ifs” and take as much control of the situation as possible. In this case, there are a number of proactive steps that can help. First is to have a formal service level agreement (SLA) in place with a reputable fuel oil supplier who also provides delivery service (eliminate the middle man if possible). The SLA should include guaranteed delivery including priority over other potential clients. Better still is to have multiple SLAs just in case a potential supplier becomes incapable of meeting the contracted commitments. These separate suppliers should be in separate locations with independent paths to the site so one road or bridge closure doesn’t impact all deliveries. For potentially large, and therefore, long-term events, consider pre-staging extra tankers on-site so runtime before the first delivery is extended. This means having a designated pad to locate the tankers that affords the ability for on-site staff to transfer the fuel oil to the permanent bulk oil storage tanks. This also means the tankers should include transfer pumps or the site has to have its own suitable capabilities. Imagine how frustrating it would be to have the needed fuel oil on-site without the means to make use of it! And in case of poor fuel quality, each generator should have 100% capacity, redundant duplex fuel oil, and water separator filters. All of this great planning will be for naught unless there are trained and competent personnel on site for the duration of the event. This takes serious advance planning as well. If the event is of such magnitude and severity (e.g., Hurricanes Katrina and Sandy) as to last more than a few days then it should be assumed site staff will have family, homes, and other personal concerns that compete with the site for attention. Even the most dedicated and loyal employee has to be able to care for his home and family. It becomes imperative that employers address these concerns to the extent possible in advance. The site should be stocked with provisions sufficient for staff to essentially live on site when required. Safe transportation should be provided to shuttle staff home and back (e.g., four-wheel drive trucks with chains during blizzards). 
Hotel accommodations should be made for essential staff's family, especially if special needs come into play.

ESTABLISH STANDARD OPERATING PROCEDURES

With regard to planning for the truly unexpected, unanticipated emergency that affords little or no advance warning, that's where standard operating procedures that include emergency operations and response, coupled with staff training and drills, provide the best protection. Emergency procedures aren't just for equipment or systems. They should include "severe weather/high wind" procedures. This could include having security monitor the emergency weather service for warnings and watches. Activation of the procedure could include establishing a "stand-down" posture where all critical work is halted or deferred until the situation clears. Consider opening a "war room" where site operations management monitors the site infrastructure, places calls to vendors and contractors to be ready to mobilize if required, and dispatches on-site staff to police the site and ensure that doors are closed and sealed (and diked if prudent), that potential wind-blown projectiles (discarded sheet metal, plastic, etc.) are secured from blowing into outdoor louvers, cooling towers, etc., and that roof hatches are locked and secured. Other such procedures could include "flooding/roof leak," "HAZMAT event," "severe outdoor contamination (wildfire, dust, ash, fumes, etc.)," and other viable events.

It is important to note of course that in many of today's critical operations, mission critical redundancy is provided by IT redundancy (virtual redundancy). Some sites have low-latency, mirrored-redundant processors in sites within relatively close proximity. If both sites are in jeopardy, operations must be transferred to a remote site that is not in the same geographical area. Business continuity plans should be in place and tested annually to transfer all critical operations to the remote site until the event has passed and normal operations are restored.

A good practice is to have a flood response container that includes a wet-vac, plastic tarp, absorbent socks/booms, flashlights, duct tape, clean rags, etc. Larger sites may have multiple kits staged in strategic locations. The contents should be inventoried and checked routinely as a standard preventive maintenance task to ensure availability when needed. Similarly, other pre-packaged emergency response kits should be maintained as appropriate. Most sites have a HAZMAT spill response kit, but you can also have kits for emergency lighting (small generator, approved gasoline container, light "trees," etc.) and an outdoor contamination kit (activated carbon filters, temporary pre-filters, or a temporary means to seal outdoor intakes and operate the site HVAC in a total recirculation mode).

Finally, whatever preparations and planning are put in place should be supported by formal staff training and real-life drills. These plans must be tested and verified as safe and effective before being relied upon. Emergencies are by their very nature high stress events. Staff should have confidence that the plans and procedures work based on practice and demonstration. Effective disaster planning and training builds a more effective operations staff and strengthens the interdepartmental communications between facilities, IT, and other business units as a whole. The site staff develops a bigger picture of how the facility systems work together and how to protect the enterprise.
This planning allows for a more effective response to day-to-day operations and to the truly unpredictable emergency event. The author would like to acknowledge Ethan Thomason, vice president at Primary Integration, for his help with this article.
https://www.missioncriticalmagazine.com/articles/85941-data-center-emergency-preparedness
Federal Trade Commission enforces security and privacy practices to safeguard U.S. consumers and businesses

The U.S. Federal Trade Commission was created on September 26, 1914, when President Woodrow Wilson signed into law the Federal Trade Commission Act. The FTC opened its doors about six months later, on March 16, 1915. The FTC's primary charters are the Federal Trade Commission Act of 1914 and the Clayton Antitrust Act of 1914. The mission of the FTC is to protect the public from deceptive business practices and unfair methods of competition "through law enforcement, advocacy, research, and education."

The FTC is organized into eight divisions: Privacy and Identity Protection, Consumer & Business Education, Advertising Practices, Marketing Practices, Financial Practices, Consumer Response & Operations, Litigation Technology & Analysis, and the all-important Enforcement division—because regulations don't mean much without some sharp teeth.

Scope of FTC Governance

The FTC plays a vital role in ensuring the security and privacy of personally identifiable information that is collected, processed, and stored by organizations in virtually every area of commerce. Among the organizations governed by the FTC Act are those dealing in alcohol, tobacco, appliances, automobiles, clothing, textiles, jewelry, finance, franchises, real estate, mortgages, non-profits, and certain other commercial enterprises. The FTC's role in cybersecurity and privacy is similar to that of the HHS Office for Civil Rights, which enforces compliance with the HIPAA Security and Privacy Rules.

FTC Role in Protecting Privacy and Identity

The FTC Division of Privacy and Identity Protection oversees "issues related to consumer privacy, credit reporting, identity theft, and information security," according to its webpage. It enforces the statutes and rules within its jurisdiction, engages in outreach and policy development, and educates consumers and businesses about emerging privacy, credit reporting, and information security issues. This division researches and reports on privacy and security issues, and provides online assistance for victims of identity theft. Following are four laws enforced by the Privacy and Identity Protection division.

- FTC Act, Section 5. Prohibits unfair or deceptive acts or practices, including deceptive statements and unfair practices involving the use or protection of consumers' personal information.
- Gramm-Leach-Bliley Act. Requires financial institutions to ensure the security and confidentiality of customer information, notify customers of their information practices, and provide customers an opportunity to instruct institutions not to share their personal information with certain non-affiliated third parties.
- Fair Credit Reporting Act. Ensures the accuracy and privacy of information maintained by credit bureaus and other consumer reporting agencies and gives consumers the right to know what information is being shared about them with creditors, insurance companies, and employers.
- Health Breach Notification Rule. Requires certain businesses that are not governed by HIPAA rules to notify their customers of any data breaches affecting unsecured, individually identifiable electronic health information.

An Active Enforcement Process

The FTC uses a variety of tools to protect the privacy of customer data.
Its primary method is to bring enforcement actions "to stop violations of the law and require companies to take affirmative steps to remediate their unlawful behavior." Remediation requirements may include implementation of comprehensive privacy and security programs, expert independent assessments every two years, compensation to consumers, return of illegal profits, deletion of illegally obtained consumer information, and other remedies.

Two primary actions form the backbone of the FTC enforcement process: the Administrative Complaint and the Final Order.

- Administrative Complaint. The FTC issues an Administrative Complaint when it has "reason to believe" that the law has been or is being violated, and it appears to the Commission that a proceeding is in the public interest. Complaints are often the result of input from consumers or businesses who have been victimized by scams or other fraud.
- Final Order. When the FTC issues "a Consent Order on a final basis, it carries the force of law with respect to future actions. Each violation of such a Final Order may result in a civil penalty of up to $50,120." In some cases, before being finalized the order may be modified as new facts emerge or a situation evolves.

In all of its privacy and data security work, the FTC goal is to "protect consumers' personal information and ensure that consumers have the confidence to take advantage of the many benefits of products offered in the marketplace."

How Victims Get Their Money Back

One of the FTC's roles is to enable refunds to consumers who have been deceived or defrauded. As just one of countless examples, in April 2023 the FTC announced a $1.1 million consumer refund. The court ruling in this case authorized the FTC to send 41,934 checks, totaling more than $1.1 million, to consumers who were victimized by bogus "free trial" offers for tooth whiteners and other products from RevMountain LLC, Anasazi Management Partners, and 59 related corporate defendants. Although the average value per check was slightly more than $26.40, the refund signals that no fraud or deceit of a U.S. consumer is too small to be penalized.

In 2022, the FTC sent the first payments to more than 224,000 distributors of AdvoCare products who were defrauded in an illegal pyramid scheme operated by AdvoCare. The initial payment totals $149 million, for an average check value of $665.15, with more reportedly on the way. As the FTC Refunds chart indicates, in 2022 alone almost two million individuals had cashed their FTC payments.

Active Pursuit of Violators

The FTC takes its protective role very seriously and has opened hundreds of privacy and data security cases in the past few years. Below are just two examples of the ten total FTC Administrative Complaints—related specifically to data privacy and security violations—that were resolved or modified in 2023 and 2022.

First Case Ever Brought Against a DNA Testing Company

On June 16, 2023, the FTC announced that genetic testing firm 1Health.io had failed to protect the privacy and security of DNA data it was entrusted with. The company had been known as VitaGene, Inc. before changing its name in 2020, after an independent researcher exposed the company's poor security.
The FTC Complaint charges that the company (1) left sensitive genetic and health data unsecured, (2) deceived customers about their ability to have their data deleted, and (3) retroactively expanded the types of third parties it shares individuals' data with to include supermarket chains and nutrition and supplement manufacturers, without notifying consumers who had previously shared personal data with the company or obtaining their consent to share such sensitive information.

Under the terms of the settlement described in the FTC's proposed Final Order, 1Health must meet these requirements:

- Pay a $75,000 penalty (which the FTC will add to its budget to support various consumer refunds);
- Strengthen protections for genetic information and instruct third-party contract laboratories to destroy all consumer DNA samples that have been retained for more than 180 days;
- Ensure that any company who purchases all or parts of 1Health's business agrees, by contract, to adhere to these same provisions;
- Notify the FTC about incidents of unauthorized disclosure of consumers' personal health data; and
- Implement a comprehensive Information Security Program addressing the security failures cited in the Administrative Complaint.

The required remediation actions address cybersecurity, privacy, and identity protection—issues that fall clearly within the domain of the Federal Trade Commission and its Enforcement division.

Recurring FTC Cases Against Twitter

In May 2022, the FTC charged Twitter, Inc. with deceptively using account security data for targeted advertising, citing that the company asked users to provide phone numbers and email addresses to protect their accounts, but then allowed advertisers to use this data to target specific users, at Twitter's profit. This deception violated a ten-year-old FTC order, from 2011, that explicitly prohibited the company from misrepresenting its privacy and security practices. A modified order and proposed settlement were announced in June 2023, in which Twitter would be required to pay a $150 million penalty and be banned from profiting from data it collects deceptively. Among numerous other requirements, the proposed settlement also mandates that Twitter implement a Privacy and Security Program "to protect the privacy, security, confidentiality, and integrity of the data it collects, maintains, uses, discloses, or allows access to."

At a high level, the Privacy and Security Program requirements include:

- Document in writing the content, implementation, and maintenance of the Program;
- At least once every calendar quarter, provide the written program and any updates to the board of directors, governing body, or senior Twitter officer responsible for the Program;
- Designate a qualified employee(s) to coordinate and be responsible for the Program;
- At least once every 12 months, and promptly following the resolution of a security incident (within 90 days of discovery), assess and document the internal and external risks to the privacy, security, confidentiality, or integrity of data that could result in its unauthorized collection, maintenance, use, disclosure, alteration, destruction, or provision of access, or the misuse, loss, theft, or other compromise of the data; and
- Design, implement, maintain, and document safeguards that control the material internal and external risks outlined in the preceding item. Each safeguard must reflect the volume and sensitivity of data that is at risk, and the likelihood that the risk could be realized.
The settlement includes pages of highly detailed requirements supporting these five mandates. This 2022 charge remains open and the case is ongoing. In light of recent ownership changes at Twitter, and the resulting organizational housecleaning, it will be interesting to learn how this case is finally resolved. Any agreed resolution will be posted on the FTC website.
The FTC role in cybersecurity and data privacy is a crucial one, not just for U.S. consumers but also for U.S. businesses. The Federal Trade Commission works steadily and quietly, investigating consumer complaints against deceitful or fraudulent businesses. The FTC files charges in the form of Administrative Complaints and settles violations in its Final Orders. Of vital importance, the FTC is empowered to impose civil monetary penalties upon violators and also to mandate remedial actions that strengthen their cybersecurity and privacy safeguards for individually identifiable consumer information. Businesses that are subject to the FTC Act, the GLBA, and certain other laws should familiarize themselves with the details of those regulations to ensure they are adequately protecting customer data. A security risk assessment is always the best way to begin.
Learn more about cybersecurity by becoming a sponsor of Cybersecurity Awareness Month. This October marks the 20th year the National Cybersecurity Alliance has promoted cybersecurity in this manner. Join 24By7Security and thousands of other organizations in supporting this vital initiative!
By: Manish Bhuptani, Shahram Moradpour (February 10, 2005)
From the Inside Flap
RFID Field Guide: Preface
As the number of devices attached to the Network has grown exponentially, so has the value of the Network and the benefits of being attached to it. First, thousands of mainframes and mini computers shared business data. Then it was millions of PCs; then tens of millions of mobile phones and handhelds—spawning even more high-value networked services. Today shared applications and services are routine, but the revolution is just beginning. Soon, billions of devices—each with their own digital heartbeat—will connect to the Network. Many will utilize a single, powerful technology: Radio Frequency Identification (RFID).
RFID tags will be found embedded in everything from cereal boxes to prescription medicines to parts of an aircraft, and a variety of other machinery. These tags, when in proximity of the right type of sensors, will broadcast information about the objects they are embedded in—dimensions, whereabouts, identification number, the history of temperatures they were exposed to, and many other static and dynamic characteristics. Many sensors located in hospitals, manufacturing plants, stores, or automobiles will collect this data, aggregate it, and route it to various humans—as well as decision-support systems. The benefits derived from offering services based on such information will be tremendous. Businesses will run more efficiently and consumers will experience better and innovative services.
For example, instead of a grocery store losing sales because of consumers not finding meat in stock, the RFID tags in the meat packages being bought will tell the store's in-house sensors that the shelves are more than half empty, triggering a re-order to the supplier. The supplier, armed with the latest information about the location of his meat shipments (thanks to RFID-based pallets used in trucks connected to the central facility via a GPS system), will direct the nearest available shipment to the store. As the truck carrying the goods is getting unloaded at the loading dock, the RFID tags in those boxes will alert the store's inventory system, which in turn will alert the stocking clerk to get ready to stock the shelves. The tags will also have the data about the temperature the boxes were exposed to in transit. If the refrigeration system in the truck was not working properly, exposing some of the packages to higher than recommended temperatures, the tags will help the store clerk identify and separate out packages containing spoiled goods. The time saved due to automatic detection of low stock levels and corresponding delivery means that the grocery store would not run out of meat—increasing profits. Detection of possibly spoiled goods means that the customers would not have to suffer the consequences, averting a potential health disaster and liability for the store.
That is the promise of RFID. But how much will ultimately prove out, and how much will be revealed as hype? How can a simple RFID tag make all this possible? What should you, the reader, be doing to embrace this phenomenon of an "RFID-enabled world"? How does this RFID technology work? What types of applications are possible and who is adopting the technology? What are the drivers and barriers to adoption? What is the next step for an organization trying to figure out how to proceed forward with an RFID deployment? We are sure many such questions have come to your mind by now.
These are the questions we hope to answer in this book. In our current roles, we get a bird's eye view of many new technologies and new applications of existing technologies. Our jobs as Market Developers for Sun Microsystems have exposed us to everything from the latest software being developed at the tiniest software start-ups to the business and IT needs of the largest Fortune 500 companies. We have seen many technology innovations that were high on promise and low on substance. We have also met many vendors who flocked to capitalize on those innovations, only to fail, as there was no sizable revenue or business model. At the same time, we have met many customers who routinely use cutting-edge technology as a competitive weapon to strengthen their business. As we worked with companies that promised to apply RFID to solve complex business problems and customers who looked at this technology to help them leapfrog the competition, we realized that some companies selling or implementing RFID haven't sufficiently addressed these questions—placing themselves and their customers at risk of failure. At the same time, some early adopters were gaining valuable insights and benefits from the deployment of this technology. We asked ourselves how a company looking at understanding and implementing this technology could make an informed decision and take action. This book should provide the answer.
This book is not a theoretical treatise on competitive advantage, although it does point out examples of how companies can gain competitive advantage from RFID deployments. Nor is it a technical manual providing code samples, although it does go into a fairly detailed technical discussion of the fundamentals of RFID. This book is a field guide for the practitioner. A practitioner could be a business person, a technical person, or a person wearing both hats. It could be a senior executive trying to separate reality from the hype surrounding RFID and wondering if this technology can give him a leg up on the competition. Or, it could be a plant manager trying to figure out sourcing and production issues involved in applying RFID tags to an item in production. In this book, all of them will find real-life examples of RFID deployments, issues related to people, processes, and technology, and tips for making an RFID deployment successful.
The book is organized into three sections. The first section explains RFID technology by providing its history, its components, and a perspective on what it could do for you. Since no technology can succeed and proliferate unless it helps businesses meet one or more of their primary economic needs (reducing cost, increasing revenue, and providing competitive advantage), we also provide some examples of RFID usage and its benefits to businesses and end users in this section. The second section explains how you can leverage RFID in your organization. RFID standards, an RFID analysis and deployment framework, cost-benefit considerations, and the RFID vendor landscape are explained here. It outlines a holistic approach to doing an RFID project that can harness this complex technology for achieving real business benefits. Although one might think that putting an RFID tag on a box is not a complex task (which, by the way, is true—it takes only 10 seconds for an assembly worker to put a tag on a box and pass it along, an activity known as "slap-n-ship"), unless that process has been thought through, you are not likely to see a lot of benefits from tagging an item.
The challenge is not in applying the tag to an item, but in re-thinking existing business processes or creating new ones to fully leverage the powerful, real-time data collection capability offered by RFID. RFID also brings with it a new set of challenges. For example, how to process all the data generated by billions of tags in the supply chain; how to filter the processed data; and how to integrate the filtered data into existing systems and processes to increase benefits. The framework and tools we provide in the second section help you think through such issues pertinent to your environment.
The third section looks at the path ahead of us. It explains how external factors, such as mandates, legislation, regulations, political interest, and consumer concerns (such as security and privacy), can affect a technology's proliferation. It also provides a high-level view of the trends surrounding RFID deployment—from trends in tag design to the invention of new business models.
The book is designed to cater to various types of practitioners. Some may be interested in reading the entire book first to get a comprehensive understanding of the technology. Others may simply want an answer to their specific questions. To balance the needs of both types of readers, we have put a section at the beginning of each chapter titled "Five Questions You Can Expect to Get Answered in This Chapter." Advanced readers will find these questions quite useful in determining the type of information covered in the rest of the chapter. For example, want to find out about the different types of tags and the frequencies they operate at? Turn to Chapter 3. Confused about RFID standards between the U.S., Europe, and China? Turn to Chapter 4 for clarification. Want to educate your CFO on why he should care about RFID? Turn to Chapter 7 for a discussion of the short-term and long-term benefits of RFID, and advice on developing a cost-benefit analysis. Want to learn about the emerging trends in RFID? You will like the discussion in Chapter 11.
We hope that you walk away from this book with a better appreciation for the technology as well as a practical understanding of how to make RFID work in your business.
Shahram Moradpour
Ransomware attacks have become increasingly prevalent since 2005, and they have had crippling effects on the companies they hit, with massive consequences. Today, ransomware attacks are colossally complex and advanced, causing catastrophic side effects for millions of users around the globe. Cybercriminals have become highly proficient, as they now have access to sophisticated tactics and advanced technology, making it easier to obscure their identities, complicating the encryption even further, and allowing them to demand huge sums for ransomware removal. This ransomware removal is a huge monetary burden for the affected companies.
The attackers take advantage of their anonymity and hide behind their systems without fear of being reprimanded. This has turned cybercrime into an extremely lucrative business. The attackers are able to earn billions every year through this offense. However, we cannot deny the fact that it is a challenging task to catch and prosecute these hackers due to their anonymity. The universal nature of the problem has alerted law enforcement and security agencies, which are diligently working to tackle the issue in an effort to reduce the level of crime.
There is no doubt that not just ransomware removal but also tracing the attackers is an arduous and extensive task that requires a lot of in-depth investigation and collaboration from the security agencies. However, it is not impossible to catch these hackers. It might seem an overwhelming mission, but at the end of the day, no matter how complex the attacks may become, there is still room for mistakes, as hackers are also human. A minor slip-up by the hacker can reveal traces of evidence for the security agencies. It is these minor errors that play a significant role in identifying and catching the criminals. For example, something as trivial as a slight spelling error in an email or a financial transaction can point to criminal activity. Another cue can be revealed by the type of malicious software used to attack the system, which can serve as evidence for security officials to track down the origins of the attack. In many cases, such as the Bangladesh Bank attack of 2016, it is easier to decipher the code and decryption keys when the same software has been used in other incidents as well, since the samples are often identical.
Security officials have also come up with an effective model to track down and catch hackers, known as honeypots. A network-attached system is set up which appears to be a legitimate system to attackers, and thus a potential target for an attack. However, these decoy computer systems contain applications that can track the attack. Hackers are enticed by this and might end up infecting these systems, thereby revealing insights into their operations. These dummy attacks are very beneficial in providing substantial information that is otherwise very difficult to acquire.
It cannot be refuted that catching hackers is a very challenging task, but if conscientious efforts are put in by the authorities to investigate the crime and reach its roots, nothing is unachievable.
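The honeypot approach described above can be illustrated with a small sketch. This is a hypothetical, minimal example rather than a production honeypot: the port, fake banner, and log file name are arbitrary choices made here for illustration. The idea is simply a decoy listener that accepts connections, records the source address and whatever the visitor sends, and holds no real data.

```python
# Minimal decoy listener: logs every connection attempt and whatever data arrives.
import socket
import datetime

HOST, PORT = "0.0.0.0", 2222        # arbitrary decoy port chosen for this sketch
LOG_FILE = "honeypot.log"

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    while True:
        conn, addr = server.accept()
        with conn:
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")   # fake banner so the decoy looks real
            conn.settimeout(5)
            try:
                data = conn.recv(1024)                  # capture whatever the visitor sends
            except socket.timeout:
                data = b""
        with open(LOG_FILE, "a") as log:
            log.write(f"{datetime.datetime.utcnow().isoformat()} "
                      f"{addr[0]}:{addr[1]} sent {data!r}\n")
```

Production honeypots record far richer telemetry and are carefully isolated from real assets, but even a decoy this simple shows how every connection attempt becomes a piece of evidence for investigators.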
I came across an interesting post on Twitter the other day (https://twitter.com/suffert/status/567486188383379456) that depicts a sidewalk with a sign indicating what wasn't allowed on the sidewalk. You have seen these before: NO bicycles, skateboards, rollerblades, roller skates, scooters. In the information technology sector, this is known as a black list; a list that defines what is NOT allowed or permitted. You can see black lists all over the place: input validation, output encoding, etc. The other type of list that we are more commonly seeing is a white list; a list that defines what IS allowed, indicating that everything else is NOT allowed. While writing this post I was drawing a blank on where I have seen this in the physical world, and it wasn't until I was talking to a colleague about it that I realized I had the perfect example: handicap parking. Handicap parking signs are meant to say that only people with that designation can park there and everyone else is prohibited. In technology, we are seeing white lists a lot more for input validation and output encoding because a white list is usually a smaller list compared to a black list. Let's compare the two and see what pros and cons exist.
Honestly, they both provide good protection when properly defined. Depending on the data, a black list can actually be a strong control. For example, suppose we have a system that uses special escape sequences to identify its control characters. While simplified down (and I know there are more characters than this), SQL uses the (') apostrophe as a control character. It is the delimiter that determines what is data and what is a command. If SQL only had one control character (the apostrophe), then a black list would be sufficient. Put the apostrophe into the black list and any time that character appeared you could reject it, or escape it. Unfortunately, it is rare that the list will be that small. Using the example of SQL, what happens if a future update is released and now the (-) dash is a special character, or the (#) hashtag? Now the list has to be updated and re-deployed, and during that time before deployment the application could be vulnerable.
A white list defines exactly what is good and puts everything else up for question. For this example let's take a first name field and look at input validation. If the field is defined as only (a-z) characters, then it is easy to set up a white list using a regular expression to say only the letters (a-z) will be accepted. Every other character will be rejected. A regular expression for (a-z) is much simpler than trying to record every other character out there into the black list. What if you forget one? In this case you really don't forget any because it is such a limited set. In the example I gave earlier with the handicap parking, the sign is simple: one designation that is allowed. What if the sign used a black list? Can you imagine the number of prohibited items there would be?
Another example is in output encoding to protect against HTML context cross-site scripting. I created a document a few years ago showing the different encoding methods in .NET (http://www.jardinesoftware.com/Documents/ASPNET_HTML_Encoding.pdf). Looking at this, there are five characters that are encoded using a black list built into .NET (<,>,",&,'). This list defines what will get output encoded when using the HTMLEncode method. These are some of the most common characters used to perform cross-site scripting. What if a new character is found to be a problem? This method won't cover it.
With a white list we could say encode everything except for (a-z). Now if a new special character is determined to be a problem, it is already encoded for us.
EFFECT ON USER
You wouldn't expect much effect on the users if all you are doing is saying what is and isn't allowed based on the use of the data. However, let's go back to the initial example that started all of this, the Twitter post. Setting up the black list was most likely fairly simple: here are some common problem items we see, so let's just prohibit them. Of course, then someone comes along on a unicycle and, while they probably shouldn't be there, they are not in violation of the sign. So it appears as a "Good Enough" solution that shouldn't inhibit any valid users.
I posed the question of what the white list would look like. The first response I got back was "unassisted movement only" from a friend of mine, Tim Tomes. Seems like a pretty good idea; I am not sure I would have thought of unassisted movement, but let's dig a little deeper. What about a wheelchair or crutches? The point here is that with a white list, if it is too narrow, it could affect the ability of valid users to use the system. In this case, just using "unassisted movement only", while a great first draft, would have prohibited anyone in a wheelchair from using the sidewalk. The point is that because a white list will prohibit anything not in the list, it must be scrutinized and tested much more to ensure that it is exactly what is needed. Unlike a black list, where there can be a control after the black list to continue limiting down items, if something is blocked by the white list there is no way to still have it later on.
I like both black lists and white lists and I believe they both have their place. It is important for you to analyze what your situation is to determine what the best course of action will be. In some cases a black list will be exactly what you are looking for; in others the white list will be the right fit. We often get this feeling that we have to make blanket statements like "White lists are better so only use those." Situations are different, the lists are different, and you want to use the one that best fits your needs. Take a moment to determine what the pros and cons are to each and select the best fit.
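To make the contrast concrete, here is a small illustrative sketch of the two approaches applied to the input-validation discussion above. The field name, character sets, and length limit are just examples chosen for illustration, not a complete validation policy.

```python
import re

BLACK_LIST = set("'\"<>&")            # the handful of "bad" characters we remembered to list

def blacklist_validate(value: str) -> bool:
    # Accepts anything that avoids the listed characters; new "bad" characters slip through.
    return not any(ch in BLACK_LIST for ch in value)

FIRST_NAME_PATTERN = re.compile(r"^[a-zA-Z]{1,50}$")  # the field is defined as letters only

def whitelist_validate(value: str) -> bool:
    # Accepts only what is explicitly allowed; anything unexpected is rejected by default.
    return FIRST_NAME_PATTERN.fullmatch(value) is not None

print(blacklist_validate("Robert; DROP TABLE users"))  # True: the semicolon was never black-listed
print(whitelist_validate("Robert; DROP TABLE users"))  # False: only letters are accepted
print(whitelist_validate("Anne"))                      # True
```

The trade-off discussed above shows up immediately: the white list has to be scrutinized so it does not reject legitimate names (hyphens, apostrophes, accented characters), just as the sidewalk sign has to account for wheelchairs.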
This chapter is a brief introduction to the ScriptEase scripting language and processor, including installation instructions. Thank you for purchasing ScriptEase! You are now the owner of the most powerful scripting language available today. You have joined the growing number of computer users who are taking control of their applications with the power of ScriptEase.
ScriptEase:Desktop has three main components: the ScriptEase language; the ScriptEase processor, the engine that interprets and runs a ScriptEase script; and the IDE debugger, to help you catch and correct errors in your scripts.
The ScriptEase language was designed to meet the need for a fast, portable, and powerful language that is easy to learn and use. ScriptEase is fast because the most meticulous aspects of programming (memory management, declarations, and garbage collection) are handled automatically, so you don't need to worry about them. It is less picky about syntax than other programming languages. Also, ScriptEase is interpreted as it runs, so you don't have to compile, debug, and recompile your code before seeing the results of your work. Since it is the interpreter that does the work of running the script, ScriptEase scripts themselves are independent of any operating system and are fully portable from one platform to another. A script written for OS/2 can run on a UNIX machine, for example. The exceptions to this rule involve functions that interact with and define behavior specific to a particular system. A script that uses the Windows routines to create and move windows about the screen obviously will not work on DOS or UNIX, because these systems don't use windows.
The ScriptEase language is as powerful as the C language on which it is based. It uses the same commands, operators, and syntax, so if you know the basics of programming, you'll be able to learn ScriptEase in minutes. If you know nothing about programming at all, ScriptEase is the perfect place to start. The next two chapters describe the ScriptEase language in detail and give a brief tutorial.
The ScriptEase processor includes a command shell, with a prompt from which you can navigate through your file directories and directly launch scripts and commands. The processor can also be invoked from a batch or REXX file, or by double-clicking on a script's icon, depending on which operating system you're using.
The IDE debugger lets you watch the script move through the processor line by line, as it is being interpreted. You can keep track of the values of the script's variables in the watch window, and set breakpoints to suspend operation of the debugger if a certain line is reached or condition is met. If there is an error in your script, the debugger helps you quickly track, find, and correct it.
ScriptEase is available for the following operating systems: Windows 95 and NT, Windows 3.x, OS/2, DOS, UNIX, Linux, AIX, FreeBSD, SunOS, and Solaris.
Installing ScriptEase is an easy two-step process. First, regardless of which operating system you use, run the program INSTALL.BAT on the ScriptEase installation disk. This program will prompt you for which versions of ScriptEase you wish to install, and then copy the appropriate files to your disk. To run INSTALL.BAT, put the installation disk in drive A: and type the following from the command line (press ENTER after each line):
a:
install
To complete the installation, run the version of ScriptEase that you installed in step one.
The first time any version of ScriptEase is run, it will run its installation script and modify itself for the appropriate operating system.
Included on the ScriptEase installation disk is a large collection of sample scripts and utilities written in ScriptEase. When you install ScriptEase, these libraries will be copied to the same directory as the ScriptEase processor. These utilities may be used as is, or modified to suit your own needs. You can take them apart and study them as examples of how to use the commands and functions of the ScriptEase language. Additional scripts may be found on the Nombas website. If you have written any scripts or utilities you feel may be useful to other people, please email them to us and we will make them available to all on our website. The "mail us" button on the web pages will take you to a page where you can copy and paste your script and then email it back to us. Select "Upload ScriptEase Script" as the recipient of the message.
At the top of the scripts in the sample libraries is a peculiar form of comment consisting of three tildes and a number after the comment indicator (//~~~2 or ::~~~1, e.g.), followed by a brief description of the script. This comment is used internally by Nombas and has no bearing on the functioning of the script.
The extension for a ScriptEase script is .cmm. The processor recognizes all files ending in .cmm as scripts and will try to interpret them if asked to do so. When a script is run from the ScriptEase environment (either from the ScriptEase shell's command line or by explicitly calling the ScriptEase processor from the OS command line), the .cmm extension is implicitly understood and does not need to be written out. Cmm stands for C minus minus. It is the original name for the ScriptEase language, which is based on the C language, minus the time-consuming and meticulous aspects that make C difficult to learn and use.
All of Nombas' products are based upon the ScriptEase language. In addition to ScriptEase:Desktop, Nombas produces ScriptEase:WebServer Edition, which lets you use ScriptEase as a CGI scripting language; and ScriptEase:Integration SDK, which allows developers to include ScriptEase as a macro language in their own applications. For more information on these and other offerings from Nombas, visit our website at www.nombas.com.
What Is the Nslookup Command and How Can You Use It to Improve DNS Security?
Last updated on April 25, 2023
Nslookup is a command-line tool that helps you perform DNS queries. The Name Server Lookup (nslookup) command helps server administrators check DNS records. By using it they can find out data like domain names, IP addresses, the ports in use, and timeout values. Computer OSs like Windows, macOS, and most Linux distributions have it as a built-in tool, so it might be ready to use on yours too. Online nslookup tools also allow you to see all the DNS records for a website. They might be more comfortable to use since you can do all the checking in a browser, but they might not be as safe as the one you have running on your computer.
What Is Nslookup Used For?
Server admins use the nslookup command to troubleshoot DNS issues and test their networks. But it can also be used for security reasons. Threat actors frequently use DNS spoofing in their phishing attacks. They purposely misspell a domain name and add or omit a punctuation mark in order to lure the victims to a forged website. A regular user might not notice the difference between, let's say, instagram.com vs. innstagram.com. Nslookup can also help avoid DNS cache poisoning. With this attack, criminals distribute fraudulent data to recursive (caching) DNS servers, pointing them to a fake authoritative server.
Common DNS Data You Can Check with Nslookup
- Check the domain's NS records.
- View MX records. MX records are responsible for the email exchange. You can check them to see if all the mail servers are functional.
- Make a reverse DNS lookup. This enables you to check whether an IP address is related to a domain or not.
- Check the Start of Authority (SOA) records. Here you can find authoritative information about the domain and the server: the admin's email, serial number, refresh interval, etc.
- View all DNS records. This one shows all the available DNS records. After you see them you'll be able to do specific lookups for different types of DNS records.
- Find information about a certain name server. You can use nslookup to find out if a certain DNS server is active and responds on time.
- Check out Pointer (PTR) records. This helps you verify if an IP address belongs to a domain name by launching a reverse DNS query.
- Query a non-default port.
- View debugging information.
How Do You Use the Nslookup Command?
You can use the nslookup command in two modes: interactive and non-interactive.
For the interactive mode: type just the command name, nslookup. The displayed prompt will let you launch several server queries. Let's say you type a domain name, like heimdalsecurity.com. After it displays some information about the server and address, it will put up another prompt. This enables you to add an option on a separate line. If you want to terminate interactive mode, just type exit.
For the non-interactive mode: type nslookup [options] [domain-name]. This mode only lets you issue single queries.
As I said above, you can also use online tools to check DNS records.
DarkLayer GUARD™ offers an amazingly fast response time and a low OS footprint. It successfully spots and stops hidden threats using AI. It works on any Windows device, is compatible with any antivirus, and doesn't need to scan code or audit system processes to detect and block malware. Your perimeter network is vulnerable to sophisticated attacks.
Heimdal® Network DNS Security is the next-generation network protection and response solution that will keep your systems safe:
- No need to deploy it on your endpoints;
- Protects any entry point into the organization, including BYODs;
- Stops even hidden threats using AI and your network traffic log;
- Complete DNS, HTTP and HTTPS protection, HIPS and HIDS.
DNS security best practices are often overlooked, but it's time that companies change this. In a digital world, you can't avoid using the DNS, a protocol that was written years ago. Most importantly, it was created without any care for cybersecurity. Threat actors have, of course, learned to leverage this in their favor. The best thing you can do is to join the number of organizations that have decided to enforce DNS security and tackle malware and ransomware attacks before they happen. You can check DNS records with the nslookup tool we talked about, for starters. But besides that, don't let your DNS security become an issue. Make sure you use the best security product on the market to protect your data.
Livia Gyongyoși is a Communications and PR Officer within Heimdal®, passionate about cybersecurity. Always interested in being up to date with the latest news regarding this domain, Livia's goal is to keep others informed about best practices and solutions that help avoid cyberattacks.
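As a quick illustration of the non-interactive mode described earlier, the sketch below simply shells out to the system's nslookup binary and pulls a few of the record types discussed above for one domain. The domain name and the record-type list are placeholders; swap in whatever you need to check.

```python
import subprocess

DOMAIN = "example.com"                      # placeholder domain
RECORD_TYPES = ["NS", "MX", "SOA", "TXT"]   # a few of the record types covered above

for record_type in RECORD_TYPES:
    # Equivalent to typing: nslookup -type=MX example.com
    result = subprocess.run(
        ["nslookup", f"-type={record_type}", DOMAIN],
        capture_output=True, text=True, timeout=10,
    )
    print(f"=== {record_type} records for {DOMAIN} ===")
    print(result.stdout.strip() or result.stderr.strip())
```

Running a loop like this on a schedule and comparing the output against a known-good baseline is a simple way to notice unexpected record changes of the kind spoofing and cache poisoning rely on.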
Our endpoint devices serve as gateways to the digital world but also open us to the threat of malware. Even when you take security precautions and run antivirus software on your computer, it's still possible to get infected. When it happens, malware must be dealt with quickly before it spreads and damages your network. Learn the symptoms to look for and how to get rid of malware so you can restore your computer system to its optimal state.
A look at cyber threats in 2023
Cyber threats continue to evolve as we move through 2023. Malicious actors are becoming more sophisticated, using advanced techniques to breach defenses without detection. Humans are still the biggest target for attackers, with email phishing, spear-phishing, and social engineering accounting for the majority of attacks in 2023. Other malware and advanced attacks made up the remainder. What makes 2023 interesting is that artificial intelligence (AI) has entered the scene, both as a defender and an invader. Malware now incorporates AI, enabling it to infiltrate systems more effectively. Security professionals are working against the clock to deploy intelligent solutions that can recognize malicious patterns before they cause damage.
Types of malware
Understanding the different types of malware is essential for effectively addressing security concerns. Successfully removing malware requires a good understanding of the distinct methods bad actors can use to compromise your systems.
Viruses: Named for their biological counterparts, viruses embed themselves within clean files and propagate to infect additional files. Their uncontrolled spread can severely impair a system's core functionality, leading to deleted or corrupted files.
Trojans: Named after the deceptive Greek tale, trojans masquerade as legitimate software but execute malicious activities upon activation. Instead of causing direct harm to the system, they often create backdoors for other malware to infiltrate. A remote access trojan (RAT) is software that, when installed on the victim's device, allows unauthorized users to control the device remotely.
Ransomware: This malware encrypts critical data, essentially locking digital doors and demanding payment for decryption, analogous to someone changing locks and requesting payment for keys.
Adware: While generally less harmful than other types of malware, adware inundates devices with unwanted ads, potentially slowing down computers and serving as a conduit for more severe malware.
To effectively combat these intruders, you need not only an understanding of their operation but also the ability to identify the specific malware affecting a machine.
Signs of a malware infection
Malware is designed to be subtle, making it challenging to detect. Be aware of the signs that your computer may be infected so that you can be ready to take action.
Slow device performance, crashes, or pop-ups
Frequent system slowdowns and crashes are often indications something isn't right. Malware programs run background processes that hog your computer's resources, leading to:
- Protracted boot times
- Delayed response from software applications
- Unexpected freezing or the dreaded "blue screen of death"
Pop-up ads can also be an indication that you have adware hiding in your system. A simple click could trigger a threat.
Unauthorized access to personal data or systems
Unfamiliar files on your desktop or unexplained changes in system settings could be a result of malware.
Other symptoms include: - New programs launching at startup without your consent - Altered passwords hindering access to your accounts - Email contacts receiving messages you did not send Inexplicable increases in network activity or data usage An abrupt rise in network activity can indicate unauthorized communications between your device and malicious servers, indicating data transmission or downloading of harmful components onto your machine. Watch for: - Significant spikes in internet usage reflected on network tools - Reduced bandwidth availability for legitimate tasks - Unusually high data transfer volumes with no user-driven cause 5 steps for removing malware If your computer is acting up and you suspect a malware infection, you can reclaim control with five crucial steps to completely remove malware from your PC. 1) Disconnect from the internet to limit the spread First, disconnect from the internet. Cutting off communication from external networks contains the threat within your device and prevents malware from sending further data to malicious actors or downloading additional harmful payloads. - If using Wi-Fi: Click on the network icon on your taskbar, select your connection, and hit “Disconnect.” - For wired connections: Simply unplug the Ethernet cable from your PC. 2) Switch to safe mode Next, enter Safe Mode to load only the drivers necessary for your operating system and to keep potential viruses from loading. - Hold down Shift while choosing Restart via the Start menu. - Once rebooted, select Troubleshoot > Advanced Options > Startup Settings. - On the Startup Settings page, click Restart. - After another restart, choose Enable Safe Mode by pressing F4 or 4. 3) Run antivirus software scans After making sure your antivirus software is fully updated, run a full scan. Full scans take longer than quick scans but are required to remove the malware that’s present on your system. 4) Uninstall suspicious applications, processes, extensions, or plugins Remove unwanted applications that appeared just before or during the attack. Here’s how: - Search for “Control Panel” in Windows and navigate to “Programs,” then “Programs and Features.” - Sort the list by date of installation and review anything installed without your authorization or around when issues began occurring. In addition, check your browser extensions for unwanted additions: - In each browser settings menu look under ‘Extensions’ or ‘Add-ons.’ - Carefully consider whether you recognize each one as legitimate – any doubt calls for removal. If you’re not sure about an application or extension, consult online forums or resources to find out what to do before ending them. 5) Restore system settings and files Once you’ve finished cleaning, you can restore system settings and files: - Access Update & Security in Settings. - Select Recovery. - Select Reset this PC to reinstate original settings potentially overwritten by malicious software without wiping personal files if chosen accordingly. If available, use System Restore to return configurations to previous snapshots taken automatically at various points known as restore points. Quick tips to prevent malware Removing malware — and avoiding it in the first place — requires a structured, proactive approach to cybersecurity. Here are some quick tips to protect your computer from malicious threats. 
- Stay updated: Malicious actors frequently exploit outdated software vulnerabilities, so ensure all software — especially your operating system and antivirus program — is up-to-date with the latest security patches. - Use strong passwords: Use complex passwords that combine numbers, symbols, and both upper and lowercase letters, and avoid using easily guessable passwords. You can use a reliable password manager to create and store strong passwords for you. - Be cautious with email attachments: Many malware infections start with an email attachment. If an email seems suspicious or is from an unknown sender, do not open attachments or click on any links within. - Install real-time antivirus software: Choose robust antivirus software that offers real-time scanning to catch threats before they settle in. - Regularly backup your data: Use external drives or cloud storage services to back up your important files regularly. This can save you if you get infected. Effective endpoint protection with built-in tools It takes a great deal of effort to restore your system after a malware infection. The best defense against malware is a good offense: don’t get infected at all. NinjaOne makes it easy to patch, secure, harden, and back up devices to protect your endpoints. Learn more about NinjaOne’s solutions for endpoint protection and security.
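As a small companion to the symptom of unfamiliar programs launching at startup and to the uninstall step above, the following sketch lists the per-user startup entries that Windows keeps in the registry Run key. It only reads and prints, so it is safe to run while you investigate; treating unrecognized entries as worth researching is a suggestion added here, not part of the original checklist.

```python
# Windows-only sketch: list per-user startup programs from the registry Run key.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    value_count = winreg.QueryInfoKey(key)[1]   # (subkey count, value count, last modified)
    for index in range(value_count):
        name, command, _type = winreg.EnumValue(key, index)
        print(f"{name}: {command}")
```

Anything listed here that you do not recognize is worth looking up before you remove it, exactly as the uninstall step advises for unfamiliar applications and extensions.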
Here are some of the most frequently asked Smart City living questions:
What is meant by a Smart City?
The depth and breadth of technologies implemented under the smart city model make it difficult to offer a precise definition. However, the meaning of a smart city is generally accepted to be an urban area that leverages technology to provide services and solve problems. Data is collected using different types of electronic methods and sensors. It is then analyzed using special tools, and the insights gained are used for operational improvements in traffic movement, garbage collection, crime management, utility supply, environmental management, and the management of social services. Information and communication technologies allow city officials to monitor the city in real-time and interact with the community. Smart cities improve the citizens' quality of life and drive economic growth.
What are the features of a Smart City?
For a city to be regarded as smart it must possess the following features or characteristics:
- Fulfilling Citizens' Needs: education, health care, housing, infrastructure needs, and digital equality.
- Infrastructure & Resources: delivering enhanced key services to citizens and businesses reliably and cost-effectively.
- Jobs & Competitiveness: improving the city's competitiveness, economic growth, creating jobs, and retraining programs.
- Security: protection against cyber-attacks and natural disasters.
- Smart Planning and Citizen Support: "intelligent" data analysis and broad community involvement.
- Sustainability/Circular Economy: managing environmental change, urbanization, and coping with population growth and climate change.
- Technology and Artificial Intelligence (AI): use of smart technology to support community needs.
What are the important features of a smart city?
The most important feature of a smart city is the use of technology and artificial intelligence to run the city. This aspect is what gives it the "smart" moniker. The use of tech and AI ensures efficient infrastructure use and effective engagement between officials and citizens, and provides a learning framework that fosters quick adaptation and innovation in changing circumstances.
Do we need Smart Cities?
This is the wrong question to ask. It isn't a matter of whether we need smart cities; the real question is whether we can do without them. Today's cities grapple with a wide range of problems. These problems include never-ending traffic snarls, runaway pollution, high crime levels, high energy consumption, unemployment, inadequate or overstretched social services, and a myriad of other challenges. Broadband communications systems, cybersecurity concepts, and smart city planning are the key to making 21st-century urban living better.
Are Smart Cities worth it?
To the extent that smart cities can solve contemporary urban problems, they are worth the investment. However, smart does not automatically mean better, more livable, or more secure. For example, living in a smart city might curtail our privacy and potentially threaten democratic core values such as freedom, liberty, and the pursuit of happiness. The reason is that technology has consequences such as loss of privacy, hackers, and techno-terrorist attacks. In smart cities, the already wide divide between rural and urban populations, culture, and politics has the potential to become wider, with dangerous implications.
Human beings must work hard to make our cities places where we would want to live and raise our families. Technology is not a panacea. It provides an improvement in the quality of life only with proper planning and clear thinking.
What is the difference between a Smart City and a normal city?
There are stark differences between a smart city and a normal city. Probably the key distinguishing feature of a smart city is the presence of connected objects. In a smart city, objects are more than meets the eye. For example, what may appear to be a simple lamp post may also be a weather sensor and traffic camera that's connected to the Internet. It may also use smart lighting that auto-adjusts based on natural light. In a smart city, the Internet of Things – the idea that 5G Internet will make it possible to connect a vast range of devices – creates a wide range of possibilities. Secondly, smart cities have engaged citizens. The citizens build the city by participating in data collection through their devices. It is the power of data that leads to cities becoming smart cities. Smart cities also have streamlined transportation systems. Users can consult real-time information about public transport, and transportation routes are optimally planned. Environmental friendliness and sustainability are additional hallmarks of smart cities. Smart cities are administered by following energy-efficient policies, resulting in massive annual savings. These are just some of the things that differentiate smart cities from normal cities.
How do Smart Cities aim to be sustainable?
The concept of sustainability in a smart city refers to using intelligent planning and management to conserve the natural environment, manage natural resources prudently, and save on energy costs. Sustainability is a critical principle because of the challenges posed by rapid urbanization. According to the United Nations Environment Programme (UNEP), 66% of the global population will be residing in urban areas. This will put tremendous pressure on existing infrastructure and natural resources and drive up energy needs, hence the need for sustainable management. Smart cities leverage technology to solve these problems.
What would a future smart city look like?
A future smart city is a scene straight out of a science fiction movie. Some of the technologies that will define cities of the future include:
- Advanced Cybersecurity
- Artificial Intelligence and Super Automation
- Driverless Transport
- Human-Machine Interfaces
- Internet of Everything
- M2M Communications and Pervasive Broadband Mobile
- Smart Energy Grids
- Talking and Serviceable Bots
- Telecity Architecture and Virtual Companies
- Telework, Tele-education and Tele-Health Services
Which city is known as a Smart City?
Several cities are considered to be leading the smart cities initiative. One of these is Singapore, a city-state in Southeast Asia. According to Juniper Research, Singapore ranks at the top for four smart indices, namely mobility, health, safety, and productivity. This makes it a leading contender for the smart city title. It is the second-most densely populated city in the world and has an aging population. Confronted by these facts, the government sought ways to improve productivity in an advanced economy. Sensors linked to aggregation boxes collect information throughout the city. Vehicular and human traffic data is sent to analysts for action and as input in service delivery. Broadband is widely available, with Internet penetration one of the highest in the world.
The government plans to install energy-efficient intelligent lighting on all roads and install solar panels on about 6,000 buildings by 2022. How many Smart Cities are there? Smart cities are popping up on all continents and in all parts of the world. It is not possible to state an exact because this is a rapidly evolving area. Suffice it to say that almost all countries have smart city initiatives ongoing or in the planning stages. What is the aim of Smart Cities? There are several things we can say for sure. The future urban world will be rife with significant change, as will be seen in the following areas: - Better resource management. - Social, economic, and cultural changes. - Human-machine interfaces will be critical to security and progress. - Lifelong education and retraining will become a way of life. - Multiple job changes and careers will be commonplace as we cope with super-automation. - People will live longer. - Environmental protection and prudent management of natural resources. What makes a Smart City smart? The use of technology to collect data and solve problems is at the core of what makes a city smart. However, although smart technology is critical, what makes a smart city is re-envisioning its design and function to produce a better quality of life and living standards for its citizens. A smart city provides a community with the following: - Improved health care and educational opportunities. - Higher security against natural and human-made disasters - Social and political stability and freedom. - Economic prosperity and thriving businesses - Better housing. - Seamless transportation, communications, networking, energy, and all other critical utilities. What is a Smart City example? There are dozens of smart city examples from around the world. In the United States, you have Boston, New York, Columbus, Dallas, Denver, Pittsburgh, and San Francisco. Examples in Europe would include Olso, Amsterdam, Barcelona, London, and Copenhagen. Key examples in Asia and Oceania would include Singapore, Hong Kong, Seoul, Melbourne, Tokyo, Wuxi, and Yinchuan. Which is the Smartest City in the world? Several cities can be said to be the trailblazers in implementing smart city concepts. The smartest city tag depends on the scoring criteria. For example, according to IESE Cities in Motion Index, London was the smartest city in 2020 for a second consecutive year. Other researchers rank Singapore as the smartest city in the world. A high population density has forced the Singaporean government to fast-track smart city initiatives. Other cities that can claim the crown include Dubai, Oslo, Copenhagen, Boston, Amsterdam, New York, Barcelona, and Hong Kong. How does a Smart City work? Smart cities rely on connected devices and sensors. Devices include smartphones and other Internet-enabled mobile gadgets, electronic devices, vehicles, connected home appliances, and just about any device with an Internet connection. Sensors are installed in various places around the city to collect data such as foot and vehicle traffic, weather information, crime incidents, energy consumption, other utility usages, and much more data. This data is analyzed in real-time by city officials and citizens to make decisions such as traffic routes and security deployments. Historical data reveals trends that inform infrastructure planning decisions and resource management. What are the Smart Cities in the world? New cities are continuously joining the ranks of smart cities. 
According to the International Institute for Management Development (IMD) 2020 Smart Cities Index, 109 cities worldwide are implementing technology across five key areas: mobility, health and safety, governance, activities, and opportunities. In so doing, they are mitigating the shortcomings of urbanization and can be said to be smart. Among the cities in IMD's 2020 top ten are Taipei City and New York.
Where is the first Smart City?
There is no consensus on which city was the first smart city. Los Angeles was the first city to conduct a massive data collection project in 1974. But at that time, ubiquitous computing, networking capabilities, and data analytics tools were lacking. Though there is no consensus, Santander in Northern Spain is likely the first truly smart city. The city has had over 20,000 sensors distributed across it since 2009. These sensors measure everything from soil moisture to traffic data.
What are the four pillars of a Smart City?
The four pillars of a smart city are insights drawn from the development roadmaps of leading smart cities from around the world. They include:
- Network connectivity: an IoT-enabled infrastructure with a robust network of devices and connected applications.
- Effective mobility: this could be achieved in several ways, such as through intelligent transport systems, shared mobility, mobility as a service, and so on.
- Cyber resilience: the ability to strike a delicate balance between efficiency and data privacy.
- City engagement: involvement by citizens in smart city initiatives.
We could speculate that document management began with cave paintings and slate drawings, but realistically, the first form of document management dates back to when language was still being formulated. From mud tablets in the Sumerian civilization all the way to modern cloud-based document management, here is the history of document management.
About 6,500 years ago, the people of Ancient Sumer (modern-day south-central Iraq) had developed techniques of cultivating land to such a degree that they started building sustainable cities around it. Their population would reach into the tens of thousands. This invited a lot of visitors from the world over, who would use the city's stores to keep their livestock, supplies, etc. while they were visiting. Recordkeepers would mark these supplies and items down on mud tablets. This system was quite functional until they realized that record-keeping presented a storage challenge of its own (after a few hundred items, the sheer number of tablets became unsustainable). We'll skip over the part of history about how this led to the formation of language. Eventually, humanity realized the benefits of writing on leaves as it made storage easier; then we had paper, and it seems like humanity stopped right there. We've been dependent on paper for documentation for centuries now, and it has depleted our forests, harmed our environment, and become one of the major contributors to global warming, which is why we support making your company carbon neutral.
The filing cabinet
Up until the late 1880s, document management was the act of storing papers in a room as best as one could, followed up by taking a good guess at where to look when it was time for retrieval. Edwin Seibels introduced the filing cabinet, and that made things significantly easier to sort. You could sort documents in any order of your choosing and retrieve a securely kept, safe, dry document, ensuring it would last longer. This solution was very viable for medical, legal, and financial consultants. While that made things a touch easier, paper production was not in any position to slow down, and the longer a document was in storage, the harder it became to access. However, that all changed when the world started going digital.
With the advent of modern computing, electronic data management (EDM) started reducing some major pain points of filing cabinets. The future of file management became one where:
- A file lasts as long as the server
- Retrieving a file is as easy as writing a search string
- Large volumes of data barely take up any space
- There is virtually no environmental impact
With the advent of computing, electronic document management (EDM) started to address the curse of office spaces being swarmed with documentation. This required a server and a client: the server was where documents were centralized, and the client was where these documents were accessed. This relationship has since remained relatively intact; however, the nature of the server and client has vastly evolved. However, this digital shift bred a new set of problems:
- Too much content in a centralized location exposes it to the risk of damage, loss, or accident.
- Computing costs are part and parcel.
- It is hard to locate specific files.
- Access is bound by connectivity.
- There is no particular structure.
The widespread use of PCs, coupled with networking, meant that users were able to keep, maintain, and share documents in a distributed environment.
This was a definite relief from our dependence on paper, but it birthed a new generation of issues that weren't addressed until Folderit came along: no version control, no audit trail, and a lack of assured security.
Electronic Document Management Systems
Eventually, EDMS became its own category, and all the issues associated with digital storage found a practical solution. With EDMS, dependency on paper immediately became outdated, as Folderit makes document handling and access ubiquitous across the entire planet. You can access your documents anywhere on the planet using any internet-connected device. This process keeps your documents secure using 256-bit bank-level encryption, while all transfers take place using SSL encryption, ensuring that your system is inaccessible to unauthorized users. To keep things safe from calamity, data is stored in a purpose-built warehouse and kept safe in three separate areas so you always have a secure backup. You can even collaborate with others using a Microsoft Office 365 subscription (separate subscription required). There are audit trails, versioning, metadata and custom metadata, approvals, advanced OCR-based search, and a whole host of other solutions that make Folderit a go-to solution for going paperless in 2021.
Blockchain is a digital ledger technology that enables secure, transparent, and decentralized storage and transfer of information. It uses cryptography and a consensus mechanism to ensure that transactions are validated and recorded accurately. The technology was first introduced as a component of Bitcoin, but has since been adapted and used in different industries and applications. Blockchain privacy involves maintaining the confidentiality and safety of data stored on a blockchain. This can encompass personal information, financial dealings, and other confidential records stored on the blockchain. It is achieved through encryption, secure private keys, and other security procedures that deter unauthorized access and manipulation of the data.
Types of Blockchain
There are several forms of blockchain technology, each with its own specific features and attributes. The most common types include:
Public Blockchains: Also known as permissionless blockchains, public blockchains are accessible to anyone who wants to participate in the network. Anyone can join the network, validate transactions, and create new blocks. Public blockchains are typically considered the most secure due to their large network of nodes and advanced consensus mechanisms. Some examples of public blockchains are Bitcoin, Ethereum, and Litecoin.
Private Blockchains: Also called permissioned blockchains, private blockchains are designed for specific organizations or groups. They are more centralized than public blockchains and require permission for participants to join the network. Private blockchains are often used by organizations that need to maintain control over the information stored on the network. Two examples of private blockchains are Hyperledger and Corda.
Consortium Blockchains: Consortium blockchains are similar to private blockchains, but are governed by a consortium of organizations rather than one entity. Consortium blockchains provide a balance between security, transparency, and decentralization, and offer a compromise between public and private blockchains. Examples of consortium blockchains include R3 and Energy Web Foundation.
Hybrid Blockchains: Hybrid blockchains are a combination of public and private blockchains, providing the benefits of both. They can be either permissioned or permissionless and allow organizations to tailor the technology to their specific needs. Hybrid blockchains are commonly used in industries such as finance and healthcare, where security and privacy are important.
In summary, blockchain technology is a versatile tool with the potential to revolutionize various industries and applications. The different types of blockchain technology provide organizations with the flexibility to choose the best solution for their needs, whether it be public, private, consortium, or hybrid.
Worldwide spending on blockchain solutions is increasing, and with it the number of users and applications. That growth makes strengthening security and privacy essential to building safe environments.
Blockchain technology has the potential to bring change to various sectors due to its decentralized and secure nature. Nevertheless, just like any other technology, it is not exempt from security risks and challenges.
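Before turning to those risks, it helps to see the core mechanism that makes a blockchain tamper-evident. The sketch below is a minimal, illustrative toy model, not the data structure of any particular chain: each block commits to its predecessor's hash, so editing an old record breaks every later link. Consensus, signatures, and real transaction formats are deliberately left out.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's contents (excluding its own hash) deterministically."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    """Re-hash every block and check the links; any edit breaks the chain."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
    return True

chain: list = []
for tx in ["alice->bob:5", "bob->carol:2", "carol->dave:1"]:
    add_block(chain, tx)

print(verify(chain))                 # True
chain[1]["data"] = "bob->carol:200"  # tamper with an old record
print(verify(chain))                 # False: the stored hash no longer matches
```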
Enhancing the security of the blockchain system can be achieved by implementing the following measures: Encryption: This involves converting plain text into a coded format, making it inaccessible to unauthorized users. Encrypting the data stored on the blockchain ensures the privacy and confidentiality of the data. Cryptographic Hashing: This refers to the process of transforming data into a unique digital signature, also known as a hash, through a mathematical algorithm. Hashing assures the integrity of the data stored on the blockchain, as any alterations to the data would result in a different hash. MultiSig Addresses: These are blockchain addresses that demand more than one signature to authorize a transaction, boosting the security of the blockchain system. This requires multiple parties to sign off on a transaction before it can be carried out. Decentralization: This involves distributing data and processing power across multiple nodes, making it challenging for a single entity to control the network. Decentralizing the blockchain network guarantees stability and security of the system as there is no singular point of failure. Frequent Audits and Upgrades: Regular audits and upgrades are essential in identifying and fixing security vulnerabilities in the blockchain system. This helps to maintain the system’s security and resistance to attacks. Two-Factor Authentication: This adds an extra layer of security by requiring users to provide two forms of authentication before accessing the blockchain system, such as sending a password and an OTP to a user’s mobile device. Safe Private Keys: Private keys are utilized to access and authorize transactions on the blockchain. Ensuring the safety of private keys is vital for the security of the blockchain system, as a breached private key can result in the loss of assets stored on the blockchain. Threats & Mitigations Some of the common threat issues in blockchain technology and the mitigation techniques include: 51% Attack: A 51% attack occurs when a group of nodes controlling more than 50% of the network’s computing power manipulate the network’s consensus mechanism. Mitigation: To reduce this threat, blockchain networks should have a decentralized network structure that makes it difficult for a single entity to control a majority of the network’s computing power. Smart Contract Vulnerabilities: Smart contracts are automated programs that run on the blockchain, but vulnerabilities in the code of smart contracts can be exploited by attackers to steal assets or disrupt the network. Mitigation: To tackle this threat, blockchain developers must ensure that their smart contracts undergo thorough security audits and testing. Phishing Attacks: Phishing attacks are a type of social engineering attack that aims to obtain sensitive information such as login credentials or private keys. Mitigation: To counteract this threat, blockchain users should be educated on how to identify phishing attempts and avoid giving out their personal information or private keys. Data Tampering: Data tampering is the unauthorized modification of data stored on the blockchain. Mitigation: To address this threat, blockchain networks can use cryptographic algorithms such as hash functions to preserve the integrity of the data stored on the blockchain. Network DDoS Attacks: A Distributed Denial of Service (DDoS) attack is an attack where a large number of nodes simultaneously send requests to a network, causing it to become overwhelmed and unavailable to other users. 
Mitigation: To mitigate this threat, blockchain networks can implement anti-DDoS measures such as rate limiting, IP blocking, and traffic filtering Despite being a secure technology, blockchain is still susceptible to security threats and challenges. To ensure the stability and security of the blockchain system, it is necessary to understand the various types of threats and their respective mitigation techniques. This can be achieved through education, the implementation of security measures, and the use of secure network structures. 1. How Blockchain Technology Can Help Enhance Your Cybersecurity? Blockchain technology has the potential to significantly improve cybersecurity and reduce the risk of cybercrime. The decentralized and immutable nature of blockchain makes it difficult for hackers to attack and manipulate the data stored on the platform, which enhances the security of online transactions. Here are some of the key ways that blockchain can help enhance cybersecurity. Immutable Ledgers The ledger of transactions on a blockchain is immutable, meaning once a transaction is recorded, it cannot be altered or deleted. This makes it difficult for hackers to tamper with the data, reducing the risk of fraud and other forms of cybercrime. The immutable nature of blockchain ensures that data is secure and trustworthy, which is particularly important for sensitive information such as financial transactions and personal data. One of the biggest advantages of blockchain technology is its decentralized architecture. Unlike traditional systems, which are centralized and have a single point of control, blockchain operates on a decentralized network, making it more difficult for hackers to carry out successful attacks. In the event that one node in the network is compromised, the rest of the network can continue to operate normally, reducing the risk of widespread disruption. Smart contracts are self-executing contracts that automatically enforce the terms of an agreement between parties. By eliminating the need for a third party to oversee transactions, smart contracts reduce the risk of fraudulent activity and enhance cybersecurity. Smart contracts provide a secure and transparent way to execute transactions, ensuring that the terms of the agreement are met and reducing the risk of fraud and other forms of cybercrime. Secure Identity Management Blockchain technology can be used to securely manage digital identities, making it difficult for criminals to steal and misuse them. By storing identity information on the blockchain, it becomes nearly impossible for hackers to access or manipulate it. This enhances the security of digital identities, reducing the risk of identity theft and other forms of cybercrime. Supply Chain Security Blockchain can also be used to secure supply chains by providing an immutable record of the origin and movement of goods. This can help prevent counterfeiting and other types of fraud, as well as improve transparency and accountability in the supply chain. By tracking the movement of goods from origin to destination, blockchain provides a secure and transparent way to manage supply chains, reducing the risk of cybercrime and enhancing the overall security of the supply chain. Blockchain technology offers significant benefits for enhancing cybersecurity and reducing the risk of cybercrime. 
With its decentralized architecture, immutable ledgers, secure identity management, smart contracts, and supply chain security, blockchain provides a powerful tool for organizations to secure their online transactions and protect against cybercrime. By leveraging these benefits, organizations can reduce the risk of cybercrime and improve the security of their online transactions, protecting their sensitive information and assets. 2. Exploring the Interplay Between Blockchain and Cybersecurity The combination of blockchain technology and cybersecurity can lead to a significant improvement in the security of digital transactions and data storage. We will look, how both blockchain and cybersecurity can help each other: - Improving Cyber Security through Blockchain Blockchain technology provides a decentralized platform for data storage and transactions, which makes it more secure and resistant to cyber-attacks compared to traditional centralized systems. The absence of a single controlling entity (centralized) eliminates the risk of a single point of failure, reducing the likelihood of a successful cyber-attack. It can be improved by adopting other fields such as: - Increased Trust through Smart Contracts: The use of smart contracts in blockchain technology can help improve cybersecurity by reducing the risk of fraud and tampering. Transactions are automatically executed based on pre-defined rules, reducing the need for manual intervention and increasing the trust in digital transactions. Tamper-Proof Record Keeping: Blockchain technology provides an immutable, or unalterable, record of all transactions. This tamper-proof record keeping can help improve cybersecurity by providing a secure and transparent record of digital transactions, making it easier to detect and prevent fraudulent activity. Increased Transparency: Blockchain provides a transparent and traceable record of transactions, which can help improve cybersecurity by making it easier to detect and prevent fraudulent activity. The transparency of blockchain transactions makes it easier for organizations to track the flow of digital assets and detect any anomalies that may indicate a security breach. Improving Blockchain through Cybersecurity Measures Cybersecurity can play a critical role in improving the security of blockchain technology. Cybersecurity experts can identify potential vulnerabilities in blockchain systems and implement security measures such as: - Encryption: The use of encryption is crucial to prevent unauthorized access and protect against cyber-attacks. Advanced encryption algorithms, such as AES and RSA, can be used to secure blockchain transactions and protect sensitive information. Secure Key Management: The secure management of private keys is essential to ensure the security of blockchain transactions. The safe storage of private keys, either through hardware wallets or other secure solutions, is critical to prevent theft or loss. Multi-Factor Authentication: By adding an extra layer of security, multi-factor authentication can improve the safety of blockchain transactions. This can include the use of biometrics, passwords, and smart cards to confirm the identity of users before accessing sensitive information. Software Updates and Patches: Regular software updates and patches are important to keep blockchain systems secure and prevent potential vulnerabilities. By staying up-to-date with the latest security updates, organizations can ensure the continued stability and security of their blockchain systems. 
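To make the multi-signature and key-management ideas above a little more concrete, here is a hypothetical sketch of M-of-N transaction approval. It uses HMACs with per-signer secrets purely as a stand-in for real digital signatures; an actual multisig wallet would use public-key signatures (for example ECDSA or Ed25519) verified by on-chain script or smart-contract logic, with the private keys held in hardware wallets rather than in memory.

```python
import hashlib
import hmac

# Hypothetical signer secrets; stand-ins for real private keys.
SIGNER_SECRETS = {
    "alice": b"alice-secret-key",
    "bob": b"bob-secret-key",
    "carol": b"carol-secret-key",
}

def sign(signer: str, tx: bytes) -> bytes:
    """Stand-in 'signature': an HMAC over the transaction bytes."""
    return hmac.new(SIGNER_SECRETS[signer], tx, hashlib.sha256).digest()

def authorize(tx: bytes, signatures: dict, threshold: int = 2) -> bool:
    """Approve the transaction only if at least `threshold` known signers signed it."""
    valid = 0
    for signer, sig in signatures.items():
        if signer not in SIGNER_SECRETS:
            continue
        expected = hmac.new(SIGNER_SECRETS[signer], tx, hashlib.sha256).digest()
        if hmac.compare_digest(sig, expected):
            valid += 1
    return valid >= threshold

tx = b"transfer 10 tokens from treasury to vendor-42"
sigs = {"alice": sign("alice", tx), "bob": sign("bob", tx)}
print(authorize(tx, sigs))                      # True: the 2-of-3 threshold is met
print(authorize(tx, {"alice": sigs["alice"]}))  # False: only one valid signature
```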
By working together, blockchain and cybersecurity can enhance trust in digital transactions, reduce the risk of fraud, and improve the overall security of the digital world. 3. Exploring the Role of Blockchain in Cybercrime Prevention The application of blockchain technology in the field of cybercrime prevention has several benefits and can play a crucial role in improving the overall security of the digital world. The following are some of the ways blockchain can be utilized to prevent cybercrime: Digital/Evidence Identity Management Digital/Evidence Identity Management with blockchain is a system that manages digital identities and their associated evidence using blockchain technology. This system securely stores and manages digital identities on a decentralized blockchain network that is resistant to tampering. By storing digital identities on a decentralized blockchain, personal information can be protected from cybercriminals who might attempt to steal or manipulate this information. This provides a secure and efficient way to manage digital identities. Traffic Violation Monitoring System A Traffic Violation Monitoring System in blockchain is a system that uses blockchain technology to keep track of traffic infractions such as speeding, running red lights, or reckless driving. The information regarding these violations is securely stored on a decentralized, tamper-proof blockchain network that can be accessed and verified by authorized entities like law enforcement, insurance companies, and government organizations. By utilizing blockchain, this system offers a transparent, accurate, and secure way of recording and monitoring traffic violations, improving road safety and reducing the number of accidents on the road. Malware Prevention in blockchain involves using blockchain technology to stop the transmission and harm caused by malware in digital systems. In this system, the data and software related to the blockchain network are securely kept and managed on a decentralized, tamper-resistant blockchain network, making it harder for malware to penetrate and harm the system. The use of consensus methods and cryptographic techniques in blockchain also offers a secure and dependable method for verifying software, making it easier to identify and prevent malware attacks. Thus, making it more challenging for cybercriminals to access sensitive information or carry out malware attacks. Cybersecurity Information Sharing Blockchain can help organizations and government agencies to share information about cybersecurity threats in real-time. This allows for a more effective and timely response to potential cyber threats. Secure and Transparent Transactions The decentralized ledger of blockchain ensures that transactions are recorded in a transparent and secure manner, making it difficult for cybercriminals to manipulate transactions and steal sensitive information. Blockchain can help prevent fraud in various industries such as banking, e-commerce, and supply chain management by providing a secure and transparent record of transactions. It can also make it simpler to monitor and follow fraudulent activities due to its transparency and unalterable nature, giving a lasting and verifiable record of all transactions. By offering a secure and open platform for transactions, blockchain has the ability to greatly enhance the security and efficiency of various industries and stop fraudulent activities. 
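One concrete mechanism behind such transparent, verifiable records is the Merkle tree, which Bitcoin-style blockchains use to commit to a whole set of transactions with a single root hash; anyone can then check that a particular transaction is included without downloading the rest. The sketch below is a simplified illustration (real designs add hardening details such as domain separation) and is not tied to any specific chain.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Compute the root by pairwise hashing; an odd node is paired with itself."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1                      # the other node in this pair
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d", b"tx-e"]
root = merkle_root(txs)
proof = merkle_proof(txs, 2)                     # prove tx-c is in the set
print(verify_inclusion(b"tx-c", proof, root))    # True
print(verify_inclusion(b"tx-x", proof, root))    # False
```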
By creating decentralized networks, blockchain can make it more difficult for cybercriminals to carry out DDoS (Distributed Denial of Service) attacks, which aim to disrupt the availability of online services. The integration of blockchain and cybersecurity can help organizations take a proactive approach to enhancing their cybersecurity and protect against potential cyber-attacks. By implementing blockchain solutions, businesses can secure digital assets, protect against cyber threats, and build trust in the digital world. As the demand for secure online transactions continues to grow, the use of blockchain in the field of cybercrime prevention is becoming increasingly important. 4. Exploring the Potential of Blockchain for Improved Cybersecurity The integration of blockchain technology with cybersecurity has the potential to greatly improve the security of the digital world in the future. Here are some of the most promising use cases for blockchain in cybersecurity: Safe Identity Management Blockchain has the potential to create secure, decentralized digital identities which can reduce the chances of identity theft and fraud. This can guarantee that only authorized individuals have access to sensitive information and systems, increasing overall cybersecurity. Decentralized Data Storage By using blockchain, data can be securely stored on a decentralized ledger, making it much less vulnerable to cyberattacks and data breaches. This will ensure sensitive information stays secure and protected from unauthorized access. Improved Cyber Threat Intelligence Blockchain can be used to store and share real-time information about cyber threats, allowing organizations to respond to potential threats more quickly and effectively. By keeping a secure and transparent record of all transactions, blockchain can help organizations prevent and identify potential cyberattacks. Secure Supply Chain Management Blockchain can be used to secure and track the supply chain, making sure that sensitive information stays protected. This will reduce the risk of cyberattacks and improve overall cybersecurity. Improved Incident Response Blockchain can be utilized to store and share real-time information about cybersecurity incidents, allowing organizations to respond to potential threats quickly and effectively. One potential application of blockchain for improved cybersecurity is in the creation of secure networks. This can be especially useful for protecting sensitive information such as personal and financial data, as well as intellectual property. By using blockchain technology, organizations can create secure networks that are resistant to hacking and tampering, ensuring the protection of sensitive information. Another potential use of blockchain is the creation of smart contracts. These self-executing contracts can be programmed with specific terms and conditions, which are automatically executed once certain conditions are met. This can greatly improve cybersecurity by reducing the risk of human error and ensuring that contracts are executed exactly as intended. The logs can be generated by various systems, applications, and devices, and are stored on a blockchain network. The network provides a secure and decentralized platform for storing and accessing the logs, making it difficult for malicious actors to tamper with or manipulate the logs. This makes log management through blockchain a useful tool for auditing, compliance, and cybersecurity purposes. 
Blockchain technology can also be used to improve the security of Internet of Things (IoT) devices. By creating secure networks of IoT devices using blockchain technology, organizations can protect against cyberattacks and ensure the security of connected devices. This can have far-reaching implications for industries such as healthcare, where connected devices play a crucial role in the delivery of care. The potential of blockchain technology for improved cybersecurity is vast. From creating Safe Identity Management to improving the security of IoT devices, to revolutionizing identity and access management, the possibilities are nearly endless. As the technology continues to evolve, it is likely that we will see even more innovative uses for blockchain in the realm of cybersecurity. What the Latest Research Says About Blockchain Privacy and Cybersecurity Blockchain technology has gained immense popularity in recent years due to its secure, transparent, and decentralized nature. It has the potential to revolutionize various industries, but at the same time, it is not immune to security threats and challenges. This has led to a growing interest in the latest research on blockchain privacy and cybersecurity. Studies have shown that public blockchains, such as Bitcoin and Ethereum, are susceptible to various privacy attacks, such as linkability and deanonymization. These attacks can compromise the confidentiality and privacy of users, making it important for public blockchains to implement privacy-preserving techniques. One such technique is the use of zero-knowledge proofs, which allow transactions to be verified without revealing any sensitive information. Another is the use of privacy-focused cryptocurrencies, such as Monero and ZCash, which use advanced encryption methods to enhance the privacy of transactions. In terms of cybersecurity, research has focused on the potential vulnerabilities of blockchain systems and how to improve their security. For instance, studies have shown that smart contracts, self-executing code used in blockchain applications, can contain vulnerabilities that can be exploited by attackers. This highlights the importance of regularly auditing and updating smart contracts to ensure their security. In addition, research has also shown that the use of decentralized applications (dApps) built on blockchain platforms can also be vulnerable to security threats, such as cross-site scripting (XSS) and SQL injection attacks. To mitigate these risks, researchers have proposed the use of secure coding practices and the implementation of security standards for dApps. Another area of focus in blockchain security research is the protection of private keys, which are critical for accessing and authorizing transactions on the blockchain. Research has shown that poor key management practices, such as storing keys on unsecured devices, can lead to the theft of assets stored on the blockchain. To address this issue, researchers have recommended the use of secure hardware wallets, multisig addresses, and two-factor authentication (2FA) to protect private keys. Blockchain technology holds enormous potential for various industries, but at the same time, it is not immune to privacy and security risks. The latest research in the field of blockchain privacy and cybersecurity provides valuable insights into the potential vulnerabilities of blockchain systems and how to improve their security. 
By adopting these findings, organizations can ensure the security and privacy of their blockchain systems and maximize the potential benefits of this revolutionary technology.
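As one concrete illustration of the two-factor authentication recommended above for protecting accounts and private keys, the sketch below generates time-based one-time passwords (TOTP) following RFC 6238. A real deployment would use a vetted authenticator library and provision the shared secret securely; the secret shown is only a placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder shared secret, provisioned once (e.g., via QR code) to the
# user's authenticator app and stored server-side alongside the account.
SECRET = "JBSWY3DPEHPK3PXP"
print("Current one-time code:", totp(SECRET))
```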
Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
Microsoft Internet Explorer 8.0 Beta 2 relies on the XDomainRequestAllowed HTTP header to authorize data exchange between domains, which allows remote attackers to bypass the product's XSS Filter protection mechanism, and conduct XSS and cross-domain attacks, by injecting this header after a CRLF sequence, related to "XDomainRequest Allowed Injection (XAI)." NOTE: the vendor has reportedly stated that the XSS Filter intentionally does not attempt to "address every conceivable XSS attack scenario."
CWE-79 - Cross-Site Scripting
Cross-Site Scripting, commonly referred to as XSS, is one of the most dominant classes of vulnerabilities. It allows an attacker to inject malicious code into a vulnerable web application and victimize its users. The exploitation of such a weakness can cause severe issues such as account takeover and sensitive data exfiltration. Because of the prevalence of XSS vulnerabilities and their high rate of exploitation, XSS has remained in the OWASP Top 10 for years.
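The two injection problems described here have well-known generic mitigations: HTML-encode untrusted data before writing it into a page, and never let CR/LF characters from untrusted input reach an HTTP header. The snippet below is only an illustrative sketch of those two checks; it does not describe how Internet Explorer's XSS Filter itself works.

```python
import html

def render_comment(user_input: str) -> str:
    """Neutralize HTML metacharacters so injected markup renders as inert text."""
    return f"<p>{html.escape(user_input)}</p>"

def safe_header_value(value: str) -> str:
    """Reject CR/LF so attacker-controlled values cannot inject extra headers."""
    if "\r" in value or "\n" in value:
        raise ValueError("CR/LF not allowed in header values")
    return value

print(render_comment('<script>alert("xss")</script>'))
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>

try:
    safe_header_value("en-US\r\nXDomainRequestAllowed: 1")
except ValueError as exc:
    print("Blocked header injection:", exc)
```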
The humble office copier has changed more than you might imagine. It isn’t so humble anymore. What began as a simple device for making duplicates has turned into a multifaceted tool that an office can’t function without. This article delves into the office copier’s past, present, and peers into its future, highlighting how it has become an indispensable part of workplace efficiency and teamwork. The Origins of the Office Copier The story of the office copier starts as far back as the late 19th century. It was a time when copying documents was manual and tedious. However, the real breakthrough came in the mid-20th century with the introduction of the Xerox 914 – the first office copier capable of duplicating documents on plain paper. This innovation marked a turning point, replacing labor-intensive copying methods with mechanical duplication. 1980s: Birth of Multi-Functionality The path of the office copier reflects shifts in technology and workplaces. Post-1980s, office copiers evolved into multi-function printers (MFPs), integrating photocopying, faxing, scanning, and network printing. This change meant offices could now rely on one device for multiple tasks rather than several bulky and inefficient machines. MFPs paved the way for digital document management, easy storage, retrieval, and information sharing. Meanwhile, network connectivity allowed multiple users to access the device and collaborate. Shift to Digital Going from analog to digital was a major leap for office copiers. Digital technology brought faster copying, higher quality outputs, and a reliability boost. These copiers offered clearer, more accurate copies and color fidelity for detailed documents and marketing materials. The ability to connect with computers and networks introduced functionalities like direct printing, network scanning, and emailing directly from the copier. This connectivity opened doors for future innovations in copier technology and office productivity. What’s Trending in Today’s Copiers The office copier is keeping up with the times and even pushes ahead of the curve. Adapting to Remote Work Modern office copiers now cater to both traditional office settings and the increasing trend of remote work. They are more compact, making them a perfect fit for smaller home offices, yet they don’t compromise on functionality. These multi-purpose machines have the same range of capabilities as their larger office counterparts, including high-quality printing, scanning, and copying. This way, regardless of where professionals work, they can access top-notch document management tools. Augmented Reality and Interaction Office copiers are tapping into augmented reality (AR) to revolutionize user interaction. This tech upgrade goes beyond traditional buttons and screens, offering a more interactive and intuitive experience. With AR, using a copier becomes simpler, whether for selecting functions or troubleshooting. This shift shows a keen focus on making everyday office tools smarter, more user-friendly, and more adaptable. With the rise in online security threats, the need for robust protection in office copiers has never been more pressing. Manufacturers are now equipping copiers with advanced security features to safeguard sensitive information. These include encrypted communications, secure user authentication, and protection against unauthorized access. In an age where data breaches are a serious concern, these security measures protect confidential documents, both during transmission and in storage. 
Smart Management Tools Office copiers are on the road to turning into smart management tools. These future machines will do more than just copying and printing. They will provide valuable insights into business processes. By leveraging data analytics and connectivity, copiers will be able to track usage patterns, optimize resource allocation, and even predict when the copier might need maintenance. This evolution will position copiers as central components in the intelligent management of office resources, helping businesses stay efficient and technologically advanced. Choosing the Ideal Office Copier Selecting the best office copier means pinpointing what matches your office. Not all copiers have the same capabilities, so you’ll need to prioritize features like the ones listed below. When choosing an office copier, consider whether a basic model will suffice or if a multifunctional device is more suitable. A straightforward copier is ideal for simple, everyday copying tasks. However, if your office requires additional capabilities like scanning, faxing, and network printing, a multifunctional device can provide all these features in one unit. This integration can streamline tasks, reduce the need for multiple devices, and save space in the office. Volume and Speed Assess the average volume of documents your office processes daily. If your office copies in high volumes, get a copier with a higher capacity and faster output rate. This ensures that your workflow is not hindered by frequent paper refills or slow copying speeds. A copier that matches your volume and speed requirements will enhance overall productivity and prevent bottlenecks in document processing. In an interconnected work environment, you should pick a copier that integrates seamlessly with your existing network and systems. Ensure the copier is compatible with your office’s operating systems and network infrastructure. This compatibility allows for smooth operation, easy access to copier functions from multiple devices, and efficient handling of print jobs from various sources within your network. A network-compatible copier can significantly ease the workflow, enabling employees to print and scan documents directly from their computers or mobile devices. With the increasing emphasis on data security, selecting a copier with robust security features is essential, especially if your office handles sensitive or confidential data. Look for copiers with user authentication, data encryption, and secure network connectivity. These features help protect sensitive information from unauthorized access and potential security breaches. In an age where data security is paramount, investing in a copier with solid security measures is a wise decision to safeguard your business’s confidential information. Office Copiers Are Here to Stay The office copier has come a long way from being a simple duplicating device. It’s now a multifunctional tool integral to the smooth operation of modern offices. As technology evolves, so will the capabilities and roles of office copiers, making them even more indispensable in our daily work. For businesses in New York, Totowa, Cherry Hill, Edison, and Ft. Washington looking to enhance their office efficiency with the latest office copier technology, Docutrend has the solutions. Explore our website or visit our contact page for more details.
Uncovering Blind Spots: The Crucial Role of NDR in Zero-Day Exploit Detection Understanding Zero-Day Exploits Within the realm of cybersecurity, zero-day exploits pose a unique challenge to consumers and vendors alike; how do you identify and mitigate the risks of an unknown vulnerability in real time? These threats exploit vulnerabilities in software, hardware, or protocols that are not yet known to the vendor or the broader cybersecurity community. Known vulnerabilities can be patched or mitigated once identified, while zero-day vulnerabilities leave organizations vulnerable to exploitation until effective countermeasures are developed and deployed. The term "zero-day" refers to the fact that organizations have zero days of prior awareness or preparation for these threats before they are actively exploited by malicious actors. Here’s how damaging they can be: - This lack of prior knowledge leaves organizations susceptible to significant risks, including unauthorized access to systems, exfiltration of sensitive data, disruption of critical operations, and more. - Further, zero-day exploits often target widely used software applications, operating systems, or network protocols, amplifying their potential impact across diverse industries and sectors. - Malicious actors have even compromised supply chains, infiltrating trusted vendor networks or tampering with software repositories to introduce malicious code or backdoors into otherwise legitimate software packages. By compromising the integrity of vendor-certified software, attackers can exploit unsuspecting consumers who trust the authenticity and security of these products by injecting a zero-day threat further up the supply chain (more on supply chain attacks here). As a result, zero-day exploits pose a pervasive and ever-evolving challenge, necessitating continuous vigilance, proactive defense strategies, and rapid response capabilities. Understanding the nature of zero-day threats, their potential consequences, and the limitations of traditional security measures is essential for organizations seeking to mitigate the risks posed by these complex vulnerabilities. In the subsequent sections, we will explore the inherent limitations of traditional security measures in combatting zero-day exploits and examine the crucial role of Network Detection and Response (NDR) solutions in uncovering blind spots and bolstering organizational defenses. The Limitations of Traditional Security Measures Traditional security measures, while effective against known threats, often fall short when confronted with the complexities of zero-day attacks. Signature-based antivirus software, for instance, relies on a database of known malware signatures to identify and block malicious files. However, since a zero-day threat lacks an existing signature, it can often evade detection by these systems. Similarly, perimeter-based firewalls inspect network traffic based on predefined rules, which may overlook sophisticated zero-day exploits that exploit protocol-level vulnerabilities or masquerade as legitimate traffic. These limitations highlight the inherent challenge of relying solely on static, rule-based defenses in an environment where threats are dynamic and rapidly evolving. Traditional security measures are reactive by nature, relying on the detection of known patterns or signatures to identify and respond to threats. 
However, zero-day exploits operate outside the realm of known patterns, making them particularly elusive and difficult to detect using traditional methods. As a result, organizations relying solely on traditional security measures find themselves vulnerable to zero-day attacks, facing significant blind spots in their defenses that malicious actors can exploit for their gain. The Role of NDR in Zero-Day Threat Detection Network Detection and Response (NDR) solutions have emerged as a critical component of modern cybersecurity strategies. Unlike traditional security measures that focus on perimeter defense or endpoint protection, NDR solutions operate at the network level, providing organizations with real-time visibility into their network traffic and identifying anomalous behavior indicative of a zero-day attack. NDR solutions leverage advanced analytics, machine learning, and behavioral analysis techniques to detect deviations from normal network behavior that may signal the presence of a zero-day exploit. By continuously monitoring network traffic and analyzing patterns and anomalies, ExeonTrace, the Swiss-made NDR solution, can detect zero-day exploits as they emerge, enabling organizations to respond swiftly and effectively to mitigate potential damage. Furthermore, NDR such as ExeonTrace offers organizations the ability to conduct retrospective analysis, allowing them to investigate past network activity to identify indicators of compromise or suspicious behavior that may have gone unnoticed. This proactive approach to threat detection and response enables organizations to stay ahead of emerging threats and minimize the impact of zero-day attacks on their systems and data. In the subsequent sections, we will delve deeper into the capabilities of ExeonTrace, exploring how it leverages advanced analytics and real-time monitoring to detect and mitigate zero-day exploits, and examine how it can enhance organizations' overall cybersecurity posture. Leveraging Advanced Analytics for Real-Time Detection ExeonTrace utilizes a sophisticated array of advanced analytics tools to bolster real-time threat detection capabilities, enabling organizations to preemptively identify and neutralize zero-day threats. Among these tools, machine learning algorithms do the heavy lifting, adept at parsing through immense volumes of network traffic to discern subtle anomalies that might signify the presence of an emerging zero-day exploit. The Power of Machine Learning These algorithms undergo training on historical data to establish a baseline of normal network behavior, allowing for the swift detection of deviations indicative of potential security breaches. Once the baseline is established, the tool continues to monitor historical behavior under a rolling time period, ensuring a constantly updated awareness of new potential threats. By continuously monitoring network activity and cross-referencing data from diverse sources, ExeonTrace can pinpoint zero-day exploits with remarkable accuracy, minimizing false positives and enabling organizations to uncover blind spots that may have previously been hidden. Alongside machine learning, ExeonTrace leverages complementary analysis methodologies such as behavioral analysis and anomaly detection. By scrutinizing network events, application interactions, and system activities, these techniques can flag irregularities that may signify an ongoing zero-day attack. 
This multifaceted approach not only enhances detection capabilities but also affords organizations a comprehensive understanding of their network environment. Through the strategic application of advanced analytics for real-time threat detection, ExeonTrace can provide organizations with the means to maintain a proactive stance against the evolving threat landscape. In the next section, we will examine exactly how ExeonTrace could identify an emerging zero-day threat. NDR/ExeonTrace Applied: Ivanti Connect Secure VPN Exploited In early 2024 Ivanti Connect Secure VPN faced severe security challenges due to multiple vulnerabilities (CVE-2023-46805 authentication bypass, CVE-2024-21887 command injection), and two additional vulnerabilities discovered later (CVE-2024-21888 privilege escalation, CVE-2024-21893 server side request forgery), one of which was a critical zero-day exploit. These vulnerabilities led to unauthorized access and data theft from a broad range of global organizations, including those in aerospace, banking, defense, government, and telecommunications. Zero-day exploits such as these are commonly discovered in production environments around the world, leaving organizations with a limited response time to address these critical security breaches. If an affected organization were to be using an NDR solution such as ExeonTrace, it would have a chance to detect these unknown exploits by analyzing real-time network traffic and deviations from behavioral baselines. If we deconstruct the Ivanti Connect Secure VPN attack chain, we can examine how an NDR solution could identify and report this in real-time. Ivanti Connect Secure VPN Exploitation Timeline: 1. Gaining Access In the initial step, attackers exploited the CVE-2023-46805 vulnerability, allowing them to bypass authentication mechanisms on the VPN appliance. This action would be measurable, and seen as unexpected successful logins or access attempts from unusual IP addresses, possibly at atypical times, both of which have well defined baselines, allowing an NDR solution to potentially initialize an anomaly alert even at this early stage. Next was the exploitation of CVE-2024-21887 – after gaining access, this exploit allowed the attackers to execute arbitrary commands on the appliance. This would generate unusual outbound connections from the VPN, and would trigger unexpected system changes. New outbound connections initiated by devices within the network to external servers is a closely monitored metric by NDR solutions, as this is often an initial step in an attack chain to maintain communication with the compromised systems via a command and control server. 2. Post-Access Behavior Once the attackers had established access, their next step was to establish a foothold, using a Perl script to remount the filesystem. This Perl script was a sequence of commands that changed permissions or settings to allow previously restricted activities, such as executing files, or writing to areas that were previously protected, to be possible. This step facilitated the deployment of additional malware, and provided newly elevated privileges. These steps could be caught by detecting new network traffic patterns as each step could possibly deviate from the established baselines. An effective NDR solution may also flag atypical commands run with elevated privileges, such as the initial remounting of the filesystem, which is unlikely to be part of the regular administrative routines. 
These new privileges enabled the deployment of Thinspool - a shell script dropper that allows malicious scripts to be delivered and executed on the target system, writing files in restricted directories or attempting to hide its activities. ExeonTrace’s baselining detections could determine if Thinspool initiated any services that are new or uncommon in the environment, especially any activity on an unusual port, or if it received connections from outside the network. 3. How ExeonTrace Spots Threat Indicators Thinspool acted as the initial dropper for the Lightwire/Wirefire web shells, scripts which are placed on a web server that enable further persistent access to the target system. Measurable network behavior metrics which may change at this step include increased HTTP(S) traffic, and communication with unusual endpoints to receive commands on. Exeontrace’s analyzers would notice abnormal patterns in request volume, signaling potential web shell activity. Other analyzers also examine access requests through the proxy that do not follow typical authentication or usage patterns and could detect the attacker using a web shell to communicate with the compromised server. These proxy analyzers would detect anomalies in the volume and nature of proxy requests, additionally increasing the threat score and anomalies alerts triggered. The final malware tool used was a passive backdoor known as Zipline, used to intercept network traffic. The malware receives encrypted commands and can remotely perform malicious activities without being easily detected. However, an increase of traffic to specific ports and services may be present, particularly an observable increase in encrypted traffic that does not follow the typical patterns of legitimate encrypted communication channels. This example demonstrates the complexity and sophistication of the attacks leveraging the Ivanti vulnerabilities. The malicious actors utilized a mix of custom malware, script-based manipulations, and classic exploitation techniques like command injection and authentication bypass, much of which can go undetected without real-time traffic analysis. However, due to the broad network view and precise machine learning algorithms, even small deviations from baseline data can be detected by ExeonTrace. Individually, these might not raise alarms, but together, they form a pattern indicative of a security event happening in the network in real-time, eventually exceeding alerting thresholds and providing real-time insights into an ongoing security event. These rapid response capabilities are critical in the case of zero-day exploits, where every second counts in preventing data breaches or system compromise. The latest “Perfect 10” CVSS: Palo Alto Networks Firewalls Impacted by Zero-Day Vulnerability Security analysts have recently discovered a critical zero-day vulnerability in Palo Alto Networks’ firewall systems. This vulnerability, identified as CVE-2024-3400, has been actively exploited since at least March 26, 2024. The flaw, which scores a maximum of 10 on the Common Vulnerability Scoring System (CVSS), allows unauthenticated attackers to remotely execute arbitrary code with root privileges on affected systems. Palo Alto Networks has confirmed that this vulnerability impacts Pan-OS versions 10.2, 11.0, and 11.1. To exploit the flaw, the telemetry functions and either the GlobalProtect Gateway or GlobalProtect Portal (or both) must be operational on the firewall systems. 
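The detections walked through above all reduce to the same underlying mechanism: keep a rolling baseline of a metric per host (logins, new outbound connections, proxy requests) and score how far current behavior deviates from it. The sketch below is a deliberately simplified illustration of that idea with made-up numbers; it is not ExeonTrace's actual model, which combines many such signals with machine learning.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 24        # keep the last 24 observations (e.g., hourly buckets)
THRESHOLD = 3.0    # flag values more than 3 standard deviations above baseline

history = defaultdict(lambda: deque(maxlen=WINDOW))

def score(host: str, outbound_connections: int) -> float:
    """Z-score of the current value against this host's rolling baseline."""
    baseline = history[host]
    if len(baseline) < 2:
        history[host].append(outbound_connections)
        return 0.0
    mu, sigma = mean(baseline), stdev(baseline)
    z = 0.0 if sigma == 0 else (outbound_connections - mu) / sigma
    history[host].append(outbound_connections)  # the baseline keeps rolling forward
    return z

# Simulated hourly counts of new outbound connections from a VPN appliance.
for count in [4, 5, 3, 6, 4, 5, 5, 4, 6, 5, 4, 5]:
    score("vpn-appliance", count)

suspicious = score("vpn-appliance", 42)  # e.g., command-and-control beaconing
if suspicious > THRESHOLD:
    print(f"ALERT: outbound connections {suspicious:.1f} sigma above baseline")
```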
Fortunately, Cloud Firewalls (NGFW), Panorama Appliances, Prisma Access, and older Pan-OS versions (9.0, 9.1, 10.0, and 10.1) remain unaffected. In response to this critical issue, Palo Alto Networks has promptly released hotfixes for CVE-2024-3400 on April 15. These hotfixes are available for Pan-OS versions 10.2.9-h1, 11.0.4-h1, and 11.1.2-h3. Administrators are strongly advised to install these updates promptly or temporarily deactivate the telemetry feature until a comprehensive update can be implemented. Strengthening Security Posture with Proactive Defense In the relentless arms race against zero-day threats, NDR serves as an indispensable tool in fortifying an organization's security posture. By integrating NDR capabilities into their cybersecurity arsenal, organizations gain a potent tool for preemptively identifying and preventing nascent threats before they can inflict harm. By embracing a proactive defense posture supported by NDR, organizations can stay one step ahead of zero-day exploits and safeguard their critical assets from exploitation. In conclusion, the evolving landscape of cybersecurity, marked by the persistent threat of zero-day attacks, demands a proactive and multifaceted approach to defense. Traditional security measures, while valuable, exhibit inherent limitations in combating the dynamic nature of zero-day threats, necessitating the integration of NDR solutions such as ExeonTrace. By leveraging advanced real-time analytics, diverse data sources, and retrospective analysis capabilities, NDR solutions empower organizations to detect, mitigate, and preemptively respond to zero-day exploits in an unprecedented way. Yet, as the cyber threat landscape continues to evolve, with adversaries employing increasingly sophisticated tactics, the future of cybersecurity remains uncertain. How will emerging technologies, regulatory frameworks, and collaborative efforts shape the next phase of cyber defense? We’ll keep you updated. Customer Support Engineer
The external-cavity laser has moved to the forefront of DWDM tunable-laser technology through the use of micro-electromechanical systems. CINDANA TURKATTE, iolon Inc. Today's data-hungry telecommunications environment relies increasingly on the ability to transfer large amounts of data over great distances in very little time at very small cost. The full potential of the Internet cannot be realized until network bandwidth can be optimized to handle the intense demands of the data-rich, selectively provisioned applications demanded by the Internet economy. DWDM has emerged as an answer to the bandwidth bottleneck. Until recently, each wavelength in a wavelength-multiplexed system required its own fixed-wavelength source laser to produce a signal at the desired frequency. Add to that the fixed-wavelength lasers needed for each switch and add/drop multiplexer, plus the lasers needed for backup and sparing, and the telecom industry was facing huge inventory, complexity, and cost problems to fully implement DWDM. Consequently, tunable lasers have become a quasi "Holy Grail" of optical networking, with the promise of supporting flexible wavelength-routing capabilities and realizing the all-optical network. If a single tunable laser can replace an array of fixed-wavelength lasers, then the optical network will realize step function increases in flexibility and capacity, with massive reductions in network complexity, wavelength contention, and investment in inventory. Tunable lasers have been under development for over a decade, but in the past, tunable-laser technology did not meet the basic optical performance parameters, such as output power and wavelength spectral performance, necessary for telecom applications. Recent advances in tunable-laser technologies have allowed tunable lasers to approach the performance levels of their fixed-wavelength cousins. There are several new technologies that allow a single tunable laser to provide a multitude of wavelengths, which previously would have required many multitudes of fixed-wavelength lasers. There are several distinct applications within DWDM that have varying requirements for tunable lasers. There are three main dependencies affecting performance requirements for the transport markets: distance (access versus metro versus long haul), channel spacing (100, 50, and 25 GHz), and data rate (2.5, 10, and 40 Gbits/sec). In addition to the transport applications, tunable lasers will also enable switching and optical add/drop multiplexing applications that will drive a different set of performance requirements largely dependent on output power, tuning time, and tuning range. Given the variety of applications with differing needs, it has become clear that no tunable-laser technology is optimized for all of these DWDM requirements. The choice of laser technology will be determined by many factors, including output power, line width, relative intensity noise (RIN), tuning range, tuning time, and stability. It is important to understand the relative strengths and weaknesses of the different tunable-laser technologies against these requirements when considering what type of laser to implement for a given optical-network application. Tunable lasers can be produced using a variety of laser structures, each with its own advantages and disadvantages. Five of the main basic laser structures are distributed feedback (DFB), distributed Bragg reflector (DBR), sampled-grating DBR (SGDBR), vertical-cavity surface-emitting laser (VCSEL), and external-cavity lasers (ECL).
DFB lasers are simple structures that work by using an internal grating to change the wavelength of operation (see Figure 1). They are tuned to International Telecommunication Union (ITU) grid wavelengths by changing the temperature of the medium, either through drive-current changes or a temperature-controlled heat-sink, which changes the refractive index of the internal waveguide. With improvements in thermo-electric coolers, the temperature can be precisely controlled to produce a stable, well-defined output wavelength with an acceptable line width. In general, DFB lasers are well behaved and characterized and have proven reliable. While DFBs offer some manufacturing, performance, and operational advantages, they have the disadvantages of low output power and very limited tuning range. Their effective tuning range tends to be limited to about 5 nm because, as the tuning temperature increases, the efficiency and output power of the device decrease. To extend the tuning range, these devices can be integrated into ensemble or side-by-side laser arrays of several lasers on one chip coupled into a single output. One laser at a time is driven to select a wavelength. There are, however, some limitations even to this design. This solution is not continuously tunable, the combining mechanism is optically inefficient, and the chip size leads to yield issues. Cascade or inline laser arrays overcome the coupling loss by allowing light to pass transparently through other laser sections on the device (see Figure 2). Their main challenge is achieving mode stability for each of the laser sections. DBR lasers consist of two or more sections with at least one active region as well as a passive region (see Figure 3). The passive region contains a grating, and each end of the laser cavity has a reflective surface. The DBR uses current changes to the passive region to change the refractive index and thereby tune the laser frequency. The DBR differs from the DFB in that the active region and grating region in the DBR are separated, while in the DFB, they are combined. DBRs have some appealing advantages. One is that the tuning time is very fast. Also, like the DFB lasers, DBR lasers are relatively simple and well understood. The continuous tuning range has been improved significantly to about 40 nm; however, designs can be limited by current saturation. A drawback in using DBRs as tunable lasers is that it is difficult to control the optical-path length between the two reflectors at each end of the cavity, resulting in broad line width and wavelength instability. SGDBR tunable lasers use grating reflectors at either end of the cavity to produce a spectral comb response. The back and front sampled gratings have slightly different pitch so the resulting spectral combs have slightly different mode spacing. Tuning to a specific wavelength is achieved by controlling the current in the two grating sections so as to align the two combs at the chosen wavelength. The laser, therefore, "hops" between wavelengths. An additional contact is normally required to adjust the phase so an integral number of half-wavelengths exists. If this phase adjustment is not included, then mode stability can suffer and noise can increase. While these types of lasers offer a wide tuning range, they suffer from low output power and broad line width. Furthermore, they have complex drive requirements when compared to other laser designs.
They have more electrodes, and accurate wavelength selection requires matching of numerous input currents with the appropriate electrodes. Sampled gratings with grating-assisted co-directional coupler filters add a filter to select one of the sampled-grating peaks, making it easier and cleaner to select a wavelength but at the cost of additional manufacturing complexity. SGDBRs also suffer from low output power, which can potentially be recovered with a semiconductor optical amplifier (SOA). However, SOAs are noisy devices and can cause other perturbations that will prevent them from meeting requirements of RIN and line width for 10-Gbit/sec narrow-wavelength-spacing extended-reach applications. Because SGDBR devices are grown on indium phosphide wafers, the ability of the SGDBR laser designs to meet all system noise, power, and tuning range requirements is limited to the physical characteristics that can be achieved by a single semiconductor system. In short, with all monolithic growth approaches, tradeoffs must be made between the material gain characteristics, electro-optical coefficients, and DBR current-tuning efficiencies. That is further complicated by the intricate fabrication process of the SGDBR device. The VCSEL consists of a gain layer surrounded by mirrors on the top and bottom (see Figure 4). The cavity produces light that is emitted from the top surface of the laser structure, rather than the edge like conventional diode lasers. VCSELs offer a few key advantages. One is that the emitted laser beam is circular and therefore much easier and less expensive to couple to a fiber. VCSELs demonstrate narrow line widths, show low power consumption, and can offer continuous tunability without mode hops. They also have some manufacturing advantages in that the devices can be tested at wafer level prior to dicing and packaging, which could lead to reduced manufacturing costs. The main disadvantage of the VCSEL, however, is its limited output power. This limitation is fundamental to the VCSEL design, since there is a constraint to maintain a single spatial mode of operation with a very small active region.
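To put the tuning ranges quoted above (roughly 5 nm for a DFB, roughly 40 nm for a DBR or SGDBR) in DWDM terms, the short script below converts a tuning range near 1550 nm into the approximate number of ITU grid channels it spans at the channel spacings discussed earlier. It is only a back-of-the-envelope estimate, assuming operation around the 1550-nm C-band.

```python
C = 299_792_458.0    # speed of light, m/s
CENTER_NM = 1550.0   # nominal centre wavelength of the C-band

def channels_covered(tuning_range_nm: float, spacing_ghz: float) -> int:
    """Approximate number of ITU grid channels reachable within a tuning range."""
    # A wavelength span d_lambda maps to a frequency span of about c * d_lambda / lambda^2.
    span_ghz = C * (tuning_range_nm * 1e-9) / (CENTER_NM * 1e-9) ** 2 / 1e9
    return int(span_ghz // spacing_ghz) + 1

for spacing_ghz in (100.0, 50.0, 25.0):
    print(f"{spacing_ghz:5.0f} GHz spacing: "
          f"5 nm covers {channels_covered(5, spacing_ghz):3d} channels, "
          f"40 nm covers {channels_covered(40, spacing_ghz):3d} channels")
```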
Historically, however, ECLs were too large, costly, and sensitive to shock and other environmental influences to be used in telecommunications components. Recent technological advances have brought ECLs to the forefront of optical-networking component technology. In particular, the application of MEMS to optical-component designs produces high-performance micro-optics that readily fit on standard transmitter cards and can be manufactured at competitive costs in the optical-networking industry.

One key breakthrough in the development of MEMS-based ECLs is the use of deep reactive ion etching (DRIE) techniques to fabricate the MEMS actuators. DRIE techniques allow the cost-effective and reliable production of rigid mechanical drive structures that provide suitable force for high-speed and high-precision movement of optical elements over large linear and angular deflections. Furthermore, a low-cost, precision servo-control system can provide real-time error compensation, making the MEMS actuators quite accurate and insensitive to effects from shock, vibration, temperature changes, or creep. DRIE MEMS actuators, externally fabricated optics, and high-precision servo-control systems are ideal building blocks for the creation of new optical-networking components. MEMS actuators can be readily combined with externally fabricated optics (diffraction-limited lenses, high-reflectivity mirrors, wavelength-selective coatings) using precision servo-control systems to rapidly develop solutions for critical telecom-component requirements. For instance, these building blocks could be repackaged to produce tunable receivers, polarization controllers, optical monitors, variable attenuators, optical switches, and tunable filters.

The DRIE MEMS ECL performs very well. Figure 6 shows a representative output spectrum of a MEMS ECL locked to the 100-GHz ITU wavelength grid with about 10-mW constant output power across a 13-nm tuning range. This design is capable of providing high-power output (products soon will be available at 20 mW) and can continuously tune across a 40-nm tuning range. Furthermore, the device exhibits narrow line width, low RIN, and excellent side-mode suppression, all while meeting the market requirements for small-module footprint. In general, tunable ECLs provide many key advantages for DWDM systems and perform quite well against the technical requirements for optical-networking components.

There are several reasons why the MEMS ECL performs so well. Using MEMS to reduce the ECL geometry to the micro-optics level maintains the high-performance characteristics of traditional ECLs, while enabling excellent laser-frequency stability over ambient temperature, because the entire ECL fits on a thermo-electric cooler. Furthermore, the MEMS ECL solution easily meets the market form-factor requirements of small-module footprint and standard pinout configurations. By fabricating the MEMS actuator, precision optics, and laser-gain diode separately, each can be optimized without compromising between the different component functionalities. Using servo-control of actuators allows most assembly steps to be performed passively for pick-and-place automated volume manufacturing, reducing manufacturing costs, time-to-market, and time-to-volume. Furthermore, it provides in-use active optical alignment, allowing the system to self-correct in the event of environmental changes, shock, vibration, or creep. ECLs do have some potential disadvantages.
MEMS-based ECLs are potentially susceptible to shock and vibration. However, proper actuator designs and servo-controls can sufficiently mitigate these and other external influences. Also, it remains to be seen whether MEMS-based ECLs can be manufactured at costs competitive enough to be viable for the telecommunications markets.

Table 3 outlines the relative advantages and disadvantages of the laser technologies discussed. When considered against the requirements of various optical-networking applications, we see that different tunable-laser technologies are more suitable for different applications. It is clear that the ECL, enabled with DRIE MEMS technologies, is a strong candidate for many of these applications. The comparison is largely limited to the technical performance requirements for DWDM networking. It does not consider cost, availability, and reliability, which will be critical factors in deciding which laser technologies are employed in which applications, if at all. It remains to be seen which technologies will make it to market and survive.

Cindana Turkatte is vice president of marketing at iolon Inc. (San Jose, CA). She wishes to thank Dr. Jill Berger, manager of optical design; Dr. John Clark, CEO; Dr. John Grade, manager of micromachining design; Dr. Hal Jerman, director of micromachining design; Eric Selvik, product-line manager; and Dr. Yongwei Zhang, manager of optical integration, all of iolon, for their contributions to this article.

- M. Littman and H. Metcalf, "Spectrally narrow pulsed dye laser without beam expander," Applied Optics 17, pp. 2224-2227 (1978).
- D. Vakhshoori, P. Tayebati, Chih-Cheng Lu, M. Azimi, P. Wang, Jiang-Huai Zhou, and E. Canoglu, "2 mW CW single mode operation of a tunable 1550 nm vertical cavity surface emitting laser with 50 nm tuning range," Electronics Letters 35, pp. 1-2 (1999).
- B. Mason, G.A. Fish, S.P. DenBaars, and L.A. Coldren, "Widely tunable sampled grating DBR laser with integrated electroabsorption modulator," Photonics Technology Letters 11, pp. 638-640 (1999).
- J.D. Grade, "MEMS electrostatic actuators for optical switching applications," OFC 2001 technical paper.
- J.D. Berger, "Widely tunable external cavity diode laser based on a MEMS electrostatic rotary actuator," OFC 2001 technical paper.
- J. Hong, H. Kim, and T. Makino, "Enhanced wavelength tuning range in two-section complex coupled DFB lasers by alternating gain and loss coupling," IEEE Journal of Lightwave Technology 16, pp. 1323-1328 (1998).
- G. Gilder and C. Burger, "The Tunable Telecosm," Gilder Technology Report, December 2000, Vol. V, No. 12.
- RHK Telecommunications Industry Analysis, "The Role of Tunable Lasers in the Emerging Optical Network Infrastructure," Optical Components, September 2000.
<urn:uuid:ea3828d8-0c77-4acf-9968-08fcd9b42474>
CC-MAIN-2024-38
https://www.lightwaveonline.com/network-design/dwdm-roadm/article/16647323/tunable-laser-technologies-vs-optical-networking-requirements
2024-09-12T01:58:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00751.warc.gz
en
0.916626
3,482
2.796875
3
A security software developer is a modern type of technologist who creates computer programmes with the goal of protecting computer systems and data. A security software developer is someone who can function well as part of a team and communicate effectively both in writing and verbally. At the same time, they know how to build and deploy computer programmes that are focused on protection.

Cyber security software engineer
An easy way to think about security software development is that it entails combining technical software writing skills with security threat analysis and product development. It's important to have a keen eye for detail and a thorough understanding of the current threat landscape.

How to Become a Security Software Developer – simple ways!

Strong technology background – cyber security software engineer
A background in both cybersecurity and computer programming is required for this role. This education usually begins with college courses or relevant job experience in software development or software engineering. A background in dealing with security threats is also essential for this type of role. When attempting to conceptualise future product development issues and strategies, on-the-ground experience dealing with cybersecurity threats becomes extremely useful. Working with actual security threats through corporate cybersecurity departments, consultants, or within a security operations centre is the best way to gain experience and background in the sector.

Working with teams
The willingness to work in a team is another essential aspect of security software development. It's also essential to improve communication and teamwork skills while honing your technical skills and gaining experience with security threat detection and elimination. Building a professional network and gaining a reputation as a team player can also help you find potential job opportunities.

What is a security software developer?
Software developers must be innovative and goal-oriented, with a deep desire to create the best possible product despite numerous challenges and competing demands. Developers of security software must go one step further to ensure that the finished product is also safe and protected from external threats. This necessitates a creative mind to imagine current threats as well as threats that will emerge in the near future. Security app developers are often under time constraints: they must ensure that all of the project's objectives and components are met, and then make the product function as intended. One of the most difficult tasks for security software developers is to strike a balance between product speed and functionality on one side and security on the other. In other words, implementing security measures can have an impact on the product's user experience, so product development and engineering teams must make tradeoffs. It's a big undertaking.

Security software developer skills and experience
- A bachelor's degree in computer science, computer engineering, computer networking, electrical engineering, or mathematics is required.
- Software engineering or development experience. Prior coding and development expertise is often needed.
- Experience in the field of cybersecurity. Network security is a good area to work in, whether as a cybersecurity engineer or a cybersecurity consultant.
- Certifications and training are essential.
Security and software providers deliver a variety of certifications and training, such as Cisco CCIE, CISSP, or Microsoft Azure Security Engineer Associate. There are numerous others, and obtaining these certifications can result in a pay raise.
- Experience in testing and auditing. Experience testing and auditing applications for vulnerabilities (such as experience as a penetration tester) is another useful skill for security software developers.

What do security software developers do?
Two computer hackers attacked a Jeep Cherokee as it was driving down the highway in 2015 for a Wired magazine experiment. The attackers were able to switch on the air conditioner, then the windshield wipers, and finally cut the engine, which brought the Jeep to a halt. What's the intriguing part? The attack took place while they were thousands of miles away from the vehicle, demonstrating the strength and promise of a targeted cyber attack. The aim of this experiment was to demonstrate the weakness of internet-connected or internet-of-things (IoT) devices, even if they're as big and complex as a Jeep Cherokee.

Understanding and countering threats to connected devices is only one example of a promising career path for a security software developer. These computer interfaces, like many other IoT devices on the market, were created without much consideration for protection. Full functionality, time to market, and cost were the objectives; security was a last-minute consideration. For security software developers, there will be a growing number of opportunities in the coming years that will necessitate improving the security of software-based products and services. Before a product is released, security engineers must predict these types of threats and incorporate design elements to ensure its safety and security. Those defences are then put to the test to see whether they are effective and efficient.

Security software developer job description
Working directly with a software development team to provide specifications, testing, and design of software components to make them as safe as possible is common in security software development. Communication with a team of developers, designers, and engineers is critical to ensuring that potential risks are identified and properly mitigated. These developers are often called upon to create new engineering designs, and they are also tasked with creating entire security software products from the ground up. Testing and implementing innovative technologies and protection procedures to ensure the effectiveness of the product design is an essential part of the production process. These systems necessitate a thorough understanding of software design and coding, as well as working with architectures that have been hardened to prevent potential attacks. You must be imaginative and have an active mind in order to think of every possible scenario in this position, while also making sure security is completely incorporated into your reference architecture. These roles are also involved in the analysis of defences and countermeasures. As part of this, security software developers can participate in red team exercises to test products for proper defences. These kinds of confrontational situations can be thrilling. Security-focused developers will need to understand attack vectors and potential attack surfaces, as well as be able to determine whether these attack vectors are exploitable through testing and red team tactics.
Outlook for a security software developer
This is an exciting area with a lot of room for expansion. The cost of cybersecurity attacks in the United States was estimated to reach $5 trillion by 2020. At the same time, IoT devices are rapidly growing and expanding. In the United States, there are nearly 300,000 cybersecurity job openings, and by 2021 it was estimated that there would be a 3.5 million worker shortage. The potential for growth is enormous; however, the skill requirements are often demanding.

How much do security software developers make?
Compensation is very good for those willing to improve their skills and keep up with the industry's relentless change. Salaries range from $73,000 to $110,000 on average. Furthermore, these roles often provide substantial incentives as well as other forms of compensation such as benefits, commission, and profit-sharing. Startup companies can also provide additional incentives in the form of equity participation.
<urn:uuid:8cc5fc21-da37-49d6-a291-c4128cb9bf36>
CC-MAIN-2024-38
https://cybersguards.com/how-to-become-a-security-software-developer-a-complete-career-guide/
2024-09-14T13:32:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00551.warc.gz
en
0.956065
1,456
2.625
3
The rapid growth of urbanization presents numerous challenges for cities across the globe. As populations swell, cities face increasing demands on resources, infrastructure, and services. To tackle these challenges and build a sustainable future, cities are turning towards innovative technologies and strategies, giving rise to "Smart Cities." Smart Cities leverage the power of the Internet of Things (IoT) and Artificial Intelligence (AI) to optimize urban planning, enhance citizen experiences, and promote environmental sustainability. This blog will explain the transformative potential of IoT, AI, and sustainable urban planning in shaping the Smart Cities of tomorrow.

Understanding Smart Cities
Smart Cities represent a transformative approach to urban living, harnessing the power of technology and data to create more efficient, sustainable, and livable urban environments. At their core, Smart Cities leverage the Internet of Things (IoT) concept, where everyday objects and infrastructure are interconnected, gathering and exchanging data through embedded sensors and devices. This constant flow of information enables city authorities to make data-driven decisions, optimize resource allocation, and enhance their residents' overall quality of life.

The key to understanding Smart Cities lies in their ability to intelligently manage and respond to the challenges posed by rapid urbanization. As more people migrate to cities seeking better opportunities, there is an increasing strain on resources, transportation systems, energy, and public services. By integrating IoT technology, Smart Cities can monitor and analyze real-time data on traffic flow, energy consumption, air quality, waste management, and more. This wealth of information empowers city planners to identify inefficiencies, predict potential issues, and implement targeted solutions.

In addition to IoT, Smart Cities heavily rely on Artificial Intelligence (AI) to process and analyze the vast amounts of data generated. AI algorithms can discern patterns and trends from the data, enabling predictive maintenance of critical infrastructure, optimized traffic management, and personalized services for citizens. For instance, AI-powered traffic systems can dynamically adjust traffic signals to alleviate congestion, while AI-driven public services can offer tailored suggestions based on individual preferences.

An integral aspect of Smart Cities is their commitment to sustainable urban planning. By embracing eco-friendly practices, such as mixed-use zoning, green spaces, renewable energy integration, and efficient public transportation, these cities strive to reduce their environmental impact and build resilience against future challenges like climate change.

In conclusion, Smart Cities epitomize the fusion of technology and urban living, with IoT and AI as catalysts for innovation and progress. By leveraging data-driven insights and sustainable urban planning, Smart Cities aim to create a more connected, inclusive, and environmentally conscious future for their citizens. Understanding Smart Cities becomes crucial in shaping a brighter and more sustainable tomorrow as the world continues to urbanize.

The Role of IoT in Smart Cities
In the quest to build smarter, more efficient urban environments, the Internet of Things (IoT) plays a pivotal role in shaping the cities of tomorrow.
IoT refers to the interconnection of everyday devices, infrastructure, and objects through embedded sensors and communication technology, allowing them to collect and exchange data. In the context of Smart Cities, IoT acts as a foundational framework that revolutionizes urban living by enhancing data-driven decision-making, optimizing resource management, and improving residents' overall quality of life.

One of the primary functions of IoT in Smart Cities is data collection. IoT devices, such as sensors and cameras, are strategically deployed throughout the urban landscape, gathering real-time information on various aspects of city life. These devices monitor and measure parameters like air quality, temperature, traffic flow, waste levels, energy consumption, and more. The data collected forms a vast network of information, offering valuable insights into urban patterns, trends, and challenges. City authorities can access this information through IoT's seamless data-sharing capabilities and make informed decisions.

This data-driven decision-making allows for a more proactive approach to urban management. For instance, traffic authorities can optimize traffic flow by analyzing real-time data to adjust traffic signal timings, reducing congestion and travel time for commuters. Waste management systems can also be fine-tuned based on fill levels in rubbish bins, optimizing garbage collection routes and reducing operational costs.

IoT's impact extends beyond resource optimization and operational efficiency; it also empowers citizens to actively participate in shaping their cities. Using IoT-enabled applications, residents can receive real-time updates on public transport schedules, air quality indexes, parking availability, and more. This enhanced connectivity fosters a sense of engagement and empowerment, ultimately leading to improved citizen experiences.

Furthermore, IoT's potential for predictive analysis drives preventive maintenance efforts. For instance, sensors on critical infrastructure, such as bridges and utilities, can continuously monitor structural health and identify signs of wear and tear. By detecting potential issues early on, city authorities can take proactive measures, ensuring the safety and longevity of vital urban assets.

However, alongside these numerous benefits, IoT implementation in Smart Cities also brings challenges that must be addressed. Security and privacy concerns arise due to the sheer volume of collected data. Safeguarding citizen data from cyber threats becomes paramount, and transparent data handling policies are essential to earn public trust.

AI's Impact on Smart Cities
AI is the driving force behind making sense of the massive amounts of data IoT devices generate. With advanced machine learning algorithms, AI can analyze complex datasets, identify patterns, and make predictions. In the context of Smart Cities, AI enables predictive maintenance of critical infrastructure, efficient waste management, and personalized city services.

Predictive maintenance relies on AI algorithms to anticipate infrastructure maintenance needs, such as bridges, roads, and utility systems. This proactive approach reduces downtime and extends the lifespan of assets, leading to cost savings and enhanced safety. Moreover, AI-powered waste management systems can optimize waste collection routes based on real-time data on fill levels in rubbish bins. This reduces unnecessary collection trips, saves fuel, and minimizes emissions, promoting a cleaner and healthier urban environment.
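As a minimal sketch of how the fill-level data just described might drive collection routes, the Python fragment below picks the bins that exceed a threshold and orders them greedily by distance. Every field name, the 80% threshold, the coordinates, and the straight-line distance model are assumptions made purely for illustration; a real deployment would use the city's own data schema and road-network routing.

import math

# Each record is assumed to come from an IoT fill-level sensor on one bin.
bins = [
    {"id": "bin-01", "lat": 40.4168, "lon": -3.7038, "fill_pct": 91},
    {"id": "bin-02", "lat": 40.4203, "lon": -3.7058, "fill_pct": 35},
    {"id": "bin-03", "lat": 40.4251, "lon": -3.6920, "fill_pct": 84},
]

FILL_THRESHOLD = 80  # collect only bins above this fill level (assumed policy)

def distance(a, b):
    # Rough planar distance; a real planner would use road-network distances.
    return math.hypot(a["lat"] - b["lat"], a["lon"] - b["lon"])

def plan_route(depot, bins, threshold=FILL_THRESHOLD):
    """Greedy nearest-neighbour route over the bins that need collection."""
    to_visit = [b for b in bins if b["fill_pct"] >= threshold]
    route, current = [], depot
    while to_visit:
        nxt = min(to_visit, key=lambda b: distance(current, b))
        route.append(nxt["id"])
        to_visit.remove(nxt)
        current = nxt
    return route

depot = {"lat": 40.4000, "lon": -3.7000}
print(plan_route(depot, bins))  # expected: ['bin-01', 'bin-03']

Even a toy planner like this shows the basic payoff: the half-empty bin is skipped, so trips, fuel, and emissions drop, exactly as described above.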
Furthermore, AI-driven city services can cater to individual needs by analyzing citizen data. For example, AI can optimize public transportation routes based on commuting patterns, improve healthcare services by predicting disease outbreaks, and even offer personalized recommendations for cultural and recreational activities.

Sustainable Urban Planning
Sustainable urban planning is at the heart of building the Smart Cities of tomorrow. This approach balances economic, social, and environmental aspects to create resilient and livable cities for future generations. Some key principles of sustainable urban planning include:
- Mixed-Use Zoning: Creating neighborhoods with a mix of residential, commercial, and recreational spaces to reduce the need for long commutes, lower traffic congestion, and promote a sense of community.
- Green Spaces: Designing cities with ample green spaces and parks to improve air quality, encourage physical activity, and support biodiversity.
- Renewable Energy Integration: Emphasizing the adoption of renewable energy sources like solar and wind power to reduce greenhouse gas emissions and combat climate change.
- Efficient Public Transportation: Investing in public transportation systems to reduce private vehicle usage and promote eco-friendly mobility options.
- Smart Building Design: Encouraging energy-efficient building design, such as green roofs, smart lighting, and efficient insulation, to minimize energy consumption.

Benefits and Challenges of Smart Cities
The adoption of IoT, AI, and sustainable urban planning brings numerous benefits for Smart Cities:
- Improved Quality of Life: Smart Cities enhance citizen experiences by providing efficient services, better mobility options, and increased safety.
- Resource Optimization: IoT and AI enable optimized resource allocation, leading to cost savings and reduced environmental impact.
- Enhanced Connectivity: IoT devices create a seamless and connected urban environment, promoting digital inclusion and accessibility.
- Environmental Sustainability: Sustainable urban planning and the integration of renewable energy sources contribute to reduced carbon emissions and a greener future.

However, Smart Cities also face certain challenges:
- Data Privacy and Security: The abundance of data collected raises concerns about privacy and data security. Robust measures must be in place to protect citizen data from potential cyber threats.
- Digital Divide: Ensuring equitable access to technology and digital services is essential to avoid creating a digital divide between different socioeconomic groups.
- Infrastructure Investment: The development of Smart Cities requires significant investments in IoT devices, AI systems, and sustainable infrastructure, which may be a challenge for some cities.
- Compatibility and Interoperability: Ensuring compatibility and interoperability between IoT devices and AI systems is crucial for seamless data sharing and integration.

Smart Cities of tomorrow promise to create sustainable, efficient, and livable urban environments for billions of people. By harnessing the convergence of IoT and AI, these cities can make data-driven decisions, optimizing resource usage and enhancing city services. Integrating innovative technologies and sustainable urban planning principles, Smart Cities aim to mitigate environmental impact, improve citizen experiences, and foster innovation. However, Smart Cities also face challenges that need addressing.
Ensuring data privacy, promoting digital inclusion, and making the necessary infrastructure investments are crucial to ensuring that the benefits of Smart Cities are accessible to all. Robust measures must safeguard citizen data from potential cyber threats, and equitable access to technology and digital services is essential to avoid creating a digital divide. Cities must also invest in IoT devices, AI systems, and sustainable infrastructure to support the transformation towards Smart Cities.

In embracing modern technologies, custom application development services, and sustainable practices, cities pave the way for a brighter, more connected future. By prioritizing the well-being of citizens and the environment, Smart Cities can serve as models of urban development, offering a blueprint for a more resilient and innovative world.
<urn:uuid:9c94a74e-29a7-4c10-a420-1573e863b91f>
CC-MAIN-2024-38
https://cybersnowden.com/smart-cities-of-tomorrow-iot-ai-and-sustainable/
2024-09-14T13:32:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00551.warc.gz
en
0.902829
1,968
3.421875
3
An Essential Guide to Unleashing the Power of Generative AI

What is Generative AI?
Generative AI is a powerful branch of artificial intelligence that enables computers to learn patterns from existing data and then use that knowledge to create new data. In simple terms, it is the technology behind machines that can create original content, such as images, music, or even entire stories. Generative AI is experiencing unprecedented rates of adoption in just about every industry, and savvy tech companies are quickly rolling out support services for it. My company, for instance, is launching Foundry for Generative AI by Rackspace (FAIR™) to help customers responsibly adopt and utilize generative AI.

The technology behind generative AI involves training a machine learning model on a large dataset of real-world content, which the model uses to learn patterns and generate new content based on those patterns. This process allows generative AI to create highly realistic and convincing content that can be used in a variety of applications, from generating realistic-looking images for video games to creating customized text for marketing campaigns. While generative AI still has limitations and challenges to overcome, it has the potential to revolutionize the way we create and consume content in the future.

The Profound Impact of Generative AI on Text Generation
While generative AI has made significant strides in generating various types of data, the Language Model, particularly the Large Language Model (LLM), has made a significant impact in the field of AI. The LLM is a type of generative AI that specializes in generating natural language, making it particularly useful for tasks such as language translation, summarization, and even creative writing. One of the most significant breakthroughs in the LLM's development was the introduction of GPT-4, which has the ability to generate human-like text with a high degree of accuracy and coherence.

The LLM's ability to generate high-quality, natural language has a wide range of potential applications, including chatbots, virtual assistants, and even content creation for social media and marketing campaigns. Additionally, the LLM has shown promising results in tasks such as language translation, where it can quickly and accurately translate text from one language to another. The LLM's success is due in part to its ability to learn from vast amounts of data and use that knowledge to generate new and coherent language. However, it is still not perfect and faces challenges such as bias and ethical concerns surrounding its use. Nonetheless, the potential of the LLM to revolutionize the way we communicate and interact with technology is immense and exciting.

Foundational Models in Generative AI and LLM Integration
Foundational models are the building blocks of generative artificial intelligence (AI) systems, focusing on specific tasks and aspects of AI. They provide the groundwork for the development of advanced techniques, such as LLMs. By capturing patterns, these models generate coherent content in specific domains, contributing to the advancement of generative AI. Through the exploration and refinement of foundational models, researchers gain insights into fundamental concepts, algorithms and architectures. They also help identify challenges and limitations in content generation, leading to the development of more sophisticated models.
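One way to make the connection between these foundational models and the LLMs discussed below concrete is the next-token objective most of them share. The factorization below is standard background, not an equation given in the original article:

\[
p_{\theta}(x_1, \dots, x_T) = \prod_{t=1}^{T} p_{\theta}(x_t \mid x_1, \dots, x_{t-1}),
\]

Training maximizes the log-likelihood of this product over a large text corpus, and generation then samples one token at a time from the learned conditional distribution, which is why a single pre-trained model can be steered toward translation, summarization, dialogue, or creative writing.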
LLMs, on the other hand, leverage the knowledge and techniques acquired from foundational models to generate natural language with a high degree of accuracy and coherence. LLMs specialize in generating human-like text, making them particularly useful for tasks such as language translation, summarization, creative writing and more. The integration of LLMs within generative AI systems has revolutionized various applications, including chatbots, virtual assistants, content creation and even social media marketing campaigns.

The synergy between foundational models and LLMs has significantly propelled the advancement of generative AI. Foundational models provide the groundwork and understanding necessary to develop LLMs that can generate highly realistic and convincing content. The ability of LLMs to learn from vast amounts of data and generate new and coherent language has opened up a wide range of possibilities in terms of applications and use cases. From improving customer interactions to streamlining operations, LLMs have become versatile tools for businesses and organizations looking to harness the power of generative AI.

Revolutionizing Generative AI: The Groundbreaking Leap of "Attention Is All You Need"
"Attention Is All You Need" by Vaswani et al. (2017) is a seminal work in the field of natural language processing and generative AI. The authors introduced a revolutionary neural network architecture called the Transformer. The Transformer model is based on the concept of self-attention, which allows it to weigh the importance of different parts of the input sequence when generating output. It consists of an encoder-decoder architecture, where both the encoder and decoder are composed of multiple layers of self-attention and feed-forward neural networks. The self-attention mechanism allows the model to capture relationships between different words in the input sequence, considering the context and dependencies between them.
- The Transformer model is built around a self-attention mechanism that allows it to weigh the importance of different parts of the input sequence when generating output. This mechanism enables the model to capture long-range dependencies in the input sequence, which is essential in natural language processing tasks where context and context-based relationships between words are critical.
- Compared to previous neural network architectures such as LSTMs, the Transformer model does not require recurrent connections, making it faster to train and easier to parallelize. Additionally, the Transformer model can handle longer sequences of input and output with better accuracy.
- The Transformer model has become a widely used architecture in many natural language processing applications, including language translation, text summarization and language modeling.
- The impact of the Transformer model on the field of generative AI has been significant, setting a new standard for the state of the art in natural language processing.

There are several Large Language Models (LLMs) available that are designed to generate natural language. Some of the best-known models are:
- GPT-4 (Generative Pre-trained Transformer) models by OpenAI: GPT-4, the latest language model by OpenAI, has an undisclosed parameter count, with outside estimates ranging into the trillions of parameters. It boasts advanced features such as multimodal data handling, improved task performance, generating coherent texts and showcasing human-like intelligence.
- LLaMA by Meta: It is a collection of foundation language models with varying parameter sizes ranging from 7B to 65B.
These models are trained on publicly available datasets containing trillions of tokens, demonstrating that state-of-the-art models can be trained without relying on proprietary and inaccessible datasets. The LLaMA-13B model outperforms GPT-3 (175B) on most benchmarks, and the LLaMA-65B model is competitive with other top models like Chinchilla-70B and PaLM-540B. All models are available to the research community.
- PaLM-E and PaLM 2 by Google: PaLM-E is an innovative language model that combines real-world sensor inputs with language understanding. With its integration of visual and textual data, PaLM-E excels in embodied reasoning tasks and achieves state-of-the-art performance on OK-VQA. Through joint training across multiple domains, PaLM-E maintains language capabilities while establishing a strong connection between words and percepts. PaLM 2 outperforms its predecessor, offering faster and more efficient inference, allowing for broader deployment and natural-paced interaction. PaLM 2 showcases robust reasoning abilities, surpasses PaLM on BIG-Bench and other reasoning tasks, and demonstrates responsible AI practices with inference-time control over toxicity.
- BERT (Bidirectional Encoder Representations from Transformers) by Google: A pre-trained LLM that is used for natural language processing tasks such as language understanding, question answering and sentiment analysis.
- T5 (Text-to-Text Transfer Transformer) by Google: A pre-trained LLM that is designed to be highly flexible and can be fine-tuned for a wide range of natural language tasks, including language translation, summarization and question answering.
- RoBERTa (Robustly Optimized BERT approach) by Facebook: A pre-trained LLM that is designed to improve upon the performance of BERT on various natural language processing tasks, including text classification, question answering and named entity recognition.
- XLNet (eXtreme Language understanding Network) by Carnegie Mellon University and Google: A pre-trained LLM that utilizes permutation-based training to improve its understanding of the relationships between different words in a sentence, used for natural language processing tasks such as language modeling and question answering.

These LLMs have significantly improved the ability of machines to generate natural language and have a wide range of applications in fields such as chatbots, virtual assistants and content creation.

Why GPT Models Have Made a Huge Impact
One of the reasons why GPT models have made a significant impact in the field of generative AI is their accessibility to the public. Unlike previous LLMs, which required data scientists to have the necessary hardware and expertise to run them, GPT can be accessed through OpenAI's API, making it easy for developers, researchers and even hobbyists to experiment with it. This accessibility has led to a surge in creativity and innovation, with people using GPT to generate a wide range of content, from creative writing to chatbots and virtual assistants. This democratization of generative AI has also sparked important discussions about the ethical and societal implications of such technology, and it has brought attention to the need for responsible development and use of AI.

What is ChatGPT?
ChatGPT is a large language model trained by OpenAI based on the GPT-3.5 architecture.
It is designed to generate natural language responses to user inputs and can be used for a wide range of applications, such as chatbots, virtual assistants and content creation. One of the advantages of ChatGPT is its ability to understand and respond to natural language inputs in a way that feels more human-like than traditional chatbots or rule-based systems. This is due to its advanced natural language processing capabilities, which allow it to understand context and generate relevant responses based on the input it receives. Another advantage of ChatGPT is its flexibility and adaptability. It can be fine-tuned for specific tasks or industries, such as customer service or ecommerce, and can be trained on specific datasets to improve its performance in those areas. This makes it a versatile tool for businesses and organizations looking to improve their customer interactions or streamline their operations.

Industry Use Cases
- Healthcare: Nuance Communications is leveraging GPT technology in the healthcare sector through Nuance Mix Answers, a Copilot feature. By incorporating GPT into their conversational AI platform, Nuance Mix, they enhance the capabilities of digital and voice bots, enabling them to handle a broader range of customer questions and provide accurate, meaningful responses. This integration increases self-service levels, improves customer experience, and reduces the need for live contact center agents, thereby driving operational efficiency and cost savings in healthcare customer engagement.
- Legal: TS2 Space harnesses the power of GPT-4 to revolutionize legal research. By leveraging GPT-4's advanced text-generation capabilities, TS2 Space streamlines the legal research process, automates document generation and empowers lawyers to provide more efficient and accurate legal services.
- Gaming: ROBLOX is utilizing generative AI powered by GPT models to enable users, regardless of coding experience, to create and modify in-game objects through natural language input. This innovative approach simplifies the process of building and altering game elements, making game development more accessible to a wide range of users, from individual creators to small teams.

One important legal aspect to consider is that content generated by LLMs or other generative AI tools may not be subject to copyright protection in the same way as content created by a human author. Using LLMs to generate content for commercial purposes could potentially lead to copyright infringement if the generated content is too similar to existing copyrighted materials. However, using LLMs to generate content for non-commercial purposes such as internal research or machine learning training data is less likely to be problematic from a legal standpoint. Nonetheless, it is important to be aware of the legal implications when using LLMs or other generative AI tools.

In conclusion, the remarkable progress in the field of generative AI, exemplified by the explosion of ChatGPT, has propelled this technology into the mainstream. With the advent of powerful models like ChatGPT and the advancements in foundational models and LLMs, we are witnessing a generational leap in artificial intelligence within a remarkably short span of time. The ability of machines to generate natural language responses and create content with human-like accuracy and coherence is revolutionizing various industries, from customer service to content creation.
However, as we continue to harness the potential of generative AI, it is crucial to address ethical considerations and legal implications surrounding the use of these technologies. Responsible development and use of AI will be instrumental in ensuring that the benefits of generative AI are realized while mitigating potential risks. As we navigate this transformative era, the possibilities for generative AI are vast, and it is an exciting time to witness the rapid evolution of artificial intelligence.

Our generative AI services are part of the Foundry for AI by Rackspace Technology. Foundry for AI by Rackspace (FAIR™) is a groundbreaking global practice dedicated to accelerating the secure, responsible, and sustainable adoption of generative AI solutions across industries. Follow FAIR on LinkedIn.
<urn:uuid:fbad20f2-12f5-423f-bfff-5e9e6767505f>
CC-MAIN-2024-38
https://www.gbiimpact.com/news/an-essential-guide-to-unleashing-the-power-of-generative-ai
2024-09-14T15:20:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00551.warc.gz
en
0.930439
2,885
3.3125
3
A lack of situational awareness is hurting the ability of companies and the public sector to adequately protect sensitive information. Intellectual property, sensitive business data, personally identifiable information and infrastructure access are at risk. Situational awareness in the context of cybersecurity involves the following three areas:
- Effective management of data networks, IT assets and system configurations
- Awareness of threats, including the tracking of incidents, the identification of internal suspicious behaviors and knowledge of how external threats operate
- Awareness of the organization's mission, the critical dependencies of IT operations and how specific failures affect organization operations

Most organizations have some aspects of situational awareness covered, but many don't take a comprehensive approach that allows them to address data security in the face of new and sophisticated threats. They don't have a clear and objective assessment of their data security policies and react to security incidents with a "more of the same", status-quo, protect-my-perimeter approach. As a result, they construct a better, more impenetrable perimeter and implement additional login procedures such as two-factor authentication, hoping to create more protection for their data. But does two-factor authentication provide the data protection that today's threats demand?

Authentication procedures are perimeter security processes that limit access to the data and do not protect the data itself. If you want to increase access security, you might opt for two-factor authentication. Authentication can be based on what you know, what you have and what you are. For example, a password is something you know, a mobile phone or a security token is something you have, and a fingerprint is something you are. Two-factor authentication uses elements from two of these authentication classes to increase security. Usually, the content of one of the elements changes with each use. For instance, a user might have a password and, when he logs in, a unique code is sent to his mobile phone. He has to enter the code to complete his authentication. This process increases security against old-style hacking that uses a stolen database of usernames and passwords, but this is where situational awareness comes in. Two-factor authentication does not protect you in today's situation, where attacks are more sophisticated, or when the threats come from your cloud service providers, the vendors that you authorized to access your applications, or government surveillance.

"Two-factor authentication isn't our savior. It won't defend against phishing. It's not going to prevent identity theft. It's not going to secure online accounts from fraudulent transactions." - Bruce Schneier

One type of attack common today is phishing, defined by the Anti-Phishing Working Group (APWG) as a "...mechanism employing both social engineering and technical subterfuge to steal consumers' identity data..." Consumers clicking on links in fake emails are taken to counterfeit websites where they log in, disclosing their credentials to the hackers. Phishing attacks are increasing rapidly; for the fourth quarter of 2016, APWG reported 1,220,523 phishing attacks, a 65% increase from 2015. Such phishing attacks combined with man-in-the-middle hacking can completely defeat two-factor authentication. For example, a phishing email entices a consumer to click on a link to a fake website to verify his password.
The consumer enters his username and password, which the hacker sees and enters on the real website. The consumer has two-factor authentication enabled, and so the real website sends a code to the consumer's mobile phone. The consumer enters the code on the fake website and the hacker, in turn, enters it on the real one. The consumer is shown a screen thanking him for verifying his password and does not realize he has been hacked. The hacker has access to the consumer's account and can carry out transactions and change the security settings. Vendors advocate for two-factor authentication as secure, but an independent situational analysis would show that attacks such as the one detailed above can penetrate such security.

Data is subject to threats from both outside and inside the organization perimeter. While credentials such as username and password combinations or two-factor authentication are effective in reducing unauthorized access to resources (e.g. network, segment, application), they are not immune to security failures. Theft of credentials and phishing emails linking to fake websites can defeat such strategies. Internal threats, such as insiders with privileged access (administrators and cloud service providers), are already inside the perimeter and can access sensitive information beyond the established user authorizations. At the same time, government agencies can access the same information through blind subpoenas to the cloud providers and the use of other techniques.

We must address the above scenarios. Data needs an extra layer of protection when credentials-based authentication and perimeter security fail or when insiders try to access private data. Rather than doing more of the same with increasingly complex perimeter and login security measures, a data-centric approach identifies sensitive data and masks it so it can't be read. When perimeter security fails and login credentials are compromised, data-centric security keeps data safe because unauthorized individuals will not be able to read it.

Dynamic Data Masking (DDM) Protects Your Data
Perimeter security and login procedures limit access to data rather than protecting the data. If this access is compromised, through hacking or credential theft, your data is vulnerable. You may even have compromised secure access yourself by agreeing to give various apps or social media sites access to your emails or your data. You need an additional security layer just for your sensitive data.

By utilizing DDM, data is masked as you generate it on your end device. As you type, the application changes individual characters to mask sensitive information while the physical structure of the data remains unchanged. This technique is easy to implement, and with the data structure unchanged, business processes such as SaaS, Microsoft 365 and Google Apps can continue to function as before. For instance, with CloudMask's patented DDM, sensitive data is illegible to unauthorized individuals. When hackers gain access to your data, the masked text is useless. The application provides end-to-end data security independent of other security measures. When you authorize someone to see your data, it remains masked until the authorized person reads it on his computer or mobile device. The additional layer of security provided by CloudMask gives you complete control of your data and keeps it safe. With CloudMask, only your authorized parties can decrypt and see your data.
Not hackers with your valid password, not cloud providers, not government agencies, and not even CloudMask itself can see your protected data. Twenty-six government cybersecurity agencies around the world back these claims. Watch our video and demo at www.vimeo.com/cloudmask
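The masking idea described above is easier to see with a toy example. The Python sketch below only illustrates format-preserving substitution under an assumed per-user secret; it is not CloudMask's actual algorithm, which the article does not disclose, and a production system would use reversible, keyed encryption so that authorized parties can unmask the data.

import random
import string

def mask(text, seed):
    """Replace letters with letters and digits with digits, keeping
    punctuation and length intact so downstream apps still parse the field."""
    rng = random.Random(seed)  # the seed stands in for a per-user secret key
    out = []
    for ch in text:
        if ch.isdigit():
            out.append(rng.choice(string.digits))
        elif ch.isalpha():
            pool = string.ascii_uppercase if ch.isupper() else string.ascii_lowercase
            out.append(rng.choice(pool))
        else:
            out.append(ch)  # keep separators so the data structure is unchanged
    return "".join(out)

print(mask("Jane Doe, SIN 046-454-286", seed=1234))
# Output keeps the same shape (capitalization, spaces, hyphens, field lengths),
# but the letters and digits themselves are unreadable to anyone without the key.

Because the masked value still looks like a name and a number to the application, cloud services can store, index, and process it unchanged, which is the property the article emphasizes.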
<urn:uuid:63ee4d84-3aeb-4bb9-94c8-fbb980a9da29>
CC-MAIN-2024-38
https://www.cloudmask.com/blog/can-two-factor-authentication-keep-your-companys-data-safe
2024-09-15T21:13:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00451.warc.gz
en
0.922589
1,391
2.578125
3
Supervisory Control and Data Acquisition systems, also known as SCADA systems, can be found at the heart of industrial processes in many different industries. Since SCADA systems allow for managing and controlling critical equipment and processes, weaknesses that are not addressed can cause devastating consequences. As a trusted SCADA solutions provider, we know how important it is to our clients to make sure that their system is properly protected. In order to help you as well, we'll take a look at some of the most common vulnerabilities that can be found in SCADA systems and how you can solve them.

If you don't work with SCADA yet, then it's important for you to start at the beginning. Getting to know the elements of SCADA systems and their roles will allow you to see where vulnerabilities are likely to exist. SCADA systems are Industrial Control Systems (ICS) that gather information from equipment and industrial processes and provide supervisory-level control over them. These devices and processes are usually located over a wide geographical area. SCADA systems are based on a master-slave structure, meaning there is a central master station and slave devices. These slave devices can be Programmable Logic Controllers (PLCs) and Remote Terminal Units (RTUs). Both PLCs and RTUs are deployed at the sites where the processes happen and where the equipment is located. They have sensors, receive commands from the master station, and send information back to it. This gathered information will help you and your network technicians make informed decisions based on real-time data.

That's where the master station comes into play. Also called Human Machine Interfaces (HMIs), central master stations present the multiple functions and information collected from your sites by PLCs and RTUs, giving you an understandable display to review and control. Because they have many functions, SCADA systems can be found in many different settings and industries.

SCADA can bring many benefits to many organizations; however, its vulnerabilities and potential threats are a problem for network technicians. Not only can these issues affect your bottom line, they can also affect end users. Finding where SCADA vulnerabilities can exist helps technicians evaluate which mitigation methods will be better suited to prevent and neutralize attacks. This is easier said than done, since SCADA comprises multiple devices, sensors, and software, which only means more to keep in mind and protect. Here are the main elements where threats can be found:

HMIs offer you an interface to manage and control your multiple devices and processes. They help you make informed decisions that can also be carried out through this display. Because of their capabilities and functions within the SCADA system, HMIs are an ideal target for malicious actors trying to take control of your processes and steal critical information.

Many SCADA systems nowadays give technicians the option to access the monitoring system through web interfaces. They allow you to remotely connect to your system through the internet to help you control your PLCs and RTUs. This can represent a big threat, though.
Hacking applications through the internet is a common issue because computers (and phones) are riddled with vulnerabilities and sit on networks that are easily penetrated. The internet is the first place hackers go, simply because anyone can access it. If they find a breach in your system, hackers can steal your sensitive information, take control of your industrial processes, or even lead your techs to make wrong decisions that will negatively impact your network.

Protocols such as SNMP, DNP3, and Modbus are the mechanism responsible for the communication between your SCADA devices. In other words, a protocol is the language your devices use to talk to each other. It's important to keep in mind, though, that some protocols lack the security features needed to defend your SCADA system from malicious intent. Hackers can take advantage of vulnerabilities in communication protocols to harm your systems by stealing or modifying the information sent from your RTUs or by intentionally causing equipment to malfunction.

There are other elements in place to help individual SCADA components stay connected, active and working in real time, such as individual sensors. Some of these elements might not be well equipped to deal with threats, because many of these components are not used for SCADA systems alone; they are also part of other technologies and systems.

There have been many previous attacks against industrial facilities that have brought to light the impact of vulnerabilities on SCADA systems. The most well-known attack was carried out by the Stuxnet malware in 2010. It was a true wake-up call because it was the first known threat to specifically target SCADA systems with the intent to control networks. In 2016, another malware known as CRASHOVERRIDE, or Industroyer, became the first malware designed to attack electric grids, causing power outages in Ukraine. In 2018, the City of Atlanta and the Colorado Department of Transportation were hit with ransomware called SamSam. It caused outages, loss of important data, and also loss of money through extortion.

Cyberattacks like those continue to happen to this day. In fact, interest in SCADA systems and industrial equipment is becoming more common as more remote monitoring systems can be found online. The potential for extortion by threatening organizations with downtime also attracts many hackers. What raises the urgency of fixing vulnerabilities in SCADA systems even more is the possibility of future cyberattacks with worse consequences than what has happened in the past. Such impacts are relatively easy for cybercriminal groups to cause, whatever their motivation, and have to be avoided by organizations and government agencies.

Fortunately, most of the weak points that we talked about before have already been addressed by many vendors. At the end of the day, the battle against SCADA attacks means that you need to always be on the watch for new vulnerabilities and address them as soon as possible. Aside from managing potential threats, you should also maintain security measures that will be able to defend your system against cyber attacks, especially if you work with critical services such as public safety communications and energy. There are some best practices that organizations can put into place in order to secure their systems.

Many older SCADA systems have little to no security features.
If that's your case, it's important to check with your vendor for firmware upgrades that will provide security features. Newer SCADA devices are shipped with basic security features, which are usually disabled to ease the installation process. Before buying your system, make sure to determine whether security features are present. Also, set all security features to provide the maximum level of protection possible.

In order to be able to respond effectively to cyberattacks, it's critical to have an intrusion strategy planned. This way your network technicians will be notified about malicious activity coming from internal or external sources. Intrusion detection system monitoring is essential 24x7, and alerts can easily be sent out via email or text messages. Also, incident response procedures must be in place to allow an efficient response to any security breach. Make sure your SCADA system allows you to log all daily activities. Constantly monitoring and managing who has authorized access to certain capabilities of your SCADA system can help reduce open doors to both cyber and physical threats.

Conduct a thorough risk analysis to evaluate the risk and necessity of each connection to your SCADA network. Take a look at how well these connections are protected, and identify and assess every type of connection. Knowing all points of data entry and exit is critical to identifying all potential access points for malware and hacks.

Even though all of your company's networks are important and should be equally protected, a good practice is to isolate the SCADA network from other network connections as much as possible. Any connection to another network introduces security risks. Even though direct connections with other networks usually allow important information to be passed efficiently and conveniently, insecure or unprotected connections are simply not worth the risk. Isolation of your SCADA system must be an important goal to provide the needed protection.

Some SCADA systems use proprietary protocols for communication between RTUs and the master station. However, the security of your systems is often based uniquely on how secure your protocols are, and the bad news is that proprietary protocols don't provide much real security. So, don't depend on proprietary protocols or factory default configuration settings to protect your network. Also, make sure your vendor discloses any backdoors or vendor interfaces to your SCADA systems, and expect them to provide a system that is capable of being fully secured.

Come up with a disaster recovery plan that allows for quick recovery from any emergency (including, but not limited to, cyberattacks). System backups are an important part of any plan and make a rapid reconstruction of the network possible. Remember to make sure your personnel are familiar with your plan so they know which actions should be taken during a cybersecurity incident.

It takes a carefully designed combination of security policies and effective controls to properly secure your SCADA system. However, you can't defend your system if you don't have capable devices. Your devices should be equipped to give you the best security features possible and to meet all your network needs. Most of the time, an off-the-shelf device will not be able to provide you with both. You need a perfect-fit solution. That's what we have been doing for the last 30 years.
We custom design devices that meet all of your specific requirements, while also making sure we can give you the highest level of security possible. If you would like to know more about how to fully protect your SCADA system without losing capabilities, just talk to us - we can help you.
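As a concrete illustration of the logging and 24x7 alerting practices described above, here is a minimal sketch of a script that scans a SCADA event log for suspicious entries and emails an alert. The log path, keywords, and SMTP details are assumptions for the example, not a description of any specific vendor's product.

    import re
    import smtplib
    from email.message import EmailMessage

    # Keywords that would be tuned to your own equipment and log format (assumed here).
    SUSPICIOUS = re.compile(r"(unauthorized|failed login|config change|setpoint override)", re.I)

    def scan_log(path):
        """Return log lines that match the suspicious-activity patterns."""
        with open(path, encoding="utf-8", errors="replace") as handle:
            return [line.rstrip() for line in handle if SUSPICIOUS.search(line)]

    def send_alert(findings, smtp_host="mail.example.com", to_addr="noc@example.com"):
        """Email the on-call technician; swap in SMS or a ticketing API as needed."""
        msg = EmailMessage()
        msg["Subject"] = f"SCADA alert: {len(findings)} suspicious log entries"
        msg["From"] = "scada-monitor@example.com"
        msg["To"] = to_addr
        msg.set_content("\n".join(findings))
        with smtplib.SMTP(smtp_host) as server:
            server.send_message(msg)

    if __name__ == "__main__":
        hits = scan_log("/var/log/scada/events.log")
        if hits:
            send_alert(hits)

In practice a check like this would run continuously from a scheduler and feed a documented incident response procedure rather than a single mailbox.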
<urn:uuid:1eeedf29-29f5-4264-bb20-dffe86e9bf20>
CC-MAIN-2024-38
https://www.dpstele.com/blog/where-can-vulnerabilities-be-found-in-scada-systems.php
2024-09-18T09:04:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651886.88/warc/CC-MAIN-20240918064858-20240918094858-00251.warc.gz
en
0.960848
2,059
2.890625
3
The fall in an asset's value is known as depreciation. It is measured over a fixed period and is treated as an expense. The value has to be added to the expenses of that period, as it has a direct impact on profit/loss. In the same way, the amount is subtracted from the asset's value each year to arrive at its book value. As per IFRS, an organization must select a depreciation policy at the start of the business; however, IFRS also allows the business to change its policy if there is a genuine reason. There are several methods of depreciation, and businesses generally choose the one that fits the nature of their assets. The most commonly used methods are the straight-line method, the declining balance method, and the sum-of-the-years'-digits method. The simplest and most commonly used depreciation method is the straight-line method. In this method, the company estimates the asset's salvage value (its resale value when the company decides to sell it), which is subtracted from the original value of the asset. Salvage value may be zero or even negative. After subtracting the salvage value, the remaining amount is divided by the useful life of the asset, as shown in the formula below:
Annual Depreciation = (Original Cost − Salvage Value) / Useful Life
The main reason for the popularity of this method is that it is very easy to compute. However, because the estimates involved are somewhat ambiguous, some companies do not consider it the right option. The declining balance method gives a higher depreciation charge in the first year, as the balance keeps declining every year. This method is a more realistic approach, as assets are most useful when they are new, and it is understood that their value declines with time. However, an upgrade might increase the value of an asset; this method of depreciation allows accountants to re-evaluate the value of an asset if the situation meets the criteria set by IFRS. In the sum-of-the-years'-digits method, the depreciable cost is multiplied by a schedule of fractions to determine the annual depreciation amount. Just like the declining balance method, the value of the asset keeps decreasing every year (by a different percentage) and finally reaches its salvage value, at which point it is disposed of or sold in the market.
Formula: Depreciable Cost = Original Cost − Salvage Value
Formula: Book Value = Original Cost − Accumulated Depreciation
There are a few other methods of depreciation too, such as: (1) the units-of-production method, (2) the units-of-time method, and (3) the group depreciation method. However, the methods described above are the ones most commonly used by accountants.
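To make the formulas above concrete, here is a small sketch that computes a yearly schedule for the straight-line and declining balance methods described in this article. The figures used (cost, salvage value, useful life, and the declining-balance rate) are illustrative assumptions, not prescribed values.

    def straight_line_schedule(cost, salvage, useful_life_years):
        """Equal annual charge: (Original Cost - Salvage Value) / Useful Life."""
        annual = (cost - salvage) / useful_life_years
        book = cost
        schedule = []
        for year in range(1, useful_life_years + 1):
            book -= annual
            schedule.append((year, round(annual, 2), round(book, 2)))
        return schedule

    def declining_balance_schedule(cost, salvage, rate, useful_life_years):
        """Higher charge in early years: a fixed rate applied to the opening book value."""
        book = cost
        schedule = []
        for year in range(1, useful_life_years + 1):
            charge = min(book * rate, max(book - salvage, 0))  # never depreciate below salvage
            book -= charge
            schedule.append((year, round(charge, 2), round(book, 2)))
        return schedule

    if __name__ == "__main__":
        # Illustrative asset: cost 10,000, salvage 1,000, 5-year life, 40% declining rate.
        for year, charge, book in straight_line_schedule(10_000, 1_000, 5):
            print("Straight-line", year, charge, book)
        for year, charge, book in declining_balance_schedule(10_000, 1_000, 0.40, 5):
            print("Declining balance", year, charge, book)

Running the sketch shows the pattern the article describes: the straight-line charge stays at 1,800 per year, while the declining balance charge starts at 4,000 and shrinks each year until the book value reaches the salvage value.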
<urn:uuid:38092db4-b13e-4547-bcc0-559b9948897a>
CC-MAIN-2024-38
http://www.best-practice.com/best-practice-in-reporting-accounting/types-of-accounting/methods-of-depreciation/
2024-09-19T15:41:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00151.warc.gz
en
0.943852
526
2.96875
3
This article will explain why DDoS attack mitigation is a necessity for enterprises looking to protect themselves. We’ll take a look at the past, present, and future of DDoS, and how you can navigate these attacks to not lose your organization money. The first DDoS attack occurred in 1997 during a DEF CON event in Las Vegas. The culprit was notorious hacker Khan C. Smith, who successfully shut down Internet access on the Vegas Strip for over an hour. The release of some of this code opened up a can of worms and soon led to online attacks against Sprint, EarthLink, E-Trade, and many more corporations. Later in 2001, Smith would go on to create the first botnet, which was responsible for 25% of the entire Internet spam at the time. Fast forward two decades later to March 5, 2018, where an unnamed customer, with the help of NETSCOUT (formerly Arbor Networks), battled the largest DDoS in history, reaching a peak of about 1.7 terabits per second. This broke the previous record, which occurred just 3 days earlier, where GitHub was knocked down by a 1.35 terabit per second attack. Before 2018, there was never an attack higher than a terabit. DDoS attacks have been around almost as long as the Internet, yet enterprises are not only struggling to protect themselves, but encountering attacks the breadth of which has never been seen. It’s no longer a matter of if your business will fall victim, but when. So what happens now? The Current State of DDoS Attacks Memcached Servers Help Create Largest DDoS Attack Ever: That GitHub attack originated from over a thousand different systems across tens of thousands of different endpoints. This was an amplification attack using the memcached-based approach. Memcached is free software that speeds up database-driven websites by caching data in RAM to reduce disk strain. Hackers exploited a UDP vulnerability and ended up flooding valid requests to an exposed Memcached server. This attack has a 10,000:1 amplification ratio, so on average, every byte an attacker sent to the exposed Memcached server, the server would then send 10 KB to the actual victim. Here at HostDime, we saw evidence that servers on our network could be directly affected and immediately blocked all packets with a UDP destination port of 11211 from entering our networks. DDoS Attacks Growing in Quantity and Quality: It was all bad news in Q1 2020. While humanity continues to deal with the coronavirus pandemic, hackers did not take a break. Q1 2020 saw a huge increase in both size and total number of attacks. There was a significant rise in the length of these incidents, in both average and maximum duration. The amount of attacks actually doubled versus Q4 2019, and 80% higher than Q1 2019. It’s Costly to Get Attacked: How much money should a business expect to lose when they’re hit with a DDoS attack? DDoS defense company Corero Network Security sought to answer that question and polled 300 security professionals from cloud, government, finance, media, and online gaming. Here are the not-so-uplifting findings: - 91% of those surveyed said that DDoS attacks can cost their organizations up to $50,000. - 85% believe that DDoS attacks are used by attackers as a precursor or smokescreen for data breach activity. - 71% reported that their organization has experienced a ransom-driven DDoS attack. - 78% cited the number one most damaging effect of a DDoS attack is the loss of customer trust. 
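Returning to the Memcached amplification vector described above: exposed servers answer unauthenticated "stats" requests over UDP port 11211, which is exactly what attackers abuse for amplification. The sketch below probes a single host to see whether it responds on that port. The 8-byte UDP frame header and the placeholder target address are assumptions for illustration, and checks like this should only be run against infrastructure you own.

    import socket

    def memcached_udp_exposed(host, port=11211, timeout=2.0):
        """Send an unauthenticated 'stats' request over UDP and report whether the host replies."""
        # Assumed 8-byte memcached UDP frame header (request id, sequence, datagram count,
        # reserved), followed by the plain-text command.
        probe = b"\x00\x01\x00\x00\x00\x01\x00\x00stats\r\n"
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(probe, (host, port))
            data, _ = sock.recvfrom(4096)
            return len(data) > 0  # any reply means the server is reachable over UDP
        except socket.timeout:
            return False
        finally:
            sock.close()

    if __name__ == "__main__":
        target = "203.0.113.10"  # placeholder address from the documentation range
        if memcached_udp_exposed(target):
            print("WARNING: memcached answers on UDP 11211 - disable UDP or firewall the port.")
        else:
            print("No UDP reply on 11211 within the timeout.")

A reply to a request this small is what makes the roughly 10,000:1 amplification possible, which is why blocking or disabling UDP on port 11211 is the standard fix.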
While not all DDoS attacks are going to cost 5 figures, what may be worse is having your website inaccessible and ruining customer confidence. A company’s reputation can be linked to corporate profits. Rise of the IoT Botnets: What a world we live in when your refrigerator is hackable. Khan C. Smith’s spammy e-mail botnet has evolved into an Internet of Things botnet that shows no signs of stopping. Here’s Guy Rosefelt of cloud security company NSFOCUS on his main IoT security concern: “As IoT innovation continues to blossom, more and more IoT devices will continue to get involved in DDoS attacks. Routers and cameras are the major types of IoT devices involved in DDoS attacks, with routers making up 69.7% of IoT devices exploited to launch DDoS attacks, and 24.7% of cameras. This is because a great number of routers and web cameras have been introduced into production and living environments, with no sufficient security measures enforced. We have every reason to believe that attacks leveraging the IoT will become more diverse in the future.” As these IoT devices become outdated, vulnerabilities will surface and become an easily exploitable weapon. These devices take much longer to patch than Operating Systems, so these attacks often last longer. Thanks to information and the below graphic from Kaspersky Lab, we can see the US leads the world in botnets, with Russia in second. More countries’ botnets are being exploited like never before, as hackers aim for geographic redundancy. The CoAP Protocol: Oh fun, another amplification attack is here. CoAP, or the Constrained Application Protocol, can be thought of as a layer on top of the Internet of Things that handles congestion control. The CoAP protocol is a newish breakthrough that plays a role in speedy machine to machine communication among both consumer and industrial applications. However, with this newness and lightweight protocol, comes a chance at exploitation. CoAP uses UDP, which like the memcached servers, is vulnerable to the type of IP spoofing and packet amplification that enable large DDoS attacks. As aforementioned, this can cause huge amplification attacks spoofing 10 to 50x the normal amount of traffic. CoAP devices have become all the rage lately, going from 6,500 devices in November 2017 to now over 600,000! ZDNet has recently reported from an anonymous source that over half of these devices could be used in DDoS amplification attacks and attacks have already begun, with increasing frequency and size. This won’t be an easy fix, as we are accustomed to speed, especially with the IoT, and it looks like we will have to sacrifice some of the speed for security, or perhaps stop using these CoAP devices altogether. Pro tip to anyone with IoT devices in their home or office. While it’s not much, be sure to change the default username and password, and be on the lookout for suspicious activity and patch updates. There seems to be a shift to using outdated devices to launch these attacks. Let’s hope this isn’t what the near future looks like: DDoS Attack Mitigation Services Do you have the latest DDoS fighting tools and technologies ready for 2020? You can’t predict how large an attack against your server will be, however you can choose what protection you need based on how mission critical your operations are. HostDime’s secure network is among the most DDoS protected in the infrastructure industry. 
That’s because we offer three types of DDoS protection: NETSCOUT’s local inline mitigation, a cloud-based traffic scrubbing service, and a combination of the two with our hybrid protection. The NETSCOUT appliance sits within our facility for inline protection. When the appliance detects irregular traffic, our team moves the affected subnet and begins filtering. With our Cloud Scrubbing service, ALL traffic gets filtered through one of our various GRE tunnels. Our Hybrid DDoS is unique for its performance based, “always on” protection. Read on for a more in-depth look at the intricacies of each service. HostDime offers premium hardware-based DDoS monitoring and mitigation, while most service providers offer reverse proxy DDoS detection. The problem with this is your traffic goes to a third party, is cleaned, then re-routed back to your host. Our DDoS protection is different because it’s inline, or actually within the data center. We take both your affected and unaffected traffic, filter out the bad stuff, and leave only legitimate traffic flowing to and from your server. Once a server is placed behind our DDoS protection hardware, it learns “normal” traffic patterns so it can identity bad traffic in the future. Therefore end users notice no added latency, even when active mitigation is taking place. HostDime’s DDoS Cloud Scrubbing is a IP routing service where all traffic is examined and filtered, then the cleaned traffic is forwarded to our enterprise network via our Generic Routing Encapsulation (GRE) tunnels. This security cloud is a private point-to-point link between network nodes, and acts as our safety net if a large DDoS attack occurs. Your whole critical infrastructure is safe with this protection. Here is a visual depicting how HostDime’s global security cloud delivers clean traffic to a visitor accessing a DDoS affected website. All IP addresses routed through our scrubbing centers are protected against DDoS attacks. The always active service uses network sensor devices to immediately detect suspicious patterns. Traffic toward that IP is then redirected for mitigation in our security cloud. Redirection stops minutes after the attack ends. HostDime’s DDoS protected GRE tunnels can clean an attack up to 100Gbps; that’s an impressive amount of scrubbing power. Lastly, HostDime’s Hybrid DDoS Protection gives clients the best of both worlds with a performance based option with “always on” protection. This unique set-up combines the inline DDoS protection appliance with the DDoS cloud filtering service. This protection detects and acts immediately against all sorts of large and complex attacks. When disruption occurs, clean traffic still gets through thanks to the cloud scrubbing. One difference between this service and the cloud scrubber is added latency when in protection mode. The following chart spells out the differences between the three DDoS protection services. Whichever you choose, enjoy peace of mind knowing your business will have superior uptime, uninterrupted data center access, and relief from network security threats. HostDime’s inline DDoS service is included on all managed dedicated servers, but if you require additional protection, contact us now to figure out the right plan for you. We offer both monthly and on-demand packages, featuring over 5GBPS of attack protection if necessary. Currently, only dedicated server and colocation clients can take advantage of these additional DDoS protection services. Stay safe out there! 
Jared Smith is HostDime’s Content & SEO Strategist.
<urn:uuid:21f2abe8-0021-4244-88cc-eaae69a1e616>
CC-MAIN-2024-38
https://www.hostdime.com/blog/ddos-attack-mitigation/
2024-09-20T20:59:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00051.warc.gz
en
0.938678
2,212
2.828125
3
Microsoft SQL Server is a data-intensive and disk I/O (read and write) intensive database management system. SQL Server running on systems with a large amount of physical RAM (8 GB or more) can be configured to use the Address Windowing Extensions (AWE) API to access physical memory in excess of the limits set on configured virtual memory, and to force all paging to take place in memory for faster data and thread access. With the Address Windowing Extensions (AWE) API, Microsoft SQL Server can support and access very large amounts of physical memory - upwards of 64 gigabytes or more on Windows 2000 Server, Windows Server 2003, and Windows Server 2008 (R2). The specific amount of memory SQL Server can use depends on hardware configuration and operating system support. Before enabling AWE, the Lock Pages in Memory permission must be granted to the user account that runs SQL Server, as AWE memory cannot be swapped out to the page file. Note that AWE is not required for 64-bit systems, but the Lock Pages in Memory privilege is still recommended for them.
Step 1: Enable PAE support on Windows Server to allow a large segment of physical memory to be used.
Step 2: Assign the Lock Pages in Memory permission to the SQL Server service account.
Step 3: Enable the AWE option.
Note that in Windows 2000 (Windows Server 2003 and 2008 provide dynamic allocation on demand), if a value for max server memory is not specified, SQL Server reserves almost all available memory during startup, leaving 128 megabytes (MB) or less of physical memory for other applications. Enabling AWE mode is an advanced option: if you are using the sp_configure system stored procedure to change the setting, you can change awe enabled only when show advanced options is set to 1. To enable AWE and configure the min server memory to 1 GB (so that AWE-mapped memory can be released down to, but not below, 1 GB) and the max server memory to 6 GB, use the following commands:
    sp_configure 'show advanced options', 1
    RECONFIGURE
    sp_configure 'awe enabled', 1
    RECONFIGURE
Restart SQL Server with the following commands:
    net stop mssqlserver
    net start mssqlserver
Then, configure memory:
    sp_configure 'min server memory', 1024
    RECONFIGURE
    sp_configure 'max server memory', 6144
    RECONFIGURE
Restart SQL Server after all configuration changes to make them effective. To disable AWE, simply set awe enabled back to 0 and execute the RECONFIGURE statement again.
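If you prefer to apply these settings from a script rather than a query window, the same sp_configure calls can be issued through any SQL Server client library. Below is a minimal sketch using the pyodbc package; the connection string, driver name, and memory values are assumptions you would adapt to your own instance.

    import pyodbc

    # Connection details are placeholders; adjust server, credentials, and driver to your setup.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=master;Trusted_Connection=yes;",
        autocommit=True,  # sp_configure/RECONFIGURE should not run inside a user transaction
    )
    cursor = conn.cursor()

    for option, value in [
        ("show advanced options", 1),
        ("awe enabled", 1),           # only meaningful on 32-bit instances that use AWE
        ("min server memory", 1024),  # MB
        ("max server memory", 6144),  # MB
    ]:
        cursor.execute("EXEC sp_configure ?, ?", option, value)
        cursor.execute("RECONFIGURE")

    conn.close()

As noted above, changes to AWE still require a SQL Server service restart before they take effect.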
<urn:uuid:f80acd46-6f5c-466b-950d-3b1e4b12f1b0>
CC-MAIN-2024-38
https://md3v.com/optimize-sql-server-200x-in-large-ram-system-by-locking-pages-in-memory-and-awe
2024-09-07T10:07:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650826.4/warc/CC-MAIN-20240907095856-20240907125856-00415.warc.gz
en
0.803569
534
2.671875
3
Cloud computing is, in essence, storing or using programs and data using the internet instead of an on-site system. But one cloud computing model may be vastly different from the next. The term covers a range of cloud models, types and services, each tailored to meet various technology needs for individuals and organizations. While the variety in cloud computing services is vast, these applications are categorized into three different deployment types — public clouds, private clouds and hybrid clouds. So what is the difference between private, public and hybrid cloud computing? Below, we’ll discuss the key features of each deployment type and the advantages and disadvantages of each. With those features in mind, you can then choose what’s right for your organization. What Is Public Cloud Computing? Public clouds are the most commonly used cloud computing deployment type. In this model, a third-party cloud service provider owns, operates and manages all cloud resources, including hardware, software, servers and other infrastructure items. The services are then delivered to remote users over the internet. In a public cloud model, all cloud users, also called “tenants,” share the same cloud resources. Each tenant’s data is separated and isolated from other tenants’ data, and they can use individual logins to access their data from a website or application. Public clouds are often used in cases where there are predictable computing needs. They can host web-based email services, office applications, storage services and testing environments. While individual public cloud models can be used for a variety of purposes and can offer numerous capabilities and features, there are some characteristics shared across public cloud services. These key characteristics of public clouds include the following: - Subscription-based pricing - Resource pooling - Rapid elasticity - High scalability - Managed security Public cloud examples vary by provider. General providers tend to offer great availability and numerous integration options, while smaller ones can provide more customization options for specific applications. What Is Private Cloud Computing? Private cloud hosting solutions are also known as internal or enterprise clouds. In this model, the cloud is proprietary and is dedicated to a single organization. The cloud platform may be hosted on-site in the company’s data center or in an off-site, third-party data center. In a private cloud model, the company owns and manages its own cloud platform on a private network. Instead of having a third party own and manage the infrastructure, the business handles it all itself. This structure allows the company to have exclusive access to its own cloud platform. But the company is also responsible for managing, maintaining, securing and updating the data center. Private clouds are highly customizable since the organization can make adjustments freely to serve its unique needs. It can also enjoy greater security since it does not need to worry about sharing resources with other companies. Companies using the private option can also apply any number of security features to protect their cloud platform. While private clouds vary widely in their specific features based on the needs of the companies using them, some common characteristics include: - Need-based pricing - High customizability - Internal security So what are the examples of private cloud applications? 
Most commonly, companies working in highly regulated industries that require tight security and handle sensitive data will use private clouds. Government agencies and financial institutions, for example, will often use private clouds to manage sensitive information. What Is Hybrid Cloud Computing? A hybrid cloud platform is effectively a marriage of public and private cloud computing. Hybrid clouds orchestrate on-premises and third-party resources by connecting an internal data center with a public cloud. The hybrid cloud then allows the user company to deploy data between the private and public clouds as needed. As computing and processing needs change, hybrid cloud platforms let businesses scale up their private cloud infrastructure. They can move overflow into a public cloud instead of purchasing more equipment for their internal infrastructure. Companies can then select what types of data must stay internal and what workloads can be moved to the public cloud. This choice allows for greater flexibility and scalability while still maintaining security for regulatory requirements. Some key characteristics of hybrid cloud computing include: - Use-based pricing - Improved scalability - Custom data management - Internal and managed security A common example of a hybrid cloud is an organization using a private cloud environment for their workloads. It combines that with a public cloud resource to handle peripheral workloads following a spike in computing needs. Hybrid clouds are an excellent choice for companies with strict regulatory requirements or those that have fluctuating workloads. Pros and Cons of Each Cloud Solution Each cloud computing type has its own key features and characteristics. But what are their relative benefits and challenges? Below are some of the key advantages and disadvantages of each cloud type and what they mean for potential applications. Pros of Public Clouds Public clouds hosted by third-party service providers offer a range of advantages to users, including: - Cost savings: Moving resources to a public cloud allows companies to cut down IT costs. Public cloud services manage, secure and update their own hardware and software so your IT team doesn’t have to. All your company needs to do is pay for the service, which is often less expensive than typical IT costs. - Security: Most small to medium-sized businesses don’t have the resources they need to implement quality security protocols. A public cloud service can handle this for you, offering baseline security and regular updates. - No maintenance: Your IT team doesn’t need to maintain systems when you switch to a public cloud provider. The provider handles it all, freeing up your IT team to handle emergencies, updates and customer-facing issues. - Scalability: Public clouds offer almost unlimited scalability. When your workload fluctuates, you don’t need to invest in new hardware — your cloud service provider will scale up for you by allocating more computing resources toward your account. - Reliability: Vast server networks with automated failover protocols power public cloud services. Even when one server goes down, another takes its place, protecting your business from downtime. - Updates: Larger public cloud service providers update their systems regularly, taking advantage of the latest IT technologies to your benefit. These benefits make public clouds an excellent resource for small to medium-sized businesses with limited IT budgets. 
Cons of Public Clouds While the public cloud offers many benefits, some of the potential drawbacks of public clouds can make it an unviable option for certain types of businesses. Some of the primary disadvantages of public clouds include: - Security and compliance: The multi-tenant nature of public clouds makes it a concern for businesses with strict regulatory and compliance standards. Multitenancy comes with a small risk of data leakage, and though this risk is minimal, any risk is unacceptable for regulated industries. Additionally, the security protocols in place for a public cloud may not be as stringent as needed for regulated industries handling sensitive data. - Changing costs: The cost of public clouds is based on use, but this can present a con for organizations processing significant amounts of data in the cloud. A public cloud is cheaper for most businesses. But large organizations handling massive quantities of data may find public cloud costs are significantly higher than what they would pay to establish and support a private cloud. - Limited control: Public clouds are managed by their owners, not by the users. While this relieves businesses of the need to manage their resources, this structure also removes their ability to control many aspects of their IT infrastructure, including their configurations, security protocols and failover algorithms. Businesses with highly specific configuration and control needs may want to select another cloud option. - Vendor dependency: Another concern with public cloud technology is the reliance on cloud vendor services. While public clouds offer incredible IT technologies like virtual machines and machine learning, businesses can start to rely on these services for their business operations. That reliance can make it difficult to switch to alternative providers or a private cloud later on. - Design restrictions: While many public clouds offer some level of customizability, these customizations can be minimal. Some of these disadvantages do not present problems to certain industries or business types. But they can be a deal-breaker for businesses working in highly regulated industries like healthcare and finance or larger companies. Pros of Private Clouds Now that we’ve looked into the pros and cons of public clouds, what are the benefits of private cloud models? Some of the most popular advantages of private cloud models include: - Customizability: One of the most significant advantages of a private cloud model is adaptability. Organizations establishing their own private cloud environment can customize the platform to meet their specific business needs. - Control: Private clouds allow businesses to make all the decisions, letting them control every aspect of their model. This ability includes control over hardware, infrastructure and configuration. - Exclusivity: Private clouds are exclusive to the companies that own them. With this ownership, the environment is dedicated to the company and inaccessible by other organizations, which helps reduce the risk of accidental data leaks. - Security: Companies with private clouds can apply as many firewalls, security protocols and configurations as they want. Increased security lets businesses meet stringent compliance regulations, which is particularly critical for regulated industries. - Efficiency: Private cloud platforms offer excellent scalability and efficiency. A private cloud is able to meet significant variations in demand, all while maintaining the system’s security. 
These advantages make private clouds a top choice for highly regulated industries and businesses that need greater control over their IT resources. Cons of Private Clouds While private clouds offer numerous advantages in terms of compliance and control, there are some challenges that make it a less viable option, especially for smaller businesses. So what are the disadvantages of private cloud models? Some of the potential ones include: - High costs: The private cloud is the most expensive of the cloud types. Establishing a private cloud requires extensive investments in hardware and software, and maintaining the cloud requires sufficient personnel. You’ll also need staff to monitor and update the hardware and software regularly to keep up with changing business and security needs, which incurs additional costs. - Maintenance needs: Private clouds require considerable administrative and IT work to manage. You need to have a dedicated staff to ensure the maintenance of the private cloud, and regular updates need to be performed. - Limited mobility: Unlike many public cloud options, private clouds either limit or do not allow mobile accessibility. This limitation is usually due to the high-security measures in place, but it reduces the mobility of the system for remote users. - Scalability limits: A company’s infrastructure limits a private cloud model’s scalability. Expanding the cloud resources requires investing in additional infrastructure, which adds to the cost of maintaining the private cloud model. - Internally dependent reliability: Many companies with private clouds have failover protocols and redundancies to maintain reliability. But the resources a company has limits the potential of these measures. In the event of a massive outage or disaster, the private cloud may go down. These possible disadvantages can make private clouds an exclusive option, as they are too limited for a portion of business ventures. Pros of Hybrid Clouds Hybrid clouds present a unique option to businesses, combining the capabilities of the public cloud with the security and control of private clouds. Some of the major advantages of hybrid clouds include: - Policy-driven control: In a hybrid cloud, you control what data stays in your private cloud and what can be moved to a public one. Your organization can maintain a private cloud that houses sensitive data and assets while moving peripheral processes to public options. - Secure scalability: Hybrid cloud models allow businesses to scale utilization up and down based on day-to-day needs. With your data control protocols, you can easily scale your cloud solutions without introducing security risks. - Reliability: Hybrid cloud models allow users to distribute their services across multiple public and private data centers. You can then ensure systems stay up in the event of a failure, preventing costly downtime. - Cost-effectiveness: Like with a public cloud, users pay for hybrid clouds based on use, meaning users pay for the extra computing power only when they need it. These advantages make hybrid clouds an excellent option for mid-size to large businesses that need a balance of flexibility and security. Cons of Hybrid Clouds While hybrid clouds get around some of the more common issues with both private and public clouds, this model still comes with its own set of challenges. Some of the common drawbacks of hybrid cloud models include: - Price: Like with public clouds, hybrid clouds can become expensive when heavily used. 
Switching between public and private cloud environments can also become difficult to track, making it challenging to judge utilization to compare with expenses. - Compatibility: In some cases, a private cloud may not be fully compatible with the public cloud used when trying to establish a hybrid model. Imperfect compatibility can then result in latency problems or lapses in communication between the two environments. - Maintenance: The owner company still needs to manage the private cloud component of a hybrid model, which requires dedicated staff and maintenance. That maintenance leads to IT costs in addition to the costs of the public cloud subscription. - Complexity: A hybrid cloud environment’s infrastructure can quickly become complex as the organization operates and manages a mix of public and private cloud architectures. Administrators need to stay on top of both architectures to ensure continuity in business operations. Quality implementation and assistance from specialists in cloud computing technology can help mitigate some of those challenges. How to Choose a Cloud Model Learning about the different types of clouds gives companies greater insight into the options available, but it inevitably leads to the question of which to choose. Is the public or private cloud better, or would a hybrid cloud be the best choice for your application? When considering the different types of cloud computing models, think about your company’s needs and whether a specific model would support your business. Below are a few key factors to consider: - Budget: One of the first factors to think about is your company’s budget. Public clouds tend to be less expensive than private clouds, with hybrid clouds typically falling in between. - Size: If your company is relatively small, a public cloud provider should be able to handle your data at a decent price. Larger organizations tend to have greater processing needs that can quickly get expensive on public clouds. For these organizations, private or hybrid clouds tend to be a more cost-effective option. - Security: Specialized industries such as government, finance and healthcare have strict compliance requirements, especially when it comes to security. Public clouds offer basic protection but rarely at the level needed for compliance requirements. As a result, these industries typically need to use private clouds. A hybrid cloud is also an option, provided there is strict control over what data migrates to a public server. - Control: Public cloud models offer little to no user control over the hardware, servers, security protocols or failover algorithm used. If your organization wants control over these components, a private cloud is going to be the best option. - Workloads: If your business has relatively stable workloads with little fluctuation, either a public or private cloud will be a good solution. If your workload is more variable, either a public cloud or a hybrid cloud will allow for quick scalability. - Service-level agreement (SLA) management: Public clouds allow users limited negotiation on SLA terms. If you want complete control over SLA management, you will want a private cloud. Cloud Solutions From Morefield Communications Business information technology is quickly evolving, with every development introducing new opportunities. Cloud technology is just one of those developments, and it’s becoming a staple across industries of all types. 
If you’re wondering how your business can get started with cloud technology, Morefield Communications can help. Morefield Communications specializes in cloud solutions for businesses of all sizes in Pennsylvania. We can help you by creating a tailored cloud solution plan specifically made to meet your business needs and goals. Our cloud services can include anything from data management and infrastructure to analytics and security protocols. On top of it all, we will help you integrate your new cloud services with your existing applications, facilitating a smooth transition. Cloud computing is still a new concept for many businesses, but Morefield Communications can help you get started. For over 70 years, we’ve made it our mission to stay on top of the latest developments in information technology for businesses. When you partner with us, we can help you take advantage of those developments. Contact Morefield Communications today to learn more about our cloud services and solutions.
<urn:uuid:da96f058-5a6f-4211-b427-2ed83cb232ae>
CC-MAIN-2024-38
https://morefield.com/blog/public-vs-private-cloud-computing/
2024-09-08T17:22:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00315.warc.gz
en
0.939386
3,421
2.890625
3
LLMs like OpenAIs GPT (Generative Pre-trained Transformer), Google Gemini and Meta LLaMA have revolutionized the way we interact with AI, enabling applications in translation, content creation, and coding assistance. However, as LLMs enter mainstream use, securing them becomes much more important, especially in sensitive applications like finance, healthcare, and legal services. Vulnerabilities in LLMs can lead to misinformation, privacy breaches, and manipulation of information, posing significant risks to individuals and organizations. With the increasing reliance on LLMs, the exposure to cyber threats also escalates. Cyber attackers can exploit vulnerabilities to perform attacks such as data poisoning, model theft, or unauthorized access. Implementing robust security measures is essential to protect the integrity of the models and the data they process. This is part of a series of articles about Vulnerability management In this article: LLM Security Risks As Large Language Models (LLMs) like OpenAI’s GPT, Meta’s LLaMA, and Google’s BERT become integral to more applications, their security vulnerabilities and the landscape of related cyber threats have come under increasing scrutiny. A recent study by Cybersecurity Ventures predicts that by 2025, cybercrime will cost the world $10.5 trillion annually, a huge increase from $3 trillion in 2015, with much of the rise attributed to the use of advanced technologies like LLMs. Adversarial attacks against LLMs are becoming more sophisticated. In 2023 alone, several high-profile incidents demonstrated that even well-secured models like GPT-4 could be susceptible when faced with novel attack vectors. These attacks not only manipulate model outputs but also seek to steal sensitive data processed by the models. With the increasing deployment of LLMs in sensitive areas, regulatory bodies are beginning to step in. For instance, the European Union’s Artificial Intelligence Act is set to introduce stringent requirements for AI systems, including LLMs, focusing on transparency, accountability, and data protection. Key Components of an LLM Security Strategy Here are some of the important aspects in securing a Large Language Model. 1. Data Security LLM data security involves protecting the integrity and confidentiality of the data used in training and operation, including user-provided inputs. Employing encryption, access controls, and anonymization techniques safeguards against unauthorized access and data breaches. Ensuring data security enhances the trustworthiness of LLM outputs and protects sensitive information. This is important for ensuring responsible AI implementations. 2. Model Security Model security focuses on protecting the LLM from unauthorized modifications, theft, and exploitation. Strategies include employing digital signatures to verify model integrity, access control mechanisms to restrict model usage, and regular security audits to detect vulnerabilities. Securing the model ensures its reliability and the accuracy of its outputs, crucial for maintaining user trust. By prioritizing model security, organizations can protect their AI investments from emerging threats, ensuring that these tools continue to operate as intended. 3. Infrastructure Security LLM infrastructure security encompasses the protection of the physical and virtual environments that host and run these models. 
Implementing firewalls, intrusion detection systems, and secure network protocols are key measures to prevent unauthorized access and cyber attacks on the infrastructure supporting LLMs. A secure infrastructure acts as the foundation for the safe development, deployment, and operation of LLMs. It helps in mitigating risks associated with data breaches, service disruptions, and cyber espionage. 4. Ethical Considerations Ethical considerations in LLM security include addressing the potential for bias, misuse, and societal impact of AI models. Building transparency, fairness, and accountability into LLM operations ensures that these systems are used responsibly and for the benefit of society. Incorporating ethics as a core component of LLM security strategies fosters trust, promotes inclusivity, and helps minimize harm. Ethical AI also contributes to reinforcing the positive potential of AI in addressing complex challenges. Who Is Responsible for LLM Security? Many organizations and end-users consume LLMs through websites or managed services, such as ChatGPT and Google’s Gemini. In these cases, the responsibility for model security and infrastructure security rests primarily with the service provider. However, when organizations deploy LLMs on-premises—for example, via open source options like LLaMA or commercial on-premises solutions like Tabnine—they have additional security responsibilities. In these cases, the organization deploying and operating the model shares responsibility for securing its integrity and the underlying infrastructure. Software Supply Chain Vulnerabilities LLMs can be compromised through vulnerabilities in their supply chain, including third-party libraries, frameworks, or dependencies. Malicious actors might exploit these to alter model behavior or gain unauthorized access. Establishing a secure development lifecycle and vetting third-party components are critical defenses against supply chain attacks. Auditing and continuously monitoring the supply chain for vulnerabilities allows for timely detection and remediation of threats. Insecure Plugin Design Insecure plugins in LLMs introduce risks by expanding the attack surface through additional functionalities or extensions. These plugins can contain vulnerabilities that compromise the security of the entire model. Ensuring that plugins follow security best practices and undergo rigorous testing is necessary to mitigate this risk. Developers must prioritize security in the design and implementation of plugins, incorporating mechanisms such as authentication, access controls, and data protection to safeguard against exploitation. Excessive agency in LLMs refers to situations where models operate with higher autonomy than intended, potentially making decisions that negatively impact users or organizations. Setting clear boundaries and implementing oversight mechanisms are crucial to control the scope of actions available to LLMs. Balancing autonomy with constraints and human oversight prevents unintended consequences and ensures LLMs operate within their designed parameters. Establishing ethical guidelines and operational boundaries aids in managing the risks associated with excessive agency. Overreliance on LLMs without considering their limitations can lead to misplaced trust and potential failures in critical systems. Acknowledging the limitations and incorporating human judgment in the loop ensures a balanced approach to leveraging LLM capabilities. 
Building systems that complement human expertise with LLM insights, rather than replacing human decision-making entirely, mitigates the risks of overreliance. Model theft involves unauthorized access and duplication of proprietary LLM configurations and data, posing intellectual property and competitive risks. Implementing access controls and encrypting model data help defend against theft. Protecting intellectual property and maintaining competitive advantages requires vigilance against model theft through continuous monitoring and other advanced cybersecurity measures. Top 10 OWASP LLM Cyber Security Risks Here’s an overview of the OWASP Top 10 security risks for Large Language Models. Prompt injection attacks exploit vulnerabilities in LLMs where malicious input can manipulate the model’s output. Attackers craft specific inputs designed to trigger unintended actions or disclosures, compromising the model’s integrity. Prompt injection also poses a threat to users who rely on their outputs This risk underlines the importance of sanitizing inputs to prevent exploitation. Addressing it involves implementing validation checks and using context-aware algorithms to detect and mitigate malicious inputs. Insecure Output Handling Insecure output handling in LLMs can lead to the unintended disclosure of sensitive information or the generation of harmful content. Ensuring that outputs are sanitized and comply with privacy standards is essential to prevent data breaches and uphold user trust. Monitoring and filtering model outputs are critical for maintaining secure AI-driven applications. With secure output handling mechanisms, developers can reduce the risk associated with malicious or unintended model responses. These mechanisms include content filters, usage of confidentiality labels, and context-sensitive output restrictions, ensuring the safety and reliability of LLM interactions. Training Data Poisoning Training data poisoning attacks occur when adversaries intentionally introduce malicious data into the training set of an LLM, aiming to skew its learning process. This can result in biased, incorrect, or malicious outputs, undermining the model’s effectiveness and reliability. Preventative measures include data validation and anomaly detection techniques to identify and remove contaminated inputs. Employing data integrity checks and elevating the standards for training data can mitigate the risks of poisoning. Model Denial of Service Model Denial of Service (DoS) attacks target the availability of LLMs by overwhelming them with requests or exploiting vulnerabilities to cause a failure. These attacks impede users’ access to AI services, affecting their performance and reliability. Defending against DoS requires scalable infrastructure and efficient request handling protocols. Mitigation strategies include rate limiting, anomaly detection, and distributed processing to handle surges in demand. Sensitive Information Disclosure Sensitive information disclosure occurs when LLMs inadvertently reveal confidential or private data embedded within their training datasets or user inputs. This risk is heightened by the models’ ability to aggregate and generalize information from vast amounts of data, potentially exposing personal or proprietary information. To counteract this, implementing rigorous data anonymization processes and ensuring that outputs do not contain identifiable information are critical. 
Regular audits and the application of advanced data protection techniques can also minimize the chances of sensitive information being disclosed. Best Practices for LLM Security Here are some of the measures that can be used to secure LLMs. Adversarial training involves exposing the LLM to adversarial examples during its training phase, enhancing its resilience against attacks. This method teaches the model to recognize and respond to manipulation attempts, improving its robustness and security. By integrating adversarial training into LLM development and deployment, organizations can build more secure AI systems capable of withstanding sophisticated cyber threats. Input Validation Mechanisms Input validation mechanisms prevent malicious or inappropriate inputs from affecting LLM operations. These checks ensure that only valid data is processed, protecting the model from prompt injection and other input-based attacks. Implementing thorough input validation helps maintain the security and functionality of LLMs against exploitation attempts that could lead to unauthorized access or misinformation. Access controls limit interactions with the LLM to authorized users and applications, protecting against unauthorized use and data breaches. These mechanisms can include authentication, authorization, and auditing features, ensuring that access to the model is closely monitored and controlled. By enforcing strict access controls, organizations can mitigate the risks associated with unauthorized access to LLMs, safeguarding valuable data and intellectual property. Secure Execution Environments Secure execution environments isolate LLMs from potentially harmful external influences, providing a controlled setting for AI operations. Techniques such as containerization and the use of trusted execution environments (TEEs) enhance security by restricting access to the model’s runtime environment. Creating secure execution environments for LLMs is crucial for protecting the integrity of AI processes and preventing the exploitation of vulnerabilities within the operational infrastructure. Adopting Federated Learning Federated learning allows LLMs to be trained across multiple devices or servers without centralizing data, reducing privacy risks and data exposure. This collaborative approach enhances model security by distributing the learning process while keeping sensitive information localized. Implementing federated learning strategies boosts security and respects user privacy, making it useful for developing secure and privacy-preserving LLM applications. Incorporating Differential Privacy Mechanisms Differential privacy introduces randomness into data or model outputs, preventing the identification of individual data points within aggregated datasets. This approach protects user privacy while allowing the model to learn from broad data insights. Adopting differential privacy mechanisms in LLM development ensures that sensitive information remains confidential, enhancing data security and user trust in AI systems. Implementing Bias Mitigation Techniques Bias mitigation techniques address and reduce existing biases within LLMs, ensuring fair and equitable outcomes. Approaches can include algorithmic adjustments, re-balancing training datasets, and continuous monitoring for bias in outputs. By actively working to mitigate bias, developers can enhance the ethical and social responsibility of LLM applications.
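Several of the practices above - input validation, output handling, and rate limiting - can be combined in a thin guard layer that sits between users and the model. The sketch below shows that idea with simple heuristics; the patterns, limits, and redaction rules are illustrative assumptions and are nowhere near a complete defense against prompt injection or data leakage.

    import re
    import time
    from collections import defaultdict

    # Heuristic patterns for obvious prompt-injection attempts (illustrative, not exhaustive).
    INJECTION_PATTERNS = re.compile(
        r"(ignore (all|any|previous) instructions|reveal your system prompt|disregard the rules)", re.I
    )
    # Simple redaction of strings that look like emails or card numbers in model output.
    PII_PATTERNS = re.compile(r"([\w.+-]+@[\w-]+\.[\w.]+|\b\d{13,16}\b)")

    MAX_PROMPT_CHARS = 4000
    REQUESTS_PER_MINUTE = 20
    _request_log = defaultdict(list)

    def validate_prompt(user_id, prompt):
        """Reject oversized prompts, obvious injection strings, and excessive request rates."""
        now = time.time()
        recent = [t for t in _request_log[user_id] if now - t < 60]
        _request_log[user_id] = recent + [now]
        if len(recent) >= REQUESTS_PER_MINUTE:
            raise ValueError("rate limit exceeded")
        if len(prompt) > MAX_PROMPT_CHARS:
            raise ValueError("prompt too long")
        if INJECTION_PATTERNS.search(prompt):
            raise ValueError("prompt rejected by injection heuristics")
        return prompt

    def sanitize_output(text):
        """Mask strings that look like personal data before returning the model's answer."""
        return PII_PATTERNS.sub("[REDACTED]", text)

    if __name__ == "__main__":
        safe_prompt = validate_prompt("user-42", "Summarize our incident response policy.")
        model_reply = "Contact jane.doe@example.com for the full policy."  # stand-in for an LLM call
        print(sanitize_output(model_reply))

In production, heuristics like these are only one layer; they are typically paired with the access controls, monitoring, and secure execution environments discussed in this article.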
<urn:uuid:4ea8a413-7379-42e9-b509-fb68c3b54cb2>
CC-MAIN-2024-38
https://www.aquasec.com/cloud-native-academy/vulnerability-management/llm-security/
2024-09-11T04:13:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00115.warc.gz
en
0.907629
2,416
3.015625
3
For most people, the term "intellectual property" conjures up a slew of high-profile, often odd battles over who had the first idea. Intellectual property cases have generated plenty of headlines over the years. With the tremendous advent of technology, IPRs have come to the limelight more than ever before! Many reliable surveys show that IP as a career is one of the most sought-after careers in recent times. The IP industry has grown exponentially in the previous two years; the sector has grown by leaps and bounds. Working on tough IP cases inspires and intellectually stimulates many young graduates and experienced officials alike. The reason for this tremendous rise in the IP career is that there is no boundary to what human beings may invent or how far IP professionals will go to challenge or defend that innovation for the benefit of their clients! Every work of intellectual property contains certain creative elements. If you're a creative or intellectual person, the proximity to art, invention, and literature may appeal to you. As things stand now, the two horsemen of IP law are technology and media. Emerging technologies such as solar, 5G, biotech, automated transportation, e-commerce, fintech, and food technology are driving significant growth in the technology sector. For the past ten years, media has been the fastest growing industry on the planet, and it is expected to continue to develop rapidly in the future, thanks to the ubiquitous smartphone and internet connection. Both of these industries rely heavily on intellectual property. Not only do these businesses generate a lot of work in terms of patent, copyright, and trademark registration, but they also generate a lot of work in terms of licensing, franchising, IP assignment, IP prosecution, and IP enforcement around the world. So, students inclined towards arenas like these are in a win-win situation! IP as a career can be chosen by students of various science and law streams: 1. BTech graduates/engineers A quick look at the salary statistics reveals that a career as a patent agent, which does not require a legal degree, pays far more than most engineering areas. The biggest relief for engineers in the IP field is that they can work in their core field– Most graduates have issues finding work in their core area and associated with their degree. The IP field gives them a chance to be working in the central field of new technological ideas generated by corporations operating in the IPR Cell. Engineers have a deep understanding of the core domain, which is a prerequisite in the IP industry. As a patent professional from an engineering background, your work profile will include both technical as well as legal knowledge. If you are someone who loves reading about new advances in your subject area, and technological upgrades, then this field is right for you! Law is a popular, respectable and one of the most sought after careers in India. In recent years, the developments in IPR have only increased their importance, so a career as an IP lawyer is a very attractive option for students after pursuing law. The designing of confidential information agreements, license agreements, assignment agreements, and franchise agreements are all examples of agreements in Intellectual Property law that are expected from an IP law professional. As law students have technical know-how about legal language and articulation, taking up such a role will help them in the long run. 
Lawyers who are already in the field have a better understanding of the law and stand a fair chance to excel in the field of IP law. As an independent IP lawyer, you can handle a range of important responsibilities in the field of intellectual property protection. You can serve as an advocate for clients in court proceedings in some cases. You can also act as a consultant, advising customers on intellectual property issues. You may analyze laws and regulations for clients, perform research for various documents, and communicate with clients and other legal experts both orally and in writing. IP professionals also stand a fair chance to work in Government Patent Offices They are highly trained engineers and scientists that work closely with business owners to process patent applications and determine whether or not a patent can be issued. The Government appoints the Controller General of Patents, Designs and Trademarks as per Section 3 of the Trademarks Act, 1999. They enforce regulations and guidelines related to Intellectual property laws, and they are responsible for the administration of rules. The positions and categories change according to jurisdictions and countries. Pursuing a career in the government patent office has its perks, and it is an exciting path where you are required to abide by guidelines and work for society at large. Scope Beyond Borders IP is, by definition, a global activity. Apart from local operations, there is a lot of work connected to worldwide IP enforcement. Furthermore, IP legislation is relatively consistent around the world. The TRIPS agreement, which stands for Trade-Related Aspects of Intellectual Property Rights, is now followed by most large economies. As a result, IP rules in different jurisdictions varied only slightly. Employers search for a specific set of skills in their potential professionals and some of them are: Technical Background - They must have a solid technical background to work on inventions. Analytical Skills - The ability to analyze data for IP projects requires an analytical mindset. Problem-solving abilities - The ability to quickly handle client and company difficulties. Learning Agility - The ability to learn continuously and quickly to accomplish desired outcomes. Reading Ability - A propensity for swiftly grasping the gist of technical material. The skillset required for pursuing a career in IP IP as a field is interdisciplinary with few exceptions, and multi-disciplinary skill is a must for success. IP pervades multiple streams, from science and technology to business, economics, management, law, and success as an IP professional necessitates a good understanding, if not specialization, in more than one stream. Most effective IP professionals have degrees in at least two fields and are willing to learn whatever else is needed. Every work of intellectual property contains certain creative elements. If you're a creative or intellectual person, the proximity to art, invention, and literature may appeal to you. It's not just about proximity; IP lawyers are also responsible for daily connecting with art, literature, and innovation. It won't seem that way if you're filling out paperwork all day to register some intellectual property. Still, it gets better as you gain seniority and work on more contentious projects. 
Let us explore the categorical positions in the IP field A patent agent, also known as a patent practitioner, is an expert licensed by the Patent Office to provide advice and assistance to inventors in filing patent applications. Patent agents can also give patentability assessments and assist with the preparation and filing of patent applications. A patent attorney is a lawyer who specializes in intellectual property law and helps inventors secure and protect their Intellectual property rights. Patent attorneys need to have passed the "patent bar exam," a federal exam that authorizes them to represent clients before the respective patent office. They also need to clear the state bar exam, which is required of all attorneys. Attorneys specializing in intellectual property who prepare and file patent applications. It is also their responsibility to defend their clients' rights, the innovators' patents, before organizations such as the Patent and Trademark Offices (PTO). is thorough in nature, requiring them to prepare and then argue the legality of patent documents, where the meaning of each phrase can be used to determine individualized monopolies of protection. What are the mandates to be a patent attorney? In most countries, obtaining a technical undergraduate degree in a subject of engineering or science is required before becoming a patent attorney. Then you'll need to have some experience and suitable qualifications in intellectual property law. You should have a passion for legal argument, be convincing, and have an eye for detail, in addition to having an interest in the technical subjects you studied. It would help if you had excellent commercial awareness as a patent attorney so that you can comprehend and support your client's business goals. Trademarks attorneys guide in various facets of Trademark phenomena like trademark adoption and selection, filing and prosecuting trademark applications, advising on trademark use and registration; handling trademark oppositions, revocations, invalidations, and reassignment; conducting searches; and providing trademark infringement advice. Examining existing patents and protections, developing and updating policies, detecting patent infringements, and assuring the protection of your employers' intellectual property are all responsibilities of an intellectual property manager. All these activities are managed and monitored by the IP manager, making the role of utmost importance. They specialise in transactions involving intellectual property. This consists in licensing as well as the purchase and sale of enterprises. Litigation and resolution of IP disputes are also taken care of by them. Intellectual property (IP) analysts carry out infringement analyses. These investigations aim to ensure that other businesses or persons do not infringe on their employer's intellectual property and that their own companies do not infringe on others' IP rights. They may be involved in the preparation of patent applications in addition to investigating existing intellectual properties. IP Analysts also assist in determining which existing similar patents may pose a threat of competition by thoroughly studying them. The knowledge of an IP analyst informs their client discussions on what R&D projects are worth spending money on and what patents are worth applying for. 
Patent litigators invest a lot of time examining prior art and working with technical experts, in addition to the typical work of a litigator: propounding and responding to discovery requests, taking and defending depositions, drafting court documents, engaging in legal research, and so on. When challenging a patent's validity, the litigator's job is to persuade the court that there is sufficient prior art to invalidate the patent at issue. They are mainly involved in patent infringement lawsuits or revocation proceedings.

An in-house counsel is responsible for advising and working with various teams on a variety of matters. This necessitates a thorough understanding of contracts, technology, intellectual property, real estate, and other related laws. Law firms, by contrast, deal with similar work profiles but concentrate on a particular sector of law. A general counsel, sometimes known as a chief legal officer or corporation counsel, is the company's primary attorney and source of legal guidance. Because the GC's input is crucial to corporate decisions, they usually report directly to the CEO.

Intellectual property law has grown rapidly in recent years and has opened up new opportunities and prospects. As long as invention and innovation exist, IP professionals will always be needed to obtain rights to fresh ideas and protect the ownership of existing creations. Because people's imaginations never truly run out, the work of an intellectual property professional endures. Intellectual property can be a particularly intriguing discipline and career for persons with inquisitive minds.
<urn:uuid:c7222344-31a5-4ccb-86bf-7d59019a18ee>
CC-MAIN-2024-38
https://www.copperpodip.com/post/intellectual-property-as-a-career
2024-09-11T02:19:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00115.warc.gz
en
0.955342
2,207
2.609375
3
Cybersecurity and Information Security. One and the same? Or two very different disciplines? While closely related, key differences exist in their roles in a company's security strategy.

In today's digital workplace, cybersecurity and information security are both integral to safeguarding sensitive data, maintaining privacy, and ensuring seamless day-to-day operations. Both practices protect valuable assets from various threats, ranging from data breaches and cyberattacks to insider threats and unauthorized access. Achieving balance and cohesiveness between these two practices is essential to establishing a robust security posture at any organization.

We can sum up information security (InfoSec) as "data protection." It is a broader concept encompassing the protection of information assets, including physical documents, intellectual property, employee records, financial information, personally identifiable information (PII), and more. Information security addresses the digital and non-digital risks posed to any data handled by a business. It transcends the digital realm and involves policies, procedures, and practices; per the National Institute of Standards and Technology (NIST), its key aim is preserving the confidentiality, integrity, and availability of information.

On the other hand, cybersecurity is a form of applied information security that evaluates risks and protects against threats. It revolves around safeguarding digital systems, networks, and devices from cyber threats. Cybersecurity measures are essential in today's technology-based day-to-day business operations, where cybercriminals look to exploit any weakness for financial or political gain. Cybersecurity attempts to thwart or mitigate the effects of common cyberattacks like phishing, social engineering, malware, ransomware, and man-in-the-middle (MITM) attacks, which hackers use to steal data, money, or intellectual property. It encompasses applied practices, technologies, and processes like password management, two-factor authentication, endpoint security, and threat detection to defend against unauthorized access, data breaches, and other cybercrimes.

While distinct, cybersecurity and information security share fundamental principles that enable them to work together effectively. Risk assessment involves identifying potential vulnerabilities and threats to an organization's assets; both disciplines emphasize categorizing a company's data and understanding common risks to that data so the organization can implement mitigation strategies. Security awareness and training also play a crucial role in preventing security breaches. Employees must be educated about best practices for handling sensitive information, recognizing phishing attempts, and adhering to security protocols. Improving employee adoption of security products and features can reduce overall cyber risk. By fostering a security-conscious culture at all levels, organizations can reduce the likelihood of successful cyberattacks.

Cybersecurity tactics involve deploying technologies like firewalls, intrusion detection and prevention systems, encryption mechanisms, and multi-factor authentication (MFA) to safeguard digital systems and networks. Cybersecurity professionals assess digital threats, patch vulnerabilities, and protect every entry point to the company network and devices. Regular security audits and third-party penetration testing help identify vulnerabilities that attackers might exploit.
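As a concrete illustration of one of those controls, the sketch below shows how a time-based one-time password (the codes behind most MFA prompts) can be derived from a shared secret. It is a minimal, generic Python example using only the standard library, not any vendor's implementation, and the secret shown is a made-up placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # current 30-second window
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1 as in the RFC
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Both the server and the authenticator app derive the same code from a shared
# secret, so a stolen password alone is not enough to pass the login check.
print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret for illustration only
```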
Training employees on end-user security products and features is also instrumental to the overall cybersecurity strategy. Information security strategies encompass a broader scope of data security across an organization, including policies for data classification, access controls, physical security, and disaster recovery. Information is categorized based on sensitivity and data regulations. Organizations can tailor suitable security measures and permissions-based access, minimizing the risk of unauthorized use of data. The boundaries between digital and non-digital security are increasingly blurred. Remote and hybrid work introduced many new personal devices to the workplace, each potentially acting as an entry point for bad actors and bringing another layer of complexity to employee access. This convergence underscores the need for a comprehensive approach that harmonizes the implementation of cybersecurity and information security measures. Consider a scenario where a financial institution aims to protect its customer data. Cybersecurity measures would involve deploying robust firewalls, encryption techniques, additional factors for authentication, and intrusion detection systems to prevent unauthorized access to its online banking platform. Simultaneously, information security strategies would dictate how customer data is stored, who has access to it, and how physical records are secured to prevent misuse or breaches. Both work together to create a holistic security strategy for the financial institution. As businesses undergo digital transformation and define their digital workplace, they must build on the harmonious integration of cybersecurity and information security principles to craft an effective security strategy. While information security practices ensure the proper handling of all forms of sensitive data, cybersecurity practices dictate how to protect that data in real time. By embracing the interplay of information security and cybersecurity, businesses can bolster their defenses and navigate the complex challenges of an ever-evolving threat landscape. How can the IT team help keep the organization secure? For starters, make sure the technology you use, especially those that can potentially open doors to cyber threats like remote access connections and remote support sessions, are as airtight as possible. Rescue’s enterprise-grade remote support security measures are designed to lock out malicious actors and keep organization and end user data secure, both with basic security measures and optional ones you can turn on based on your specific security policies or compliance requirements. See how Rescue can help keep your business supported and protected.
<urn:uuid:4ed81398-aab4-43c3-a9c7-cb9c608493fd>
CC-MAIN-2024-38
https://www.logmeinrescue.com/pt/blog/whats-the-difference-between-info-security-and-cybersecurity
2024-09-18T12:31:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00415.warc.gz
en
0.916357
1,045
3.140625
3
A Campus Area Network (CAN) is a network designed specifically to meet the needs of an extensive area like a university, corporate campus, or hospital complex. It connects multiple buildings, providing seamless internet and intranet access across a more extensive area than a traditional Local Area Network (LAN). This network type offers a robust solution for organizations spread across several buildings or campuses, ensuring consistent connectivity throughout. The Range and Scope of a Campus Network CANs typically span a limited geographical area, often not exceeding the boundaries of the specific campus it serves. They enable the integration of various technologies like Wi-Fi, fiber optics, and Ethernet, making them versatile for diverse organizational needs. The scope of a CAN includes not only internet access but also the integration of services like VoIP, video conferencing, and secure data transfer among different departments within the campus. How a Campus Area Network Compares to LAN and WAN While a LAN is confined to a smaller area like a single building, a CAN extends this coverage, linking multiple buildings or areas within a specific geographic location. In contrast, a WAN covers a much broader area, often spanning cities or even countries. CANs provide a middle ground, offering more extensive coverage than LANs but more localized services than WANs. Core Elements of a Campus Network The heart of a CAN lies in its infrastructure, which includes: - Networking Hardware: Routers, switches, and firewalls that manage data traffic. - Connectivity Medium: Ethernet cables, fiber optics, or wireless connections. - Access Points: Strategically placed to ensure wide coverage. - Network Services: Like DHCP servers for dynamic IP address allocation and DNS servers for resolving network names. Understanding the Backbone Network in a Campus Area Network At the core of a CAN is its backbone network. This pivotal component acts as the primary pathway for data traffic, connecting various subnetworks within the campus. It’s designed to handle high data traffic volumes, ensuring swift and reliable data transfer across the network. Campus Area Networks on Campus A CAN serves as the backbone of daily digital interactions, connecting students, faculty, and staff to vital resources. Whether it’s accessing the university library’s database, submitting assignments, or facilitating research, a CAN ensures that all these activities happen smoothly and efficiently. Network Design Strategies for Optimal CAN Performance Designing a CAN for optimal performance on a university campus involves several key strategies: - Scalability: The network must accommodate growing numbers of users and evolving technological needs. - Reliability: Ensuring minimal downtime and quick recovery in case of outages. - Security: Protecting sensitive academic data while allowing necessary access. - Bandwidth Management: Balancing the network load to prevent congestion during peak usage times. High-Speed Networking for Academic Needs High-speed networking is not a luxury but a necessity in the modern academic landscape. It supports the increasing demand for online resources, digital collaboration tools, and remote learning options. A robust CAN setup not only enhances the overall academic experience but also prepares students for the fast-paced, technology-driven world they will enter post-graduation. 
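To make the addressing side of a CAN concrete, here is a minimal Python sketch of a campus address plan: one block carved into per-building subnets that the backbone routes between. The address range and building names are hypothetical, chosen purely for illustration.

```python
import ipaddress

# Hypothetical campus block and building names (illustrative only).
campus_block = ipaddress.ip_network("10.20.0.0/16")
buildings = ["Library", "Engineering", "Admin", "Dormitories", "Data Centre"]

# Carve one /24 (254 usable hosts) out of the campus block per building.
subnets = campus_block.subnets(new_prefix=24)
plan = {name: next(subnets) for name in buildings}

for name, net in plan.items():
    print(f"{name:12s} {net}  gateway {net.network_address + 1}  "
          f"{net.num_addresses - 2} usable hosts")
```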
Wireless Solutions for a Flexible Campus Network A wireless campus network provides the agility needed to adapt to different teaching methods, research requirements, and student needs. It enables students and faculty to access the network from anywhere on campus, be it in lecture halls, libraries, or outdoor study areas. Deployment of Wi-Fi Access Points Across the Campus Strategic deployment of Wi-Fi access points is crucial for comprehensive coverage. This includes not just placing access points in high-traffic areas, but also ensuring there are no dead zones where connectivity drops. Access points should be distributed to balance the network load and accommodate varying user densities. With MU-MIMO, several devices can connect without having to wait. Wireless vs. Wired Campus Networks: Pros and Cons While wireless networks offer flexibility and ease of access, wired networks are known for their reliability and speed. Wireless networks, however, can be more susceptible to interference and security challenges. Wired networks, on the other hand, require physical infrastructure, which can limit mobility and adaptability. The ideal campus network often includes a blend of both, leveraging the strengths of each to provide a robust, flexible, and reliable network. Enhancing Mobile Access with Campus Network Wireless Solutions Mobile access is no longer a convenience but a necessity on campus. Wireless solutions cater to the increasing use of smartphones, tablets, and laptops, facilitating a mobile-first approach. This includes ensuring that educational resources, communication platforms, and administrative portals are easily accessible on mobile devices. Securing the Wireless Network within the Campus Security is a concern in wireless campus networks. Protecting sensitive academic data and personal information requires robust security protocols. This includes secure authentication methods, encryption, regular network monitoring, and proactive threat detection measures. Ensuring a secure wireless network not only protects against cyber threats but also builds trust among the users. Practical Use Cases of a Campus Area Network Campus Area Networks (CANs) are enablers of modern education and research. Practical use cases of CANs in educational settings include: - Streamlined Administrative Operations: Automating processes like enrollment, scheduling, and student record management. - Enhanced E-Learning: Supporting online courses, digital libraries, and virtual classrooms. - Campus Safety: Integrating surveillance systems and emergency communication networks. How Campus Networks Transform Educational Environments CANs are pivotal in transforming educational environments into dynamic, interactive, and interconnected spaces. They facilitate a shift from traditional learning methods to more collaborative and tech-driven approaches. This includes flipped classrooms, where students access lectures online and use class time for discussions and practical applications. The Role of Campus Area Networks in Research and Development In research and development, CANs enable high-speed data transfer and collaboration across different departments and even institutions. They support advanced research activities that require large bandwidth and specialized network capabilities, like sharing complex simulation data or conducting remote experiments. 
Real-world Deployment: Use Cases in University and Corporate Campuses In university settings, CANs are used to connect different faculties and research centers, facilitating interdisciplinary collaboration. In corporate campuses, they link various departments and data centers, streamlining communication and data sharing. This network setup enhances efficiency and fosters innovation by breaking down silos. Enhancing Collaborations with Innovative Campus Network Use Cases CANs empower educational and corporate campuses to engage in innovative collaborations. They enable video conferencing and remote collaboration tools, allowing for joint ventures between institutions located in different parts of the world. This global connectivity broadens the scope of academic and professional partnerships. Large Files and Resources Sharing on a Robust Campus Network A key advantage of a well-implemented CAN is the ability to share large files and resources quickly and reliably. This capability is essential in today’s data-driven world, where sharing high-definition video content, large research datasets, and extensive digital libraries is commonplace. A robust CAN ensures that these large files are transferred efficiently, without compromising network performance. Designing the Optimal Campus Area Network Infrastructure Designing the optimal Campus Area Network infrastructure requires a strategic approach that balances current needs with future growth. This involves a comprehensive understanding of the campus layout, user demands, and technological advancements. The goal is to create a network that is not only robust and reliable but also adaptable to evolving educational and technological trends. Key Considerations for Campus Area Network Design When planning a CAN, several key factors must be taken into account: - User Density and Distribution: Assess the number of users and their distribution across the campus to ensure network capacity meets demand. - Application Requirements: Understand the types of applications the network will support, whether they are data-intensive research applications or general administrative tasks. - Future-Proofing: Plan for future expansions and technological upgrades to avoid obsolescence. - Budget Constraints: Balance the best possible network design with available financial resources. Choosing the Right Cable and Router Configurations for Campus Connectivity Selecting the appropriate cables and routers is crucial for a high-performing CAN. Factors to consider include: - Cable Type: Evaluate the merits of fiber optic vs. Ethernet cables based on bandwidth needs and campus layout. - Router Capabilities: Ensure routers can handle the expected network load and offer advanced features like Quality of Service (QoS) management. Building a Scalable and Secure Network for an Expanding Campus Scalability and security are two pillars of a successful CAN. A scalable network can accommodate increasing numbers of users and devices, while a secure network protects against cyber threats. Implementing robust security protocols and regularly updating them is as vital as planning for physical network expansions. Optical Fiber vs. Ethernet in Campus Area Network Deployment The choice between optical fiber and Ethernet depends on several factors: - Bandwidth Requirements: Optical fiber typically offers higher bandwidth than Ethernet. - Distance Considerations: Fiber optic cables are better suited for longer distances without signal degradation. 
- Cost Implications: Ethernet might be more cost-effective for smaller networks or shorter distances. Monitoring and Maintenance of Campus Network Infrastructure Ongoing monitoring and maintenance are critical for the longevity and efficiency of a CAN. Regular performance checks, timely updates, and proactive troubleshooting can prevent network failures and ensure consistent connectivity. This also includes training staff to manage and maintain the network effectively. Designing an optimal Campus Area Network infrastructure requires careful consideration of various factors including user needs, technological advancements, scalability, security, and budget. By making informed choices about cable and router configurations, and prioritizing regular monitoring and maintenance, universities and corporate campuses can establish a robust and efficient network infrastructure tailored to their unique needs.
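As a rough illustration of the user-density and bandwidth considerations above, the following Python sketch estimates access-point counts and uplink capacity per building. Every figure in it (per-user throughput, concurrency, users per AP) is an assumed placeholder for illustration, not a vendor recommendation.

```python
# Rough capacity planning: how many access points and how much uplink
# bandwidth a building might need at peak. All constants are assumptions.

def plan_building(users: int, mbps_per_user: float = 2.0,
                  concurrency: float = 0.6, users_per_ap: int = 40):
    """Return (access_points, uplink_mbps) for a building."""
    active = users * concurrency              # users online at peak
    aps = -(-int(active) // users_per_ap)     # ceiling division
    uplink = active * mbps_per_user           # aggregate peak demand
    return aps, uplink

for name, users in {"Lecture Hall": 600, "Library": 900, "Dorm A": 350}.items():
    aps, uplink = plan_building(users)
    print(f"{name:12s} ~{aps} APs, ~{uplink:.0f} Mbps uplink at peak")
```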
<urn:uuid:670bcff8-d187-4030-aad3-ec4f1e4b4c5f>
CC-MAIN-2024-38
https://purple.ai/blogs/campus-area-network-can/
2024-09-07T14:38:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00515.warc.gz
en
0.909759
2,017
2.671875
3
The world runs on data centres. Without them, critical services and infrastructure would be disrupted and lead to complications in the lives of millions. In the past, data centre security has been an essential aspect of their operation, with tight ground perimeters becoming industry standard. It was important to control who could and could not enter the data centre’s perimeter. But now, a new threat has emerged in the form of unmanned drones, and a different type of perimeter is necessary to protect data centre operations. The growth of the drone industry In Canada, the drone industry is worth over $6 billion, and it’s still growing. Drones are forecasted to help Canadians through creating jobs, improving supply chains, and connecting communities. They are also seeing amazing applications in agriculture, insurance, mining, construction, media, and law enforcement industries. However, these are legitimate operations, and with exciting, new technology come concerning new threats to security. Gatwick Airport in England saw 46.9 million passengers pass through its gates in 2019. But in 2018, during the hectic Christmas period, a drone flying near its runway disrupted 1,000 flights and 140,000 passengers. The army was deployed to help police locate the drone pilot, but no suspects were ever apprehended, nor were any confirmed photos of the drone ever taken. Airport officials labeled the drone interference “a deliberate act of disruption” which affected thousands. In the emerging drone industry, the potential for misuse is clear. Transport Canada, in their Drone Strategy extending to 2025, acknowledges that “we are at the early stages of collectively understanding the security threats and risk posed by drones at airports and other critical infrastructure”, and the Gatwick incident saw traditional perimeter protection methods such as physical barriers and advanced surveillance technology overcome by a drone which can cost less than a plane ticket. The fact that police, and even the army, could not locate a singular drone and pilot during the Gatwick incident highlights a pressing issue: they did not have technology to detect and verify the presence of drones in an actionable amount of time. If the drone had been carrying equipment to conduct hostile reconnaissance, or to hijack network signals and disrupt safety systems and software, or even a biological weapon, this issue would be even more apparent. Now consider a similar threat for one of Canada’s large data centres. Maintaining an effective air perimeter More and more people rely on data centres, even without knowing it, to provide the infrastructure that powers daily digital living. With this increased reliance, ensuring operations run smoothly and securely is critical. Data centre personnel must be ready to combat physical and cyber threats. Any period of downtime can cause mass disruption of critical services and will result in significant cost implications for data centre operators, and the people and businesses who have come to rely so heavily on their services. Of course, in recent years we have seen incredible advancements in data centre security. However, it is important to not become complacent or overly confident in any security practice, as there is no guarantee what worked yesterday will work today. To ensure maximum security, data centres must be aware of new threats, and work to be one step ahead of potential risk factors. 
Modern network-enabled security systems offer impressive ground perimeter protection, with video surveillance cameras, thermal imaging, and radar to follow intruders' actions. On the inside, server rooms are equipped with cameras and sensors that further detect and deter would-be criminals. But a comprehensive ground perimeter is no longer enough when the threats of tomorrow arrive from the sky. A drone could potentially damage a data centre by means of physical or logical attacks.

Dedicated detection software can complement existing network-enabled physical security solutions by detecting drones based on the radio frequency (RF) signals that they emit. This technology can identify the make and model of more than 200 drones, whether commercial, hobbyist, or DIY in nature. Additionally, it can pinpoint the location of the drone's pilot. However, not all drones have an active pilot nearby, and these present new challenges. They can be flown autonomously via Global Positioning System (GPS) waypoints from miles away, sometimes making it difficult to track the operator. The eyes and ears of security personnel alone are not enough to give data centre security forces advance warning of drone approaches and indications of their intent.

Drone identification and threat analysis
The detection and tracking of drones are of utmost importance for data centre security, but arguably just as important is establishing why a given drone is present. It's common for data centres to operate a no-fly zone, so security and police forces need to quickly discern whether a drone has accidentally strayed into restricted airspace through carelessness or has intentionally approached its target with malicious intent. If a confused amateur drone pilot has genuinely misunderstood the laws regarding drone usage, then informational signage or speakers issuing pre-recorded alerts could be enough to make it clear that drones are not permitted in a given location. In either case, rapid identification is necessary to facilitate informed decision making. Drone detection software can interface with PTZ (pan/tilt/zoom) cameras to lock on to and track the movements of drones. High-quality imaging can be used to determine the function of a drone or the nature of its payload to separate friend from foe.

Integrated drone detection solutions
The threat posed by drone activity will only increase as drone technology improves and options become more affordable. Deploying integrated solutions will be essential in combating physical and cyber threats. Emerging detection technology uses millions of drone images to train camera-based AI/ML software to accurately discern friendly and malicious drones, even when disguised. Remote identification protocols, known as RemoteID, are also forecast to become more commonly used by police, logistics, and medical drones to provide information that can verify a drone's legitimacy. Both breakthroughs will better protect data centres from airspace threats.

Drones are an exciting new technology with the potential for amazing impact in logistics, security, agriculture, healthcare, and other fields. But their potential for misuse must be kept in mind as we look to the future. With an integrated security solution in place, a data centre's air perimeter can be monitored just as effectively as its ground perimeter.
Further integrating these security solutions with existing video management systems (VMS) will also enable a drone detection system to be part of a larger, comprehensive security solution that will work to ensure a smarter, safer world for everyone.
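As a simplified illustration of RF-based identification, the sketch below matches an observed control-link emission against a small catalogue of known drone models. It is purely hypothetical Python: real detection products use far richer signal features, and the catalogue entries, frequencies, and protocol names here are invented for the example.

```python
# Illustrative only: match a detected control-link frequency and protocol
# against a small catalogue of known drone models. Entries are made up.

KNOWN_DRONES = [
    {"model": "Hobbyist quadcopter A", "freq_mhz": 2437, "protocol": "proto-x"},
    {"model": "Commercial mapper B",   "freq_mhz": 5745, "protocol": "proto-y"},
    {"model": "DIY racer C",           "freq_mhz": 5800, "protocol": "proto-z"},
]

def classify(freq_mhz: float, protocol: str, tolerance_mhz: float = 5.0):
    """Return the best catalogue match for an observed RF emission, if any."""
    for entry in KNOWN_DRONES:
        if entry["protocol"] == protocol and abs(entry["freq_mhz"] - freq_mhz) <= tolerance_mhz:
            return entry["model"]
    return None

detection = {"freq_mhz": 5747.2, "protocol": "proto-y"}
match = classify(**detection)
print(match or "Unknown emitter - escalate to security personnel")
```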
<urn:uuid:ac41e411-0eb6-490f-9e7b-8eed74308740>
CC-MAIN-2024-38
https://www.itworldcanada.com/blog/drones-pose-new-threats-to-data-centre-security/495007
2024-09-08T18:53:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00415.warc.gz
en
0.951114
1,342
2.765625
3
What is in your data? You need to know Data-driven decision making empowers problem solvers to address business challenges with confidence. From the factory floor to the marketing department, big data drives policy and enables businesses to connect more effectively with customers. However, unless managed properly, too much data can prove costly. Solve the surplus data problem with effective data governance. Risks Associated with Surplus Data With everything from websites to appliances generating data, organizations gather vast quantities of information each day. As a result, as many as 65 percent of companies report that they deal with a problematic surplus of data. And too much data, if mis-handled, can have significant consequences. First, consider the hard costs of dealing with data. Big data analytics requires a supporting IT infrastructure. Storage space on servers or in the cloud, bandwidth to transfer data, and personnel costs for data scientists all add up. Using those resources to handle useless data wastes precious budget. But low-quality data carries even greater risk than the financial cost. For example, when decision makers use outdated information to determine policy, the resulting strategies can damage customer relationships and reduce productivity. Surplus data also increases cybersecurity and compliance risks. Large volumes of data create greater potential for data breach and a more attractive target for hackers. Additionally, organizations must store sensitive data in a way that demonstrates regulatory compliance. For instance, HIPAA and GDPR strictly regulate how organizations handle personal information. Emphasize Quality Over Quantity The statistics prove emphatically that data-driven decisions spur growth, help businesses attract and retain customers and inform powerful strategies. And yes, companies need a sizeable amount of data to inform the process. However, simply collecting more data does not always result in better decisions. To effectively manage and use data, companies should prioritize data quality over data quantity. This involves collecting the right data, meaning actionable data that is directly related to business goals and problems to solve. In addition to collecting the right data, organizations ensure quality data by implementing comprehensive data governance. Data governance means that companies know what data they have and where it resides. They implement multi-faceted cybersecurity to keep data safe, and they carefully define both information access controls and data retention policies. Benefits of Effective Data Governance Organizations that build a culture of data governance experience several key benefits, including: - Financial savings – When organizations carefully manage their information assets to remove ROT (redundant, obsolete or trivial data), they realize cost savings related to data storage and eDiscovery. - Reduced risk of data breach – Multi-layered security and carefully managed access controls mean the right people can access the data and the wrong people cannot. - Improved productivity – When employees can access the right data at the right time, they work smarter and more effectively. - Regulatory compliance – A good data governance program includes automated compliance monitoring to reduce the risk of non-compliance. - Take advantage of emerging technologies – With the right data, organizations can implement new technologies to drive innovation. 
For example, digital twin technology can revolutionize research and development, but its successful implementation depends on high-quality data.

Start Now to Build Robust Data Governance
Successful data governance takes time and careful planning. Get the right stakeholders involved from the beginning, start simple, and partner with data governance experts for best results. The consultants at Messaging Architects will help you get started by identifying your data assets and developing a comprehensive data governance plan to categorize, secure, and monitor them.
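To make the ROT clean-up mentioned in the benefits above tangible, here is a minimal Python sketch that flags duplicate files by content hash and files untouched for years on a shared drive. The scan root and age threshold are assumptions; a real programme would add classification, legal-hold checks, and human review.

```python
# Minimal ROT (redundant, obsolete, trivial) discovery sketch for a file share.

import hashlib
import os
import time
from collections import defaultdict

OBSOLETE_AFTER_DAYS = 3 * 365
root = "/srv/shared"        # hypothetical file share

seen = defaultdict(list)    # content hash -> list of paths
obsolete = []

for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        with open(path, "rb") as fh:
            digest = hashlib.sha256(fh.read()).hexdigest()
        seen[digest].append(path)
        age_days = (time.time() - os.path.getmtime(path)) / 86400
        if age_days > OBSOLETE_AFTER_DAYS:
            obsolete.append(path)

redundant = [paths for paths in seen.values() if len(paths) > 1]
print(f"{len(redundant)} duplicate groups, {len(obsolete)} stale files to review")
```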
<urn:uuid:ca319726-bc85-4233-9cab-1c2382001554>
CC-MAIN-2024-38
https://messagingarchitects.com/surplus-data-governance/
2024-09-14T21:33:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00815.warc.gz
en
0.910452
718
2.640625
3
Pages and forms are similar in certain ways. A page is a database design element that displays information. Pages can be used anywhere in your application that you have text, graphics, or an embedded control, such as an outline, to present to the user. A page or form can contain the following elements:
- Actions: Actions automate tasks for the user. Add actions to the menu in the Notes® client, or add actions with buttons or hotspots on a page, XPage or form. For more information, see the topic Actions in the chapter "Adding Automation to Applications."
- Java™ applets: Use Java™ applets to include small programs, such as an animated logo or a self-contained application, in a page, XPage or form. For more information about including Java™ applets, see the chapter "Including Java™ Applets in Applications."
- Attachments: Attach files to a page or form so users can detach or launch files locally.
- Computed text: Use computed text to generate dynamic text based on formula results.
- Embedded elements: You can embed the following elements in a page, XPage or form: a view or folder pane, navigator, outline, date picker, or Instant Messaging Contact List. Use these elements alone or combine them to control how users navigate through your application.
- Graphics: Place a graphic anywhere on a page, XPage or form. Use graphics to add color to a page, XPage or form, or to create imagemaps.
- Horizontal rules: Add horizontal rules to separate different parts of a page or form, or to make a page or form more interesting visually.
- HTML: If you have existing HTML, or you prefer using HTML to the formatting tools Domino® Designer offers, you can import, paste, or write your own HTML on a page or form. You can also convert pages and forms to HTML.
- Imagemaps: An imagemap is a graphic you enhance with programmable hotspots. Hotspots, in the form of pop-up text, actions, links, and formulas, perform an action when clicked by a user. Use imagemaps as navigational structures in an application.
- Layers: Layers let you position overlapping blocks of content on a page, form, or subform. Layers give you greater design flexibility because you can control the placement, size, and content of information. For more information on layers, see the topic "Layers" in this chapter.
- Links: Add links to take users to other pages, views, databases, or URLs when they click on text or a graphic.
- OLE objects and custom controls: Designer supports objects that are linked and embedded (OLE) as well as custom controls, sometimes called OCX controls. Including a linked or embedded object on a page or form lets you use a page or form as a gateway to another application. For example, an Employee Information page or form can include an OLE object that links to a Word Pro® file where the employee annual performance reviews are stored. Notes/FX™ 2.0 fields create a two-way exchange between Notes® and a supporting application by allowing field data to be shared and updated from either application. For more information on including OLE objects and custom controls on a form, see the chapter "Including OLE Objects in Applications."
- Sections: A section is a collapsible and expandable area that can include objects, text, and graphics.
- Style sheet (CSS) shared resources: You can find and insert a cascading style sheet (CSS) as a shared resource on a page, form, or subform. For more information on style sheets, see the topic "Creating style sheets as shared resources" in this chapter.
- Tables: Use tables to summarize information, align text and graphics in rows and columns, or position elements on a page, XPage or form. For more information on creating programmable tables, see the topic "Creating programmable tables" in this chapter. For information on creating and formatting tables, see the topic Creating tables in the Notes® Client Help.
- Text: Use text anywhere on a page, XPage or form and apply text attributes, such as color, size, and font styles, to the text. For complete information on creating and formatting text, see Notes® Client Help.

How pages compare to XPages and forms
Pages, XPages and forms all display information to users. Pages and forms can be viewed in the Notes® client or over the Internet, while XPages are viewed only over the Internet. Forms and XPages also let you collect information. Fields, subforms, layout regions, and some embedded controls can be used on forms, but not pages. A page is best suited for displaying information, while a form is more suitable for gathering information.

Using pages in composite applications
Composite applications incorporate a number of different components in the same user experience. If you are creating a page to use as a composite application component, be aware that it may be positioned in different areas of the screen and combined with multiple other components. For more information about the elements of composite applications, refer to the IBM® Composite Application wiki at http://www-10.lotus.com/ldd/compappwiki.nsf.
<urn:uuid:60b4a7ad-f22c-46e1-a588-2bf8eb68be76>
CC-MAIN-2024-38
https://help.hcl-software.com/dom_designer/10.0.1/basic/H_ABOUT_PAGES_4715_ABOUT.html
2024-09-16T02:01:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.29/warc/CC-MAIN-20240916012328-20240916042328-00715.warc.gz
en
0.847699
1,087
2.765625
3
Open Source Intelligence (OSINT) is the practice of collecting and analyzing publicly available information from a variety of sources to gain insights, assess risks, and make informed decisions. Its history can be traced back to the early days of espionage and military intelligence, but it has evolved significantly with the advent of the internet and digital technologies. OSINT sources include social media, websites, news articles, academic publications, and more. Main uses of OSINT encompass national security, law enforcement, corporate intelligence, and competitive analysis. It aids in threat assessment, fraud detection, due diligence, and identifying emerging trends. Key features of OSINT include data collection, analysis, and visualization tools, as well as techniques for verifying information and protecting privacy. In the future, OSINT is poised to expand further as the internet grows, offering even greater access to information. Advances in artificial intelligence and data analytics will enhance the ability to process vast amounts of data quickly. As the digital landscape evolves, OSINT will remain a vital tool for decision-makers across various sectors, helping them navigate an increasingly complex and data-rich world.

List of Best Open Source Intelligence Tools

NexVision
NexVision is an AI-powered OSINT tool that automates data collection and processing to drive decision-making. NexVision stands as a game-changer in the world of open source intelligence tools. It's not just a tool; it's a powerful ally for corporations, government entities, and researchers alike. Unlike its counterparts, NexVision boasts unparalleled comprehensiveness, offering extensive OSINT data collection from the surface and dark web, including a vast social media data lake. What sets it apart is its use of cutting-edge artificial intelligence (AI), which meticulously sifts through data to eliminate false positives, ensuring users access the most precise and reliable intelligence available.
- Provides accurate, timely, and actionable intelligence that empowers teams throughout the organization to make faster, more accurate decisions and amplify their impact across security operations, compliance, incident response, fraud prevention, risk analysis, and threat monitoring.
OSINT Tools Features
- Dark Web Search Engine: NexVision's dark web search engine offers automated, in-depth exploration that goes beyond superficial landing pages. Utilizing cutting-edge technology, it reaches deep into authenticated, marketplace, and account-level sources, providing unparalleled access and invaluable insights for security, intelligence, and research applications.
- Real-time Crisis Management: this module specializes in real-time crisis alerts, employing advanced technology to deliver timely alerts and comprehensive analysis during critical events. It caters to users seeking immediate, reliable information to make informed decisions in crisis situations. NexVision's Intelligence Module is a valuable tool for staying updated and responding effectively to emerging crises.
- Geopolitical Intelligence: provides cutting-edge solutions for organizations seeking in-depth geopolitical insights, offering real-time analysis, trend monitoring, and predictive intelligence to aid decision-makers in understanding global events and risks.
Social Links
Social Links, a software company, specializes in AI-driven solutions that extract, analyze, and visualize data from diverse open sources, including social media, messaging platforms, blockchains, and the Dark Web. Their flagship product, SL Professional, empowers investigators and data security professionals by enhancing their efficiency in achieving work objectives. SL Professional boasts a comprehensive suite of custom-designed search methods, spanning over 500 open sources. These advanced search queries, some leveraging machine learning, enable users to filter data effectively during collection. But Social Links OSINT solutions go beyond data gathering; they also offer advanced analysis tools that refine information throughout investigations, providing accurate results for a clearer, more comprehensive understanding of the case.
- A professional bundle of 1,000+ original search methods for over 500 open data sources, including all major platforms across social media, messengers, blockchains, and the Dark Web.
- Machine learning for advanced automation, swiftly retrieving a wide spectrum of information and ensuring precise results at impressive speeds.
- Bespoke analysis tools that enable data to be significantly enriched and molded to the user's particular purposes.
- Seamless integration within any IT infrastructure.

Maltego
Maltego is a powerful and versatile OSINT data analysis tool designed for gathering and visualizing information from various publicly available sources on the internet. It's particularly valuable for digital investigations, threat intelligence, and cybersecurity purposes. Maltego enables users to connect, aggregate, and analyze data points from a wide range of online platforms, including social media, websites, DNS records, and more, helping users uncover hidden relationships, patterns, and insights. Its intuitive graphical interface and data visualization capabilities make it an essential tool for professionals seeking to conduct in-depth investigations and research in the digital realm.
- Maltego enables users to generate visual graphs depicting a wide array of entities, including individuals, organizations, domains, IP addresses, and many others.
- Maltego integrates with numerous data sources, including public data sets, social media platforms, DNS records, online services, and more.
- Maltego assists in the automatic generation of links between entities, facilitating the identification of connections and relationships based on collected data.
- Maltego supports collaboration among users by allowing the sharing of graphs and collected data.

Recon-ng
Recon-ng is an open-source reconnaissance framework primarily used for information gathering and web reconnaissance in cybersecurity. Developed in Python, it provides a powerful and extensible platform for security professionals and penetration testers to automate the process of collecting, analyzing, and organizing data from various online sources, including search engines, social media, and public databases. Recon-ng offers a wide range of modules and plugins that facilitate the discovery of valuable information such as subdomains, email addresses, vulnerabilities, and network infrastructure details. Its flexibility allows users to customize and extend its functionality, making it a valuable asset in ethical hacking, vulnerability assessment, and digital forensics.
Recon-ng simplifies the complex task of reconnaissance by providing a structured and efficient approach, helping security experts uncover critical insights and vulnerabilities in their target environments, ultimately contributing to better-informed decisions and enhanced cybersecurity defenses.
- Modular Architecture: Recon-ng's modular architecture allows users to easily extend its functionality. It provides a wide range of pre-built modules for tasks such as DNS enumeration, subdomain discovery, and email harvesting. Users can also develop custom modules tailored to their specific needs, making it highly adaptable to different reconnaissance objectives.
- Automated Data Gathering: Recon-ng streamlines the process of collecting information from diverse sources, automating tasks that would otherwise be time-consuming. Its modules can fetch data from search engines, social media platforms, public databases, and more. This automation not only saves time but also ensures consistency and thoroughness in data collection.
- Data Organization and Analysis: The framework excels at organizing and presenting the gathered data in a structured manner. It provides tools to analyze the collected information, helping users identify patterns, vulnerabilities, and potential attack vectors. Recon-ng's reporting capabilities make it easier to communicate findings effectively to stakeholders.
- Customization and Extensibility: Recon-ng is highly customizable, allowing users to tailor their reconnaissance efforts to the specific requirements of a project. Security professionals can modify existing modules or create new ones, adapting the tool to evolving cybersecurity challenges and ensuring it remains a versatile asset in their toolkit.

SEON
Identity verification through cross-referencing social media and online platform accounts is gaining prominence for compelling reasons. It erects a formidable barrier against fraudsters, who are deterred by the effort and resources required to fabricate profiles. Moreover, this approach offers a comprehensive view of an individual's digital presence and can provide insights into their socioeconomic status, especially in regions with limited access to financial data. Additionally, the choice of social media connections can offer valuable cues about a person's identity, making it a multifaceted tool for confirming and understanding individuals' authenticity.
- Fraud Deterrence: One of the primary features is its effectiveness in deterring fraud. The high barrier to entry discourages fraudsters from attempting to deceive the verification process, enhancing overall security.
- Digital Footprint Analysis: By cross-referencing various online accounts, this method compiles a comprehensive digital footprint of the individual.
- Socioeconomic Insights: In regions where financial data is limited or unavailable, analyzing linked social media and online platform accounts can provide valuable socioeconomic insights.
- Identity Validation: Helps organizations and platforms confirm the authenticity of users and minimize the risk of identity theft or fraudulent activities.
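To give a flavour of the kind of automated lookups frameworks such as Recon-ng perform, the short Python sketch below tries to resolve a handful of common subdomain labels for a target you are authorised to assess. It uses only the standard library; the target domain and wordlist are placeholders, and it is not part of any of the tools described above.

```python
import socket

target = "example.com"                                  # placeholder target
wordlist = ["www", "mail", "vpn", "dev", "staging", "api"]

found = {}
for label in wordlist:
    host = f"{label}.{target}"
    try:
        found[host] = socket.gethostbyname(host)        # DNS A-record lookup
    except socket.gaierror:
        pass                                            # label does not resolve

for host, addr in found.items():
    print(f"{host:25s} {addr}")
```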
<urn:uuid:1230155c-d42e-494e-b0f0-6b084c99decd>
CC-MAIN-2024-38
https://ipwithease.com/best-open-source-intelligence-tools-osint-tools/
2024-09-19T20:53:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00415.warc.gz
en
0.883031
1,840
2.703125
3
Blockchain is a digital transaction and identity platform that operates as a digital ledger, maintaining an updated list of all transactions. So, what is blockchain? The power of blockchain lies in the decentralized ledger, which keeps a record of each transaction that occurs across a fully distributed peer-to-peer network. Blockchains can be public or private. Blockchain technology can be used for identity and verification because its strong cryptography validates and chains together blocks of transactions, making it very difficult to tamper with or manipulate a transaction record. To clarify, cryptocurrency (for example, Bitcoin, a digital form of money) is not the same thing as blockchain. Bitcoin leverages blockchain technology, but blockchain applications are much broader than cryptocurrency alone. For example, blockchain and AI—two evolving technologies—have the potential to transform the digital transaction management (DTM) market. Leveraging blockchain-based smart contracts can be particularly useful for processes that require multiple parties or steps. However, because blockchain is in its early stages, Aragon recommends holding off on deploying blockchain technology for digital identity or digital signatures; current DTM offerings are still safer.
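To illustrate how chaining blocks together with strong cryptography makes tampering evident, here is a minimal Python sketch. It is an educational toy rather than a real blockchain: there is no consensus, no network, and no digital signatures.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents (everything except its own hash field)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

genesis = make_block(["genesis"], prev_hash="0" * 64)
nxt = make_block(["alice -> bob: 5"], prev_hash=genesis["hash"])

# Tampering with an earlier block breaks every later link: the recomputed hash
# of the altered block no longer matches what the next block recorded.
genesis["transactions"].append("mallory -> mallory: 999")
print(block_hash(genesis) == nxt["prev_hash"])   # False -> tampering detected
```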
<urn:uuid:6d4c2318-bbaa-4cc1-bafa-b0964d1723fe>
CC-MAIN-2024-38
https://aragonresearch.com/glossary-blockchain/
2024-09-07T17:46:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00615.warc.gz
en
0.915086
233
3.359375
3
In today's data-driven world, organizations collect and store vast amounts of information. However, handling such large datasets can be challenging, especially when analyzing, storing, and securing them. This is where data generalization comes into play. In this article, we will discuss generalization, why it's important, and the security concerns related to it.

What is Data Generalization?
Data generalization is a technique used to create summary information from detailed data. It involves reducing the granularity of data by aggregating or combining individual data points into broader categories or ranges. The goal is to simplify the data while preserving its essential characteristics and patterns. Consider a dataset containing the ages of individuals. Instead of storing each person's exact age, we can generalize the data by creating age groups such as "0-10", "11-20", "21-30", and so on. This way, we reduce the level of detail while still maintaining the overall distribution of ages.

Forms of Data Generalization:
- Aggregation: Grouping data points based on common attributes or ranges.
- Binning: Dividing continuous data into discrete intervals or bins.
- Rounding: Approximating numerical values to a specified precision.
- Sampling: Selecting a representative subset of data from a larger dataset.

Importance of Data Generalization:
- Improved Performance: Generalized data consumes less storage space and processes faster, improving overall system performance.
- Easier Analysis: Summarized data is easier to comprehend and analyze, enabling quicker insights and decision-making.
- Privacy Protection: Generalizing sensitive data helps protect individual privacy by reducing the risk of identification.
- Compliance: Generalization techniques can help organizations comply with data protection regulations like GDPR and HIPAA.

Security Aspects of Data Generalization
Data generalization plays a crucial role in ensuring security. By reducing the granularity of sensitive information, it becomes more difficult for unauthorized parties to identify specific individuals or reveal confidential details. However, it's essential to strike a balance between data utility and privacy protection. Consider a healthcare database containing patient records. Instead of storing exact birth dates, the database can store just the birth year or an age range. This approach helps protect patient privacy while still allowing for meaningful analysis.

Implementing Data Generalization
To implement generalization effectively, organizations need robust data management tools. DataSunrise offers flexible solutions for data security, audit rules, masking, and compliance. Their comprehensive suite of tools empowers businesses to generalize data seamlessly while maintaining data integrity and security.

Data generalization is a powerful technique that simplifies datasets, improves performance, and enhances security. By understanding the basics, importance, and security aspects of generalization, organizations can make better decisions about data management and protection. Using DataSunrise tools and effective strategies can help businesses utilize data effectively and protect sensitive information. Request an online demo of DataSunrise's advanced data management solutions to discover how DataSunrise can help you generalize data efficiently, ensure compliance, and fortify your data security posture.
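The age-banding example above translates directly into code. The following Python sketch generalizes a few made-up patient records by banding ages and truncating ZIP codes; the band width and the records themselves are illustrative only, not part of any particular product.

```python
# Minimal data-generalization sketch: band ages, truncate ZIP codes, drop names.

def generalize_age(age: int, width: int = 10) -> str:
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

patients = [
    {"name": "A", "age": 7,  "zip": "90210"},
    {"name": "B", "age": 34, "zip": "90213"},
    {"name": "C", "age": 36, "zip": "90214"},
]

generalized = [
    {"age_band": generalize_age(p["age"]), "zip_prefix": p["zip"][:3] + "**"}
    for p in patients        # names dropped, ages banded, ZIP codes truncated
]
print(generalized)
# e.g. [{'age_band': '0-9', 'zip_prefix': '902**'}, {'age_band': '30-39', ...}, ...]
```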
<urn:uuid:4c6afb78-93eb-4646-83ec-fe1f9dd6b224>
CC-MAIN-2024-38
https://www.datasunrise.com/knowledge-center/data-generalization/
2024-09-12T16:28:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00215.warc.gz
en
0.863327
657
3.40625
3
One billion people around the world lack access to all-season roads and modern infrastructure, making it difficult to transport critical supplies, medicine, and goods to market. This lack of infrastructure is a major obstacle to development, and it can have a devastating impact on people's lives. In this TED Talk, "No Roads? There's a Drone For That," Matternet founder and CEO Andreas Raptopoulos envisions a new way to transport goods in these challenging environments. His firm's technology uses a fleet of small, autonomous drones to fly goods between designated points. These drones can carry small payloads over short distances and navigate autonomously using GPS and other sensors. This technology has the potential to revolutionize the way goods are transported in developing countries. Where urgent medication is needed in a remote location, drones could deliver the medicine within hours, saving lives and improving access to healthcare in some of the world's most remote and underserved communities. The approach is also being applied to the challenges of urban transportation. In megacities facing significant congestion, drones could provide a new layer of transportation that moves physical goods much as the internet moves information, helping to reduce traffic congestion and improve air quality. While this technology is still in its early stages of development, it has the potential to make a major impact on the way goods are transported around the world, improving access to healthcare, easing congestion, and making a real difference in the lives of billions of people. Binge-watch more of our favorite TED Talks. https://www.geminidata.com/ted-talks/
<urn:uuid:e3a91de8-1797-4a41-ab56-ae8ff474b3b1>
CC-MAIN-2024-38
https://www.geminidata.com/using-drones-to-make-the-logistics-network-of-the-future/
2024-09-12T15:27:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00215.warc.gz
en
0.95148
363
3.21875
3
The Japan Act on the Protection of Personal Information, also known as APPI, is an act designed to protect the personal information of Japanese citizens. Anyone who receives the personal data of Japanese citizens must be in full compliance with the Act on the Protection of Personal Information or risk legal action. This Japan privacy law is relevant for anyone who does business in the Land of the Rising Sun. This guide will explain all you need to know about it.

The Protection of Personal Information Act was initially passed in 2005 and marked a significant shift in how personal information is protected. Originally, private business operators were monitored by various ministries and agencies, and when breaches occurred, victims would seek restitution via tort law. The APPI looked to consolidate everything under the new Personal Information Protection Commission (PPC). The PPC is now required to provide guidelines for all private businesses, with additional guidelines for fields like healthcare, finance, and telecommunications. The most obvious equivalent is the GDPR passed in the European Union (EU); there are many similarities between the two laws.

When conducting business in Japan, all businesses must be aware of what constitutes personal information and of the amendments passed since 2005. Personal information is any piece of information that could be used to identify a specific citizen, including personal names, physical addresses, IP addresses, and government ID numbers. In short, it must pertain to a specific individual to count as personal information.

Amendments have been made twice since the original introduction of the APPI, in 2015 and 2020, in response to major changes in the business and digital worlds. The 2015 amendments required businesses that use an "opt-out" method to report this to the PPC, which then publishes it on its website. They also required businesses to obtain consent from citizens before transferring their data offshore. The amendments, however, gave the APPI no power to enforce its mandates on entities located offshore, only to liaise with the equivalent overseas body. The 2020 amendments chiefly focused on increasing the severity of penalties for noncompliance. They also gave citizens the right to request the deletion of their data, made the reporting of data breaches mandatory, and required personal information operators to disclose their addresses.

In principle, all businesses that obtain and handle the personal information of Japanese residents must comply with the APPI. There are a few exceptions under the latest incarnation of this Japan privacy law, including professional writers, the press, academics, political parties, and religious groups.

Like most major international data privacy rules, the APPI and its latest amendments don't provide much information on best practices. However, if you're in compliance with the EU's GDPR, the chances are your business is already largely in compliance. Let's take a look at how to become APPI compliant under the latest amendments. There are many ways to do so, and most of these tips are best practices for businesses operating in any jurisdiction, so you may not need to spend significant time and energy on APPI compliance specifically.

Update Encryption Standards – Adopting the latest encryption standards can spare you from a significant obligation under the APPI.
The law states that when data is encrypted to the highest standards, it’s not a requirement to report leaks and breaches to the regulatory authorities. Update Legacy Systems – Vulnerable technology could lead to breaches, and thus legal cases. Failing to take into account new technology could inadvertently expose vulnerabilities. Ensure your company has a mechanism for carrying out regular system updates. Implement Access Controls – The APPI mandates that only those employees necessary should have access to personal data. Implement a cutting-edge Identity and Access Management (IAM) system to limit who has access to data and to investigate breaches swiftly. Appoint a Data Protection Officer (DPO) – A DPO isn’t a legal requirement, but it’s highly recommended. DPOs are required to constantly check compliance and update company privacy policies as international guidelines evolve. Restrict Data Transfer – Under the amendments, you must seek the consent of individuals when transferring data. This is a significant change to the law, where implied consent was previously permissible regarding personal information. One of the major aspects of the Protection of Personal Information Act was the increase in penalties for non-compliance and data breaches. Under the amendments, offenders can be liable for fines of up to 1,000,000 Yen or 100,000,000 Yen in the case of businesses. Offenders will also have their names publicized by the PPC. Thankfully, the PPC tends to allow non-compliant businesses to amend their practices before escalating to enforcement action. In the case of data breaches, businesses are required to inform the PPC unless the data was encrypted at the highest levels. If you want to learn more about compliance best practices, learn how Delphix provides an API-first data platform enabling teams to find and mask sensitive data for compliance with privacy regulations.
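For the encryption guidance above, here is a minimal Python sketch of encrypting personal data at rest with the cryptography library's Fernet recipe (AES-based authenticated encryption). It is only an illustration of the idea, not legal advice: whether a given scheme counts as encryption "to the highest standards" under the APPI is a judgment for your counsel and the PPC guidance, and key management is deliberately omitted here. The record contents are invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, store and rotate keys in a KMS, not in code
f = Fernet(key)

record = b'{"name": "Sato Hanako", "address": "1-2-3 Chiyoda, Tokyo"}'  # illustrative personal data
token = f.encrypt(record)            # ciphertext that is safe to store at rest
print(f.decrypt(token) == record)    # True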
<urn:uuid:81246a2b-d462-4442-88f3-a4cfee40009a>
CC-MAIN-2024-38
https://www.delphix.com/glossary/japan-act-protection-of-personal-information
2024-09-13T22:19:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00115.warc.gz
en
0.949394
1,046
3.125
3
A digital fingerprint is a collection of data about a digital device, a device component, or an application (such as a browser) that uniquely identifies it. A digital fingerprint can also be a set of data that identifies a file, such as an audio or video recording. The process of gathering information to create digital fingerprints, as well as using fingerprints to identify a device, program, or file, is known as fingerprinting. Data can be collected either stealthily or with user consent.
Immutable or rarely changed data about a device, as well as about programs on it, can be used as parameters to identify said device. These include:
- Device brand and model
- Device MAC address
- Operating system
- Browser used and its settings (this collection of browser data is also called a browser fingerprint)
- Screen size
- Fonts installed
- Time zone and language settings
- TCP/IP configuration
Applying device/browser fingerprints
Fingerprints of devices or web clients such as browsers can be used to track users online. Among the purposes of such tracking are:
- Screening out bot-generated traffic
- Preventing identity theft and bank fraud
- Combating piracy
- Website traffic analysis
- Ad personalization
- Collection of data for sale
Thus, digital fingerprints can be used for both fighting and perpetrating crime.
Audio/video file fingerprints
A digital fingerprint of a media file typically consists of fragments of the respective file, for example, the sound of a lead singer’s voice at a particular part (in terms of time) of a song. These fingerprints are used to detect illegally distributed content, for example when uploaded to YouTube, and to subsequently restrict or block it. Audio file fingerprints are also used to identify songs in apps like Shazam. A fingerprint of a text document can be generated based on a unique text pattern or document template. In particular, such fingerprints are used to detect and prevent data leaks in data loss prevention (DLP) systems.
Protection against unauthorized fingerprinting
Because fingerprinting can be used to both protect users and invade their privacy, many browsers offer some level of anti-fingerprinting protection. For example, Firefox blocks third-party requests to services known to engage in fingerprinting, while Safari provides sites with a simplified data set that is harder to uniquely associate with a specific device and user. In addition, there are various utilities and browser plugins that mask or falsify data that can be used for fingerprinting.
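To make the idea concrete, the short Python sketch below shows how a handful of the device/browser parameters listed above could be canonicalized and hashed into a single identifier. It is only an illustration: the attribute values are invented, and real fingerprinting systems collect many more signals and use more robust matching than a plain hash.
import hashlib, json

def fingerprint(attributes: dict) -> str:
    """Derive a stable identifier from collected device/browser attributes."""
    # Canonicalize: sort keys so the same attributes always hash to the same value.
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

observed = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/129.0",
    "screen": "1920x1080",
    "timezone": "Europe/Berlin",
    "language": "en-US",
    "fonts": ["Arial", "DejaVu Sans", "Noto Sans"],
}
print(fingerprint(observed))  # same attributes always produce the same hex digest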
<urn:uuid:860b293b-1a88-44d1-9f4a-916f66671fd4>
CC-MAIN-2024-38
https://encyclopedia.kaspersky.com/glossary/fingerprint/
2024-09-15T02:57:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651614.9/warc/CC-MAIN-20240915020916-20240915050916-00015.warc.gz
en
0.910894
513
3.8125
4
Data Asset Management: Importance, Pillars and Benefits Data asset management is the process of managing, organizing, and optimizing data as a valuable business asset. It involves various activities such as identification, classification, storage, safeguarding, retrieval, and destruction of data. Data asset management goes beyond simply managing data; it focuses on extracting the maximum value from data. It encompasses the acquisition, tracking, utilization, optimization, and leveraging of data assets to create value. It can also be referred to as data governance, which is centered around technical data management and ensuring that data supports the business effectively. Data asset management provides oversight and vision to institutional data and the information systems, software, and hardware that make data assets available. It involves the conservation, curation, and exploitation of valuable enterprise data assets, including associated services that provide access to data. Data asset management is crucial as an IT asset because it recognizes the value of data within an organization and ensures that it is managed effectively. By treating data as a valuable resource, organizations can extract maximum value from it, leading to improved operational efficiency, better customer experiences, and competitive advantages. Data asset management encompasses data governance, which establishes policies, procedures, and controls to govern data throughout its lifecycle, ensuring compliance with regulations, minimizing risks, and protecting sensitive information. It also enables data-driven decision-making by providing access to reliable and relevant data, informing strategic planning and operational improvements. Additionally, data asset management ensures data accessibility and availability, organizing and storing data in a way that is easily accessible to authorized users. It also involves managing the entire lifecycle of data, from acquisition to disposal, ensuring data integrity, reducing storage costs, and mitigating data privacy risks. By integrating data asset management with IT asset management practices, organizations can effectively track, manage, and optimize their data assets alongside other IT assets, aligning data management with overall IT infrastructure management. What is the importance of data asset management? The importance of data asset management lies in its ability to unlock the full potential of data as a valuable business asset. By managing, organizing, and optimizing data effectively, organizations can derive significant benefits and create value. Data asset management ensures that data is properly identified, classified, and stored, which enables easy retrieval and safeguards against data loss or unauthorized access. This aligns with the activities mentioned in the definition, such as storage, safeguarding, and retrieval of data. Moreover, data asset management focuses on extracting maximum value from data. By tracking, utilizing, optimizing, and leveraging data assets, organizations can gain insights, make informed decisions, and drive innovation. This aligns with the activities mentioned in the definition, such as utilization, optimization, and leveraging of data assets. Data asset management also encompasses data governance, which ensures that data supports the business effectively. This involves technical data management and the establishment of processes, policies, and controls to ensure data quality, integrity, and compliance. 
By governing data effectively, organizations can enhance data reliability and trustworthiness, enabling better decision-making and reducing risks. Furthermore, data asset management provides oversight and vision to institutional data and the associated information systems, software, and hardware. This ensures that data assets are conserved, curated, and exploited efficiently, maximizing their value. It also includes providing access to data through associated services, enabling users to leverage data assets effectively. In summary, the importance of data asset management lies in its ability to manage, organize, and optimize data as a valuable business asset, extracting maximum value, ensuring data governance, and providing oversight and vision to data assets. By implementing effective data asset management practices, organizations can gain a competitive edge, drive innovation, and make data-driven decisions that contribute to their success. 1. Data Discovery Data discovery is a business-user-oriented data science process that involves visually navigating data and applying advanced analytics techniques to detect patterns, gain insights, answer specific business questions, and derive value from business data. It is a process of exploring data through visual tools, enabling non-technical business leaders to find new patterns, outliers, and valuable insights from various data sources. Data discovery includes statistical analysis, data type recognition, and the detection of anomalies such as missing values, outliers, or duplicates. It also involves identifying, classifying, and providing visibility into the location, volume, and context of structured and unstructured data. Overall, data discovery is a process of exploring, analyzing, and extracting meaningful insights and patterns from data to improve decision-making and understand trends in various industries. Data discovery plays a crucial role in data asset management by enabling organizations to unlock the full potential of their data assets. By conducting data discovery, businesses can gain a comprehensive understanding of their data landscape, including the location, volume, and context of structured and unstructured data. This knowledge is essential for effective data asset management as it allows organizations to identify and classify their data assets accurately. With data discovery, organizations can uncover valuable insights, patterns, and outliers within their data. By visually navigating and applying advanced analytics techniques, businesses can detect trends, answer specific business questions, and make informed decisions. These insights derived from data discovery can help optimize data asset management processes, such as data storage, retrieval, and utilization. By understanding the patterns and trends within their data, organizations can identify opportunities for improvement, streamline operations, and extract maximum value from their data assets. Furthermore, data discovery enhances data governance efforts within data asset management. It enables organizations to ensure that data supports the business effectively by identifying anomalies such as missing values, outliers, or duplicates. This helps maintain data quality and integrity, which are crucial for reliable decision-making and accurate reporting. By providing visibility into the data landscape, data discovery supports the oversight and vision required for effective data asset management. 
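As a concrete illustration of the statistical analysis, data type recognition, and anomaly detection described above, the short Python sketch below profiles a tabular dataset with pandas. It is a minimal sketch, not a full discovery tool; the file name and column names are placeholders.
import pandas as pd

df = pd.read_csv("customer_records.csv")  # placeholder dataset

# Data type recognition and basic statistics
print(df.dtypes)
print(df.describe(include="all"))

# Anomaly detection: missing values, duplicates, and simple outliers
print("Missing values per column:\n", df.isna().sum())
print("Duplicate rows:", df.duplicated().sum())
if "purchase_amount" in df.columns:
    col = df["purchase_amount"]
    outliers = df[(col - col.mean()).abs() > 3 * col.std()]
    print("Rows more than 3 standard deviations from the mean:", len(outliers))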
In summary, data discovery is of utmost importance for data asset management as it enables organizations to gain insights, understand trends, and optimize their data assets. It empowers businesses to make informed decisions, improve data governance, and extract maximum value from their data. By leveraging data discovery techniques, organizations can enhance their data asset management processes and drive better business outcomes.
2. Data Classification
Data classification is the practice of organizing and categorizing data elements according to pre-defined criteria. It involves the systematic assignment of each entity to a specific class within a system. This process allows for the easy retrieval, sorting, and storage of data for future use. Data classification is used to protect data from unauthorized disclosure, alteration, or destruction. It can be done through various methods, such as user-based classification, where knowledgeable users manually judge and classify files, or through predefined criteria based on sensitivity, importance, or other shared characteristics of the data. Overall, data classification is foundational to data security and helps in defining and categorizing files and critical business information.
Data discovery is crucial for effective data asset management because it enables organizations to identify and understand the data they possess. By discovering and documenting data assets, businesses can gain insights into the types of data they have, where it is stored, and how it is being used. This knowledge is essential for making informed decisions about data governance, data protection, and data utilization strategies.
Data discovery helps organizations ensure compliance with regulations and industry standards by identifying sensitive or personally identifiable information (PII) within their data assets. It allows them to assess the risks associated with different data elements and implement appropriate security measures to protect sensitive data from unauthorized access or breaches. Additionally, data discovery aids in optimizing data storage and retrieval processes by identifying redundant or obsolete data, enabling organizations to streamline their data management practices and reduce storage costs.
Furthermore, data discovery plays a vital role in leveraging data assets to drive business value. By understanding the data they possess, organizations can identify opportunities for data-driven decision-making, uncover patterns and trends, and gain insights that can lead to improved operational efficiency, customer satisfaction, and innovation. Data discovery also facilitates collaboration and knowledge sharing within an organization, as it allows employees to locate and access relevant data quickly, enabling them to make informed decisions and drive better outcomes.
In summary, data discovery is essential for data asset management as it provides organizations with a comprehensive understanding of their data assets, helps ensure data security and compliance, optimizes data storage and retrieval processes, and enables data-driven decision-making and innovation.
3. Data Lineage
Data lineage is crucial for effective data asset management because it provides transparency and accountability in the data lifecycle. By understanding the lineage of data, organizations can ensure data integrity, accuracy, and reliability.
It allows them to track the origin of data, identify potential data quality issues, and assess the impact of changes or modifications on downstream processes and analytics. In the context of data asset management, data lineage helps in several ways. Firstly, it enables organizations to comply with regulatory requirements and data governance policies. By documenting the flow of data, organizations can demonstrate data provenance and ensure compliance with data privacy and security regulations. Secondly, data lineage aids in data discovery and understanding. It helps data stewards and analysts to locate and access relevant data assets, understand their structure and relationships, and assess their suitability for specific use cases. This promotes data reuse, reduces redundancy, and improves overall data efficiency. Furthermore, data lineage supports data troubleshooting and issue resolution. When errors or discrepancies occur, organizations can trace the data lineage to identify the root cause, understand the impact on downstream processes, and take corrective actions. This helps in maintaining data quality and reliability throughout the data lifecycle.
In summary, data lineage plays a vital role in data asset management by providing visibility, ensuring compliance, facilitating data discovery, and aiding in issue resolution. It enhances data governance practices, promotes data reuse, and helps organizations maximize the value of their data assets.
4. Data Quality Management
Data quality refers to the level of accuracy, completeness, consistency, reliability, and relevance of data within an organization. It encompasses the processes, methodologies, and practices implemented to ensure that data is trustworthy and suitable for decision-making, business intelligence, analytics, and other initiatives. Data quality management involves continuous efforts to improve and maintain the overall quality of data throughout the organization. It aims to establish a solid information foundation that enables a comprehensive understanding of the organization and its operations by ensuring the data is reliable and fit for purpose.
In the context of data asset management, data quality management plays a crucial role in ensuring the effectiveness and value of data assets. High-quality data is essential for accurate decision-making, reliable business intelligence, and meaningful analytics. Without proper data quality management, organizations risk making flawed decisions based on inaccurate or incomplete data, which can have detrimental effects on their operations and overall performance. Data quality management helps in identifying and addressing issues related to data accuracy, completeness, consistency, reliability, and relevance. By implementing robust data quality processes and methodologies, organizations can ensure that their data assets are reliable, up-to-date, and aligned with their business objectives. This, in turn, enables them to make informed decisions, gain valuable insights, and drive strategic initiatives effectively. Furthermore, data quality management supports the overall data governance framework within data asset management. It ensures that data is properly classified, stored, safeguarded, and retrieved, while also addressing data privacy and security concerns. By maintaining high data quality standards, organizations can enhance data integrity, reduce risks associated with data breaches or compliance violations, and build trust among stakeholders.
In summary, data quality management is of utmost importance in data asset management as it ensures that data assets are accurate, reliable, and fit for purpose. It enables organizations to leverage their data effectively, make informed decisions, and drive successful business outcomes. By investing in data quality management, organizations can maximize the value of their data assets and gain a competitive advantage in today’s data-driven business landscape.
What are the benefits of data asset management?
Data asset management (DAM) offers several benefits, including improved organization, collaboration, and accessibility of digital assets. It provides a centralized repository for files, making it easier to locate and retrieve specific assets. DAM systems also enable version control, ensuring that the most up-to-date files are available. Additionally, DAM enhances brand consistency, data security, and overall efficiency, leading to cost savings and improved productivity.
Is data asset management related to IT assets?
Yes, data asset management is related to IT asset (Information Technology Asset) management because both involve the management and organization of assets within an organization. Data asset management focuses on the management of digital assets, such as files and media, while IT asset management encompasses a broader range of assets, including hardware, software, and infrastructure. Both disciplines aim to optimize the utilization, protection, and value of assets within an organization.
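Tying the classification and data quality pillars described above to something executable, here is a small Python sketch that labels records containing likely personally identifiable information (PII) and runs a couple of basic quality checks. The regular expressions, field names, and labels are simplified assumptions; production classifiers use far richer rule sets and validation logic.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def classify(record: dict) -> str:
    """Label a record 'restricted' if any field looks like PII, else 'internal'."""
    for value in record.values():
        text = str(value)
        if EMAIL.search(text) or PHONE.search(text):
            return "restricted"
    return "internal"

def quality_issues(record: dict, required=("id", "name", "email")) -> list:
    """Return a list of basic data quality problems found in the record."""
    issues = [f"missing field: {f}" for f in required if not record.get(f)]
    if record.get("email") and not EMAIL.fullmatch(record["email"]):
        issues.append("malformed email")
    return issues

row = {"id": 42, "name": "Ada", "email": "ada@example.com", "note": "call 555-123-4567"}
print(classify(row), quality_issues(row))   # restricted []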
<urn:uuid:0e61f5e6-3c44-4f0b-9378-266bd1c501bd>
CC-MAIN-2024-38
https://www.itamg.com/data-asset-management/
2024-09-17T11:04:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00715.warc.gz
en
0.916163
2,580
2.75
3
The advancements in modern forensic capabilities have enabled law enforcement officials to quickly identify the details of who caused an incident and how it occurred. For instance, when the 1995 Oklahoma City bombing took place, investigators knew within a day that a truck containing ammonium nitrate fertilizer and diesel fuel was the cause behind the explosion that killed 168 people. Technology has changed a lot since 1995, and so has the frequency and type of attacks, requiring law enforcement to integrate technology into virtually every aspect of forensics, from rapid DNA sequencing to quick chemical tests of explosives and narcotics. Criminologists recognise that their ability to quickly develop leads has a direct impact on the ability to solve a criminal case. That is why more and more police departments have mobile labs, bringing the ability to solve the crime directly to the site. As our world has continued to change, the same techniques have begun to be used in conflict zones in Iraq and Afghanistan. A response team is sent in to evaluate the site and look for forensic evidence that may have been left behind. However, battlefield forensics brings a set of challenges with it; technicians need to be brought to the field with the support of a local forensic lab so that the delays of sending material back to the US for further analysis can be avoided. When a murder occurs in the USA, the local police can cordon off the area and have the luxury of conducting a very deliberate and methodical documentation of the crime scene. In combat scenarios, the time on the scene of an attack is limited. The event site is often active with ongoing firefights and mortar fire and is otherwise a hostile environment. These are not ideal conditions from which to collect evidence, but they are the conditions for which a new process and programme needed to be developed by the military. They had to adapt, both in the way they processed Improvised Explosive Devices (IEDs) and how they handled other ‘crime’ scenes; those resources that would typically be in safety in the background had to be moved to the ‘pointy end of the spear’ where the danger was. Unlike a domestic criminal investigation, the exploitation of evidence from the bombing of a café in Kabul, a sniper attack against coalition forces, or a car loaded with explosives detonating near a vehicle gate to a Forward Operating Base are uniquely different scenarios in that the investigation is taking place in an active combat zone. Despite this obvious variance, the inquiry will proceed in a similar methodology and approach to collecting and processing evidence. Forensic science is pristine; professionals with white lab coats need expensive machines with steady air conditioning, keeping sensitive equipment cool and in a dust‑free environment. This presents another challenge as the forensic environment in an operational theatre is often dirty, gritty and uncomfortable. In police work stateside, technicians can rely on the police to secure a site and keep it closed to the public for hours and even days. In combat zones, attacks may occur simultaneously while soldiers are taking enemy fire and trying to gather evidence. Joint Expeditionary Forensic Facilities Shortly after the removal of the Saddam Hussein regime, coalition forces began to encounter a sharp increase in terrorist and insurgent attacks. 
As IED attacks became the principal way to attack US and coalition forces, it became apparent that the process of collecting evidence and then shipping it back stateside was not logistically practical and was not responsive enough to commanders who needed information and intelligence to fight the terrorists and insurgents while identifying and targeting the bomb makers. The National Ground Intelligence Center (NGIC), in collaboration with the Naval Criminal Investigative Service (NCIS), deployed the first Joint Expeditionary Forensic Facilities (JEFF) lab to Camp Victory and Ramadi as a test pilot to see if standard law enforcement forensic capabilities could help coalition forces in identifying who was responsible for these attacks. The same law enforcement tools and capabilities used in identifying the Oklahoma bomber were now going to be put to the test in an active warzone. NGIC deployed a veteran police detective with forensic experience to determine the effectiveness of a JEFF lab and determine if additional labs should be deployed. The goal of the JEFF labs, aside from removing terrorists and insurgents from the population, was to teach the Iraqi and Afghani militaries on how to tackle the issue of terrorism through the rule of law. By partnering with Iraqi and Afghani military and law enforcement, this capability was being passed down so they could incorporate this into their own military and counter-terrorism units. When an IED is discovered and made safe, a forensic technician will examine the device and dust for fingerprints and DNA evidence. For example, if an IED builder assembles a cell phone to act as the remote detonator, he may handle the back of the battery or both sides of the battery, leaving a fingerprint behind. Likewise, when using tape to secure wires together, the adhesive in the tape traps fingerprints, sweat and/or hairs, which may contain DNA. All of this forensic evidence may be collected, even after a device has gone off, and in some cases this ‘latent’ evidence provides clues as to who constructed the bomb and potentially who placed the device. When this forensic and biometric information is collected, it is processed through the Department of Defense Automated Biometric Identification System (ABIS) database and an identity profile begins to develop. Creating profiles for more precise operations When this on-the-ground processing began, there were initially no immediate matches as the database was relatively small. However, as more evidence was collected from IEDs, sniper rifles, weapons, explosives, and car bombings, events began to be linked to one person or group of persons. This is similar to how a domestic detective would link one criminal to a string of burglaries or murders. Unlike civilian CSIs, the military separates the analytical functions such as signals intelligence, document analysis and cellular exploitation, to create not just a profile of the explosive device, but also of the individuals who may have been involved in the event. This builds the associations needed to identify who was involved and how the event happened. As this information is collected, the military analysts and intelligence officers can link with certainty an individual to a time and events, which can be used as evidence during criminal proceedings once the individuals have been apprehended. From an intelligence perspective, this is a huge advantage in removing anonymity from the battlefield. 
Knowing specifically who to target allows for more precise operations, similar to how a police force would leverage their special weapon teams to conduct a high-risk raid. Forensics and identity intelligence The use of biometrics is a powerful tool in countering anonymity, but it is only one part of the toolset needed to fully remove anonymity. Through the use of both physical and electronic forensics, the ability to hide behind anonymity is becoming harder to achieve or maintain for any length of time. A real world example of this capability occurred in the autumn of 2005, when US Forces conducted a raid on a suspected Al Qaeda member’s home. During the raid, a copy of a shipping manifest was recovered from one of the rooms and brought back for further analysis. The shipping manifest was for a number of higher end BMW and Mercedes vehicles that had been shipped from Europe to Iraq. Through cross‑referencing the Vehicle Identification Number (VIN) of the vehicles to past Vehicle Borne Improvised Explosive Devices (VBIEDs), it was discovered that two of the VINs matched vehicles from this manifest. Leveraging this information, a case was built against this individual for supplying the vehicles used to attack US and Iraqi forces. The shipping manifest also identified the shipping company used, so it could be further investigated for additional links and intelligence. In the early years of the Iraq war, Al Qaeda supporters were shipping vehicles from Europe to Iraq for free as a means of supporting ‘the cause’ by allowing local members to sell the cars on the market and use the money to fund further terrorist operations or to act as VBIEDs. An old beat‑up looking vehicle that is weighed down by explosives usually looks obvious to the trained observer. However, a newer looking high-end BMW or Mercedes would draw less attention at checkpoints and convoys driving along the motorway, making them the ideal vehicle to use for an attack. The forensics and document analysis of the shipping manifest led to the discovery of a previously unknown mechanism being used by Al Qaeda as a means of funding their operations in Iraq. It also allowed US and Iraqi forces to develop new tactics and techniques to identify vehicles possibly being used as VBIEDs by observing if a vehicle is riding low on its suspension when normally that would not be the case. Through further examination and research of the shipping manifest, the identity of the individual in Europe shipping the vehicles was discovered. The man’s information was added to the National Terrorist Watch List and the Biometric Enabled Watch List being leveraged by US Forces in Iraq. In late 2006, he had left Europe and was in Iraq for a short period, meeting with family. During a random checkpoint his biometrics were collected and then checked against the various watch lists. The individual’s name came up on a ‘Be On the Look Out’ or BOLO list, with the caveat to detain on site. After further questioning, the man’s unofficial travel route from Europe to Iraq and back was identified along with the various rest locations. Through additional questioning it was revealed how the vehicles were obtained in Europe and how the rest of the network was involved in both procuring the vehicles and moving them to Iraq. After several months of questioning, US Forces were able to identify a vast network of people, places and processes used to help fund the Al Qaeda operations in Iraq. 
This entire case and impact it had on operations in Iraq can be solely attributable to the forensic sweep made during a single raid on a suspected person’s home. What may have seemed as small and trivial to most was the deciding factor in the US’s ability to reduce the amount of funding and support moving into Iraq to sustain further terrorist activities. Overcoming anonymity is perhaps the greatest technological challenge of the 21st century. Terrorism is often a faceless and stateless enemy, and one that will require a unique approach to mitigate its effects. The use of forensic technologies on the battlefield is just one of many new strategies being added to a commander’s arsenal and will, in time, completely change the way wars will be fought in the future. The increased use of forensics and biometric technologies is certainly a key enabler to the effort to combat anonymity; however, the use and integration of identity intelligence is going to be the deciding factor between preventing a terrorist attack and dealing with its aftermath. The integration of technologies such as biometrics and forensics, coupled with the concept of identity intelligence will enable wars of the future to be fought more precisely and result in fewer innocent civilian casualties by targeting the right people. The following article will discuss in much greater detail the concept and use of identity intelligence and its integration into the idea of an ‘electronic border’. Joshua Steinhauer worked in the area of Human Intelligence during the Iraq troop surge from 2006 through 2007 and then as a contractor in Iraq from 2008 through 2010. He then went on to become the Identity Operations Manager at US European Command in Germany before returning to the US in 2014. He has an MSc in Major Programme Management from the University of Oxford and holds degrees in International Studies and Political Science from the University of Wisconsin and an MBA.
<urn:uuid:f345f6cf-372c-4976-9fca-deca98b9684a>
CC-MAIN-2024-38
https://platform.keesingtechnologies.com/us-biometric-and-identity-intelligence-programme-3/
2024-09-07T21:41:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00715.warc.gz
en
0.965108
2,338
2.9375
3
Where did smart technology come from?
You might be surprised to learn that ‘smart’ technology is actually an acronym – originally, SMART stood for Self-Monitoring, Analysis and Reporting Technology. Thanks to its ability to answer questions and talk back, this form of IoT technology now lives up to the intelligence its name suggests. Collaborating with different technologies in this way has led to a smarter, more productive way of working and organising our everyday lives.
Did you know two million smartphones are sold around the world each day? The rapid growth in our need for constant communications has meant that smart technology has had to develop alongside, in order to provide the most efficient, instantly available service to millions of users across the globe.
How can smart technology be used?
In today’s society, smart technology has a whole host of uses, despite us only scratching the surface of what this type of technology can do. One of the most exciting aspects of this new form of working in the business world is the change in working patterns it can bring about. Whether it’s remote working, improved customer communications or even building trust between employees and their managers, smart technology really will come to impact every aspect of our everyday working lives.
There’s a whole heap of ways this digital boom is improving the way we operate outside of the home too. Smart technology can even help you live a healthier lifestyle. Do you use a Fitbit to track your calorie intake, or check whether you’ve reached 10,000 steps on your iPhone? Devices like this are just the beginning when it comes to the ways in which data transfers can shake up the way we live, helping us make more informed decisions and live a healthier, greener, more cost-effective lifestyle.
As more and more aspects of our lives continue to be governed – and improved – by digital data, smart technology will develop further to connect individual data details with one another. If you’re interested in learning more about the developments in IoT devices already taking the digital world by storm, take a look at our blog pages here.
What’s next for smart technology?
Have you heard of the buzz surrounding smart cities? As smart technology becomes even more advanced, the use of data-handling digital assistants is sure to improve every aspect of our daily lives. As the world becomes even more aware of energy consumption and the effects of global warming, smart technology will be a key tool when it comes to making environmental changes in the digital world, saving businesses time, money and energy.
This revolution is also likely to have an even bigger impact on the wider digital and working worlds. All that data generated by smart technology needs to be put to use, as well as requiring a fast Internet connection to process it all. Because of this, we’re likely to see an even bigger boom in the smart technology world, with businesses placing more focus on getting the job done faster, less expensively and more thoughtfully.
Intrigued by the impact smart technology will have on your workplace? From your offices to further afield, our team of experts will be on hand to help you secure your place within the digital revolution. Take a look at our blog pages to find out more about making the most of digital transformation.
<urn:uuid:80bb43b4-7bf3-4057-b0c0-9a5b0e580a44>
CC-MAIN-2024-38
https://cisltd.com/blog/smart-technology/
2024-09-11T13:33:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00415.warc.gz
en
0.951822
671
2.828125
3
A couple of noticeable ones are: JSON is less verbose – XML needs more markup to represent the same data – and JSON is faster to parse, while XML parsing is slow and tedious. Many DOM manipulation libraries can also cause applications to use massive quantities of memory because of the verbosity and cost of parsing large XML files.
Almost all computer applications we use today, from desktop to mobile and smartphone, are based on one of two large message formats: JSON and XML. Today the most widely used format is JSON, but XML was only overtaken in the last five years.
And what’s XML? XML is a markup language intended for data storage; it is widely used to store and transport data. It is also case-sensitive. XML gives you the ability to define elements for markup and develop custom markup languages. In XML, the basic unit is known as an element. The file extension for XML is .xml.
Ok, what’s JSON? The official Internet media type for JSON is application/json. JSON filenames use the extension .json. XML is not intuitive in its layout, making it hard to represent in code. On the other hand, JSON is intuitive, making it easy to read and to map directly onto domain objects in whatever programming language is used.
Here are the crucial benefits/pros of using JSON:
- Supported by all browsers.
- Efficient to read and write.
- Simpler syntax.
- Simple to create and manipulate.
- Supported by most backend technologies.
- It helps you to transfer and serialize structured data over a network connection.
- You can use it with modern programming languages.
Relevant advantages/pros of using XML are the following:
- Permits portable documents across programs and systems. With the help of XML, you can share data easily between different platforms.
- XML keeps data separate from HTML.
- XML simplifies moving data between systems and platforms.
Below are further benefits of JSON over XML that may not be as noticeable to users:
The JSON data model structure fits the data: JSON’s data structure is a map, while XML’s is a tree. Although a map can be restrictive (just key/value pairs), it is easier to understand and predictable, and that’s what we want. Objects are represented in the same manner in the code. In many languages, especially dynamic ones, you can “slurp in the JSON,” and you immediately have your domain object. It is easy to go from JSON objects to code objects because they match. When you go from XML objects to objects in code, they do not match, and there is a lot of room for interpretation.
It’s limited to JSON, but that’s a good thing: JSON is constrained in terms of which objects can be modeled. Some may think XML is better since it can model more kinds of artifacts without restricting developers.
But even though JSON constrains developers, it makes the code more straightforward, more predictable, and easier to read – and that is a positive.
A web-based, free platform for data formatting and analysis
There are several free JSON tools that let you validate decoded data and remove the blank spaces inside it. A good online JSON viewer is a user-friendly program: it does not require you to install any heavyweight application on your computer, so you do not need to set aside storage space for it. As long as you have a high-speed internet connection, you can access its services from any browser. A JSON viewer also lets you format the data. The JSON file reader has a built-in formatting facility, so programmers do not have to worry about extra commas, blank spaces, and unnecessary brackets.
Building RESTful APIs, like Cloud Components, requires a stable, fast, and easy-to-use form of data sharing. All of our APIs use JSON, and for endpoints that do not support JSON, we convert our easy-to-understand JSON to XML and back, so you don’t have to deal with it. As RESTful API practices and simpler forms of data sharing become more prevalent, JSON will leave XML practices in the dust.
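To make the comparison concrete, the following Python sketch (standard library only) represents the same record in both formats and parses each one. The field names and values are invented purely for illustration.
import json
import xml.etree.ElementTree as ET

json_text = '{"user": {"id": 17, "name": "Aiko", "roles": ["admin", "editor"]}}'
xml_text = """<user id="17">
  <name>Aiko</name>
  <role>admin</role>
  <role>editor</role>
</user>"""

# JSON maps directly onto native dicts and lists.
user = json.loads(json_text)["user"]
print(user["name"], user["roles"])          # Aiko ['admin', 'editor']

# XML is a tree that has to be walked explicitly.
root = ET.fromstring(xml_text)
name = root.findtext("name")
roles = [r.text for r in root.findall("role")]
print(name, roles)                          # Aiko ['admin', 'editor']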
<urn:uuid:e316331e-a73c-4222-8323-e50a09b3bbee>
CC-MAIN-2024-38
https://www.kovair.com/blog/json-vs-xml-whats-the-difference/
2024-09-11T12:33:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00415.warc.gz
en
0.924863
1,042
2.734375
3
Many cities and countries around the world are having problems with distracted driving accidents. During 2016, Great Britain recorded 1,445 fatal crashes where one or more people were killed. One study completed in the town of St. Albans found that one in six drivers were engaged in some type of activity that took their focus away from driving. In the United States, approximately nine people are killed on average each day due to distracted drivers. Another 1,000 people are injured with millions of dollars in property damage. In Toronto, there were about 7,500 distracted driving accidents during 2016. Of those, there were eight fatalities and 2,642 injuries with thousands of dollars in property damage. In spite of public awareness campaigns, drivers continue to ignore these statistics. People tend to think that bad things only happen to others, so they go ahead and text or talk while driving. This has become such a problem that cities around the world have changed their distracted driving laws, increasing penalties and fines. Ontario is one those areas that have recently increased the penalties and fines for those caught. What is Considered Distracted Driving? Many people connect distracted driving with texting while driving. In fact, distracted driving covers any activity that causes the driver to divert their attention away from the roads and traffic. Studies have found that younger drivers are more likely to ignore laws and put everyone on the road at risk. Distracted driving includes: Most cities are seeing a big rise in the number of tickets issued for distracted driving each year. Lawmakers feel that these penalties and fines will eventually cause drivers to realize that times have changed. Drivers must give their full attention to the road and avoid losing focus. Lives are at stake. A Major Factor One of the major factors that make distracted driving so deadly is speed. A car traveling at 60mph moves at 88 feet per second. Using simple math, we can see that if you turn your attention away from the road for only three seconds, your car will have traveled 264 feet. A football field measures 360 feet so you will have traveled approximately three-fourths the length of a football field, just to put things in perspective. That’s a great distance when you consider that there are autos moving at similar speeds all around you and on both sides of the road. If each driver looked away from their driving for three seconds, there would be dozens of crashes with multiple fatalities. The truth is that each driver feels that they’re probably the only one on the road who is looking away for a few seconds. Surely with all the other drivers paying attention, there’s no need to worry – they’ll see you and take action in time to avoid an accident. This can be a deadly assumption. An average vehicle weighs around 3,500 pounds, making it a very dangerous weapon in the wrong hands. Though more experienced drivers are less likely to be guilty of distracted driving, the truth is that everyone occasionally feels that a text message or phone call is important enough to take the risk. This type of thinking has become an epidemic in Ontario, England, America and many other countries. For these reasons, legislatures and lawmakers have been forced to increase the penalties for these crimes. Changes in Ontario’s Laws New laws have recently taken effect in Ontario that increases the fines and penalties for failing to give your undivided attention to the road. 
Fines for a first offense can be as much as $490 with three demerit points on your license. If you decide to fight the ticket and lose, you could pay up to $1,000 in fines. If you’re a new driver with very little driving experience, you could lose your license for 30 days. With each conviction, novice drivers will have their license suspended for longer periods of time, up to 90 days. After that, a novice driver’s license could be revoked completely. When is it Safe to Use the Phone? Many drivers have had questions about these new laws. In Ontario, it’s not uncommon to see drivers pull off to the shoulder in order to take an important phone call. But is this legal? Or it is dangerous as well? Each city has its own unique laws when it comes to these types of issues. In Ontario, the law states that a driver may pull off the road to a safe location and take a phone call. In some cases, moving to the shoulder of the road is not deemed safe. There may be workers there making road repairs. The shoulder of the road often has gravel, which can fly up and crack a windshield. Since each case is a bit different, it’s up to the driver to ensure that they are pulling off and re-entering the highway in a safe fashion. If a traffic officer feels that a driver has not pulled off the road in a safe manner, the driver may receive a traffic citation for this. At the end of the day, the traffic officer’s job is to make sure that all drivers are protected. Everyone wants to get home safely to their family each day and it simply is not worth it to put anyone in danger over a text message or phone call. Tips for Avoiding Fines We all want to get to our destination safely so what is the standard these days for using a phone while driving? According to the new laws in Ontario, drivers can use hands-free devices (Bluetooth) to talk on the phone while driving. However, you cannot pause to dial a number or answer a call unless you can do so using voice commands. Most experts recommend turning your phone off while driving. That way, you will not be tempted to answer a call or text message. If you are leaving on a long trip, then email all your friends and tell them that you will be driving and will not be able to return calls until you stop for service or food. Safe Driving Tips Always be mindful if on the road with cyclists, emergency vehicles or buses. Pay special attention to the road, your driving and other drivers. Remember that although you may not be breaking the law, there are others on the road who are. There may be a teenager who wants to talk to his girlfriend while driving to school. These drivers present a special danger due to their limited experience behind the wheel. There may be drivers who are under the influence of drugs or alcohol. This condition has been shown to slow a driver’s reflexes. Sometimes drivers are worried about being late, so they try to speed, take shortcuts or make erratic movements. When you put all these different drivers on the road at the same time, it can be dangerous. It’s a good idea to bear these things in mind before getting behind the wheel.
<urn:uuid:98fd1bc7-3fc0-4ec9-9559-9b41b5b0b002>
CC-MAIN-2024-38
https://www.fuellednetworks.com/new-distracted-driving-laws-for-ontario/
2024-09-12T18:04:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00315.warc.gz
en
0.974503
1,373
3.1875
3
IEEE 802.1X is the standard for port-based network access control; it defines how the Extensible Authentication Protocol (EAP) is carried over a LAN, which is what makes Wi-Fi® authentication possible. With 802.1X, EAP information is passed in Ethernet frames (EAP over LAN, or EAPOL) rather than over the Point-to-Point Protocol (PPP). 802.1X consists of three main parts: the supplicant, the authenticator, and the authentication server. The initiation phase involves the supplicant (the client machine or device that wishes to connect to the wireless network), which sends encapsulated EAP data in EAPOL frames to the authenticator (the wireless access point, router, or switch). Essentially, messages are shuttled between the authentication server and the supplicant’s device via the authenticator, but the authenticator can't see what it's relaying because the message is encrypted. Several pieces of pertinent information are validated by the authentication server before it authorizes access to the network. In some situations, the supplicant can also refer to the software on the machine that is trying to authenticate to Wi-Fi® via RADIUS. [Diagram: how 802.1X functions – the supplicant, authenticator, and authentication server exchanging EAPOL and RADIUS messages.] 802.1X mitigates many of the problems that come with outdated WEP, such as long-lived shared passwords.
Wi-Fi® Authentication with Foxpass RADIUS
Foxpass enables per-user logins instead of a shared password, enhancing security and preventing unwanted access to your very important company data. An employee can log into their company’s Wi-Fi® with ease, while the age-old problem of generic, overused, shared passwords – which allow any random bystander to hack into your infrastructure and leave businesses vulnerable to major cyber attacks from which they often cannot recover – goes away. Another added bonus of using RADIUS Wi-Fi® authentication with Foxpass is that an IT manager can administer it from anywhere with an internet connection and access to the Foxpass Dashboard. Foxpass offers secure and reliable Wi-Fi® Authentication with WPA2 Enterprise, 802.1X and RADIUS at the click of a button!
Try us out free for 30 days
Wi-Fi is a trademark of Wi-Fi Alliance®
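On the supplicant side, the snippet below is a minimal sketch of what a Linux client's wpa_supplicant configuration for WPA2 Enterprise/802.1X might look like. The SSID, identity, password, and PEAP/MSCHAPv2 choice are assumptions for illustration; they are not Foxpass-specific settings, and the inner EAP method must match whatever the RADIUS server is configured to accept.
# /etc/wpa_supplicant/wpa_supplicant.conf (illustrative excerpt)
network={
    ssid="CorpWiFi"                 # hypothetical enterprise SSID
    key_mgmt=WPA-EAP                # WPA2 Enterprise (802.1X) instead of a shared PSK
    eap=PEAP                        # outer EAP method carried in EAPOL frames
    identity="alice@example.com"    # per-user login checked by the RADIUS server
    password="per-user-secret"
    phase2="auth=MSCHAPV2"          # inner authentication inside the PEAP tunnel
    ca_cert="/etc/ssl/certs/radius-ca.pem"  # validate the authentication server
}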
<urn:uuid:4bdb54dd-0270-4f41-a25b-bc443797f29d>
CC-MAIN-2024-38
https://www.foxpass.com/wifi-radius
2024-09-15T05:58:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00115.warc.gz
en
0.91178
500
2.8125
3
Did you know that nearly 560,000 new instances of malware are detected every day? As cybersecurity advances, threat actors develop malware with new tricks that exploit weaknesses in an IT environment. Once the malware finds a loophole, it spreads exponentially like a disease, corrupting files, exfiltrating data, redirecting traffic to other destinations, and performing other malicious activities. Malware can spread at a jaw-dropping rate. Hundreds and thousands of files, irrespective of whether they’re stored on the internet or computers, are infected on a daily basis. How safe are your machines? Your machines, irrespective of whether they utilize a Windows, Linux, or Mac OS, can be exposed to thousands of malware attacks each day. Malware is constantly on the lookout for vulnerabilities in your IT environment. If the malware detects a vulnerability in just one computer, it can leverage that weakness to move laterally in to your IT environment. What is lateral movement? Lateral movement is a technique used by malware to plunge deeper into your network. Once initial access is gained in one computer, the malware can jump to other computers in your network in search of sensitive data and high-value assets. A vulnerability in one computer can expose your entire IT environment to malware attacks. Shocking, isn’t it? But wait, here comes the twist. Some malware are backdoors… What is a backdoor malware attack? A backdoor attack is when malware leverages weak entry points, such as compromised passwords, poor authentication management, and inadequate endpoint security to gain initial access. Once it enters your network, it erases its trail stealthily. Later, when the malware reenters your network, it can use the same path without raising any alarm. How can you protect your machines from malware? Two types of malware discovered recently have caused mayhem in the Linux world: RedXOR is a backdoor malware targeting Linux systems, specifically Red Hat Enterprise Linux (RHEL) 6. Although RHEL 6 has been designated as in its end-of-life status, many Linux users are still using it. Mamba is a ransomware that the FBI and the US Department of Homeland Security have issued a high-level warning about. It has garnered the attention of security professionals worldwide. In the following video, we address: How malware operates inside a resource. Weak entry points that malware, like RedXOR and Mamba, use to gain access into Linux resources. Safeguarding your machines (Windows, macOS, and Linux) using a powerful tool that provides advanced authentication strategies, concrete endpoint security for both remote and local logons, and more. Learn more about the recent FBI and the US Department of Homeland Security high-level malware alert and watch our new video. Want to jump into the tool straight away? No problem. ADSelfService Plus is an integrated self-service password management and single sign-on solution with powerful features to protect your organization against malware. Learn more about its features.
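Compromised passwords and brute-force logins are among the weak entry points described above. As a generic illustration (not part of any product mentioned here), the short Python sketch below scans a Linux auth log and flags source IPs with an unusually high number of failed SSH logins; the log path and threshold are assumptions.
import re
from collections import Counter

LOG = "/var/log/auth.log"          # typical Debian/Ubuntu location; adjust per distro
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 20                     # arbitrary cut-off for this example

failures = Counter()
with open(LOG, errors="ignore") as f:
    for line in f:
        m = FAILED.search(line)
        if m:
            failures[m.group(1)] += 1

for ip, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"possible brute-force attempt: {ip} ({count} failed logins)")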
<urn:uuid:85de6202-2cbc-478e-bc5d-44a31e02c31c>
CC-MAIN-2024-38
https://blogs.manageengine.com/corporate/general/2021/09/03/beware-of-malware-attacks-little-known-facts-and-why-they-matter.html
2024-09-16T13:19:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00015.warc.gz
en
0.911916
615
2.71875
3
What Is Text Analytics? Text Analytics is the computer-driven process of analyzing large volumes of unstructured text data, utilizing Artificial Intelligence (AI) techniques to identify new and previously unknown insights, themes, or patterns in text. The Importance of Text Analytics The ability to use AI on unstructured text data, such as user reviews, client feedback, customer opinions, and more, can be very powerful and insightful for businesses. Especially in today’s “digital” environment, when such an avalanche of data is available. There is a wealth of information stored in emails, Tweets, call center agent notes, surveys responses, etc. Trying to analyze all this data by manual means is an impossible task! Text Analytics is the solution to unlocking the undiscovered insights from the types of unstructured data listed above. It uncovers patterns, trends, and themes in the data to reveal wants, needs, and thoughts—in other words, insights. The ability to track this information can provide earlier detection of potential business trouble because it tells us why dissatisfaction exists relating to your company, product, or service. Gain A Competitive Edge Using Text Analytics with competitors’ data gives businesses an opportunity to have an advantage. Unveil the hidden areas your business can hold over your competitors. For example, based on competitors’ customer review data regarding product pricing, offerings, location, etc., a company can see where they stand in the market. This allows businesses to find their strengths and weaknesses and adjust their strategies and business decisions accordingly. Save Time With Text Analytics Gaining insights from reviews online, survey responses, and social media is not a new idea. What makes Text Analytics so compelling is its efficiency. Before Text Analytics, people had to manually sift through unstructured data, text, and written communication themselves to discover trends (and some still do). This was both labor-intensive as well as time-consuming. Text Analytics uses AI to quickly analyze the data, providing faster results with minimal labor cost. For example, TD Bank, an American national bank and subsidiary of the Canadian multinational Toronto-Dominion Bank, used Text Analytics to create categories for employee benefits mentioned in the annual TD Bank employee survey. The 20,000 surveys had previously been reviewed manually by four groups (5,000 surveys each). Text Analytics In Your Industry Healthcare providers, the pharmaceutical industry, and biotechnology firms use Text Analytics to improve patient outcomes, increase drug discovery, and manage regulatory compliance. Unstructured data in pharmaceutical companies may include physician’s notes, pathology reports, operational notes, and electronic medical record data. The strict regulations of record-keeping in healthcare has led to giant databases filled with unused data. Text Analytics opens the door for the healthcare industry to utilize the stacked up unstructured data. Use cases of Text Analytics in this industry include discovering new drug compounds, matching participants to clinical trials, and marketing pharmaceuticals. Stock market traders must buy and sell stocks at lightning speed, so every second matters. In this environment, any small yet critical piece of information can cause the market to flip. 
Text Analytics provides easy tracking and detection of breaking news or stories with relevant information, creating an opportunity for traders to make quicker, better, more accurate decisions about their assets and potential buy-sell trading actions.
Retailers can now lose the fear of missing out on emerging customer trends. Text Analytics helps retailers analyze, understand, and act on data from various real-time customer feedback channels, such as third-party online reviews, blogs and forums, and comments on social media. Yankee Candle, a popular candle store in the United States, used Text Analytics to sift through tons of online text to discover what scents people associate with different seasons. By the end of their research, they came out with a very successful launch of new seasonal scents.
Aviana Success Story: Cisco Insight Into Hearts And Minds
Cisco, a multinational technology company, came to Aviana Global Technologies wanting to find a way to keep their top talent happy and avoid losing them to the competition. Text Analytics was the perfect solution to their problem. The Cisco team used text mining techniques to analyze more than 18,000 free-form responses from employees who completed the optional text sections of the company's employee questionnaire. Under the guidance of Aviana's lead consultant, the team built a sophisticated SPSS model which combined cluster analysis techniques with unstructured text sentiment analysis. The team was able to segment the data by function for a customized view of different business areas, such as Engineering, Sales, Cisco Services, Supply Chain, and so on. The sentiment analysis assessed how positive or negative employees felt about a range of cultural issues within the company.
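The Cisco/Aviana project above was built in SPSS, so its exact implementation isn't public. As a rough, hedged illustration of the same two-step idea (cluster free-text responses into themes, then score sentiment per theme), here is a minimal open-source sketch using scikit-learn; the sample responses and the tiny sentiment word lists are invented for the example.

```python
# Illustrative sketch only: the Cisco/Aviana project used an SPSS model, not this code.
# It approximates the same idea (cluster survey responses, then score sentiment per
# cluster) with open-source tools; the sample data below is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "I love the flexible hours but the benefits enrollment is confusing",
    "Career growth in engineering feels slow and unclear",
    "Great team culture, though on-call rotations are exhausting",
    "The new parental leave policy is fantastic",
]

# 1. Turn free-text responses into TF-IDF vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)

# 2. Group similar responses into themes with k-means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# 3. Crude lexicon-based sentiment per theme (a real project would use a trained model).
positive = {"love", "great", "fantastic"}
negative = {"confusing", "slow", "exhausting", "unclear"}

def sentiment(text):
    words = set(text.lower().split())
    return len(words & positive) - len(words & negative)

for cluster in sorted(set(labels)):
    texts = [r for r, label in zip(responses, labels) if label == cluster]
    avg = sum(sentiment(t) for t in texts) / len(texts)
    print(f"Theme {cluster}: {len(texts)} responses, avg sentiment {avg:+.1f}")
```

At a large scale, the same loop (vectorize, cluster, score) is what lets a team summarize tens of thousands of responses without reading each one by hand.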
<urn:uuid:3e2495ab-3147-481b-be92-ee16408af456>
CC-MAIN-2024-38
https://avianaglobal.com/text-analytics-mining-new-insights-from-text-with-ai/
2024-09-17T16:17:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00815.warc.gz
en
0.923113
972
2.515625
3
When you're just starting as a network engineer, an IP address conflict can be a difficult issue to resolve. In this post, we'll look at what an IP address conflict is and why it occurs. Then, we'll talk about the types of IP address conflicts and the ways to fix them. We'll also discuss how to detect and prevent IP address conflicts. Detection and prevention are best done with certain tools, so we'll learn about the top five tools to use.
What's an IP Address Conflict?
An IP address conflict occurs when two or more computers in the network are using the same IP address. We're mainly talking about private networks here, in which IP addresses are used internally. Now, an IPv4 address is a unique number made up of four parts (octets) separated by dots. It's used to identify a computer, laptop, or mobile device connected to a network. It should be unique, or the router won't understand where to send requests. IP addresses can be thought of in the same manner as real house addresses, which are used to identify a certain location. For example, if two houses have the same address, then a delivery person would get confused about where to deliver a parcel. The IP address is mapped to the MAC address, which is the unique hardware address given during manufacturing. In IPv4, the mapping is stored in a table called the Address Resolution Protocol (ARP) table. Whenever a request comes in, this table is consulted, and the request is sent to the correct device. However, in IPv6 there is no ARP table; instead, the Neighbor Discovery Protocol (NDP) is used to learn the MAC address of each device when a request comes in. But if the same IP address is used by multiple devices, it will corrupt the mappings kept by ARP in IPv4 and NDP in IPv6.
How Does an IP Address Conflict Occur?
The router is the main device used for communication within a network. The communication is done with packets being sent between devices. If any two devices have the same IP address, there will be an IP address conflict. There are several reasons why two devices might end up having the same IP address:
- Misconfiguration by the network administrator—While setting the static IP addresses of devices, the network administrator simply assigns the same address twice. This is a simple human error.
- Default addresses of internet of things (IoT) devices—Many IoT devices come with a built-in IP address. This should always be changed by the network administrator because if it's not, a second identical IoT device on the same network will cause an IP address conflict.
- Personal devices—Many IT companies allow people to bring their own devices, and these can cause IP address conflicts. In many cases, personal devices use the same IP address ranges, which can cause conflicts inside the organization.
- Virtual private networks (VPNs)—Users who work from home generally connect to an organization through a VPN. These VPNs often use the same default IP address ranges, and this can cause IP address conflicts.
- Dynamic Host Configuration Protocol (DHCP) issues—The DHCP server assigns IP addresses automatically. IP address conflicts could result from misconfiguration or bugs in the DHCP server.
- Network attacks—Hackers who try to access corporate organizations often use techniques that cause IP address conflicts.
Types of IP Address Conflicts
IP address conflicts can be broadly classified into three types:
- Conflicts due to static IP addresses—In many cases, we assign static IP addresses manually, and these can cause conflicts.
These conflicts can be due to misconfiguration by a network administrator or the default addresses of IoT devices. Other causes can be bring-your-own-device policies in an organization, which lead to devices arriving with static IP addresses already set. Users working from home and using VPNs can also cause static IP address conflicts.
- Conflicts due to DHCP—The DHCP server can be misconfigured and assign the same IP addresses to multiple devices.
- Conflicts due to hackers—When hackers get into a network, they often create disturbances. One way is to run a duplicate DHCP server, which causes IP address conflicts.
How to Fix IP Address Conflicts
When an IP address conflict occurs, a device will usually display an error message warning that another device on the network is using the same IP address; the wording differs slightly between Windows and Mac, but the meaning is the same. There are several ways to solve IP address conflicts. Let's look at some popular ones.
1. Restarting the Router
Because the router assigns dynamic addresses, the most common fix is to restart it. Most routers come with a built-in DHCP server, and they assign dynamic IP addresses to all connected devices. So when we restart them, they'll assign fresh IP addresses to all devices, and the IP address conflict issue may get resolved.
2. Re-Enabling the Network Adapter
If restarting the router doesn't solve the problem, then the device giving the error needs to be fixed manually. All computer devices connect to the network using a network adapter, either through a wired (Ethernet or USB) adapter or through Wi-Fi. In both cases, find the connection from the network icon in the Windows tray. Afterward, right-click on it, disable it, and then enable it again.
3. Updating the Driver
Windows has a network card driver, which is software for interfacing between the adapter and the network. Windows will occasionally release new patches. Sometimes, having an older version can cause IP address conflicts. Updating your older version to the new release can help fix this issue.
4. Renew Through Command Prompt
This is one of the top methods to resolve IP address conflicts. First, we need to be on the device having the problem and open the command prompt in administrator mode. Afterward, run "ipconfig /release" followed by "ipconfig /renew". These commands will first release the IP address and then request a new one from the router's DHCP server.
How to Detect and Prevent IP Address Conflicts
When we have an IP address conflict, we can fix it from the respective system using the methods described in the previous section. But this isn't feasible in the case of large and complex corporate networks. The network administrator can't go to each affected computer individually to fix IP address conflict issues. In large networks, the best thing to do is detect and prevent IP address conflicts from happening. This is generally done with the help of IP conflict software. These solutions work as a scanner and allow you to monitor IP addresses from a central location. They keep running in the background and keep a database of all IP addresses used. Additionally, newly allocated IP addresses are updated in the database. They also detect unusual activity involving IP addresses in the network, helping to guard organizations against hacking attacks when abnormal behavior is detected. Most of them also allow organizations to change the IP address of computers in their network remotely from a central location.
Five Best Tools for Detecting and Preventing IP Address Conflicts
The best way to detect IP address conflicts is by using IP conflict software. In this section, we'll look at the top five IP address conflict tools.
1. SolarWinds IP Address Manager (IPAM)
SolarWinds is a top name when it comes to networking software. SolarWinds® IP Address Manager (IPAM) is one of the top software solutions for managing IP addresses in larger and more complicated networks. It automates IP address monitoring. SolarWinds IPAM comes with DHCP and DNS software built in. Everything is managed from a professional dashboard, which is updated in real time. From the central dashboard, organizations can assign various subnets to different network administrators. Additionally, the tool supports VPN out of the box and integrates with all major services. This top IP address manager starts at $1,288.
2. Infoblox IP Address Management
Infoblox IP address management automatically manages IP addresses, DHCP, and DNS servers. It also has some unique features like detecting unmanaged devices in the company network. Because it predicts IP address capacity, Infoblox IP address management can also predict outages in the network, which can occur due to a shortage of IP addresses. Additionally, it has a nice UI allowing you to monitor your network from a central location. Infoblox IP address management doesn't have features as sophisticated as those of SolarWinds IPAM, but it can do the required tasks. It comes with a one-time payment of $10,000.
3. SolarWinds IP Control Bundle (Free Trial)
This is another IP address software offering from SolarWinds. The SolarWinds IP Control Bundle is a combination of the SolarWinds IPAM and SolarWinds User Device Tracker (UDT). The bundle offers all the features of SolarWinds IPAM, including monitoring and a professional, advanced dashboard. Additionally, the UDT portion of the bundle provides advanced user management functionality to the network administrator. This includes user identification, user device login histories, remote port administration, and more. The SolarWinds IP Control Bundle comes with a free 30-day trial.
4. ManageEngine OpUtils
ManageEngine OpUtils is simple IP address and switch port management software. The tool comes with IP scanning, which gives a list of IP addresses currently in use. It's also port management software, so it scans the ports of switches. ManageEngine OpUtils also automatically detects whether an intruder is in your network by identifying unauthorized IP addresses. It then blocks them from causing issues within the network. You can find more details about the product on their site. The software starts at $98 for a single user and $958 for unlimited users. There are also many add-on options, which they mention on their site.
5. GestióIP
GestióIP is a web-based and free-to-use IP address management software. The tool performs network scanning and provides the list of IP addresses in use. It's the simplest of these solutions and offers the following basic features:
- IPv4/IPv6 address management
- DNS and DHCP integrations
- Network discovery
- Subnet calculator
- VLAN management
- Host discovery
GestióIP is free software, although the project also provides commercial services.
In this post, we've learned about IP address conflicts and how they occur. Most IP conflicts occur due to misconfiguration of static IP addresses by the network administrator. However, some conflicts occur due to DHCP issues or network attacks. We've also learned how to resolve IP address conflicts manually on each machine, which isn't effective in large environments. The more effective way in a large and complex network is through IP address conflict software. Finally, we discussed the features of the five top software solutions for detecting IP address conflicts.
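As a rough, do-it-yourself illustration of the core check these commercial tools automate (flagging an IP address that is answered by more than one MAC), here is a minimal sketch using the Python scapy library. It assumes scapy is installed, you have administrator/root privileges, and you only scan a subnet you administer; the subnet value is just an example.

```python
# Minimal DIY sketch of the core check the commercial tools above automate:
# ARP-scan a subnet you administer and flag any IP answered by more than one MAC.
# Requires the scapy package and root/administrator privileges; run only on your own network.
from collections import defaultdict
from scapy.all import ARP, Ether, srp

def find_ip_conflicts(subnet="192.168.1.0/24", timeout=2):
    # Broadcast an ARP "who-has" request for every address in the subnet.
    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet),
        timeout=timeout,
        verbose=False,
    )
    macs_by_ip = defaultdict(set)
    for _, reply in answered:
        macs_by_ip[reply.psrc].add(reply.hwsrc)
    # An IP that replies from two different MACs is a conflict.
    return {ip: macs for ip, macs in macs_by_ip.items() if len(macs) > 1}

if __name__ == "__main__":
    for ip, macs in find_ip_conflicts().items():
        print(f"Conflict: {ip} is claimed by {', '.join(sorted(macs))}")
```

The dedicated tools add the pieces this sketch lacks: continuous background scanning, a historical database of assignments, alerting, and remote remediation.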
<urn:uuid:453c48af-3449-46cb-8727-601eaec92273>
CC-MAIN-2024-38
https://logicalread.com/ip-address-conflict/
2024-09-17T15:40:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00815.warc.gz
en
0.934807
2,229
3.625
4
Cybersecurity education: Tackling the compliance burden with a GRC program
The field of Governance, Risk, and Compliance (GRC) grew out of tools and practices designed to help organizations assess and mitigate the risks associated with their operations. We now associate GRC primarily with the tools and platforms used to run the various parts of the GRC program, but what are the essential elements needed to understand risk, develop governance, and ensure compliance?
Compliance is frequently the driver of a GRC implementation, so if you aren't sure where to get started, we recommend our webinar The Compliance Conundrum. As you'll see below, compliance is a natural starting point for building out a GRC program. It can be helpful for organizations that are growing quickly and need a leg up on assessing and mitigating risk.
Gimme a G, R, C!
GRC is a hybrid of several practice areas and tools — the term was popularized in a market research study highlighting the convergence of different tools and processes under an umbrella goal of managing risk. Therefore, it's helpful to break down each of the practice areas to understand how they differ and how they mutually support each other:
Governance is the guidelines or rules put in place to regulate (govern) how the organization operates. These may be aligned with customer requirements like protecting data to build trust, internal objectives like manufacturing sustainability, or even investors who want to see the company provide positive returns. Management and leadership are most often the sources of governance, and security policies are the most common form of cybersecurity governance.
Risk (often Risk Management) is the practice of identifying, assessing, and taking steps to mitigate or treat risks. A risk is any possible future event — both positive and negative — but risk management usually focuses on adverse risks that could disrupt operations, such as a system outage, data breach, or cyber crime.
Compliance means following rules set out by an external party, often a government or regulator. The Payment Card Industry Data Security Standard (PCI DSS) is one example; it's maintained by the PCI Security Standards Council to reduce payment card fraud. Compliance involves implementing controls aligned with the rules, operating them to achieve the desired outcome, and monitoring the status of the controls (both internally and often via external audits).
Risk management underpins all aspects of GRC. Writing policies and other documentation to govern organizational operations can prevent unwanted activities and ensure processes are executed in accordance with requirements; that governance is how management addresses the risks identified during a risk assessment. Identifying and assessing risks is fundamental to designing controls, such as choosing multi-factor authentication (MFA) and data encryption to protect systems and data from unauthorized access. Compliance activities fit into the picture in one of two ways: first, audits and oversight ensure that governance is being followed and controls are working as intended. Second, external compliance frameworks can allow an organization to jumpstart their GRC program by providing pre-bundled risk identification and control suggestions. For example, PCI DSS deals with common risks to payment card data and lays out 12 requirements to mitigate them.
Organizations that do not currently have a cybersecurity program can use compliance frameworks to get a head start, but with one important caveat: no one-size-fits-all approach will work. The compliance framework is a starting point and should be tailored to your organization’s unique needs. Grand unified theory, or how a GRC tool centralizes work There is a large amount of work to be done in a GRC program. For example, risk assessment and management are ongoing processes, audits and oversight should follow a continuous monitoring approach, and the design and implementation of security controls should be adaptable to support business agility. Centralizing those tasks into a unified toolset offers the advantage of reducing overhead and increasing shared resources across different teams. Some examples of GRC tool benefits include: Leveraging work across multiple compliance frameworks: Many businesses face the burden of complying with multiple sets of rules, such as ISO 27001 and SOC 2. A GRC tool can help identify the compliance objectives you’re trying to meet, and then let you map controls that meet multiple objectives. For example, access controls like a password policy are fundamental in most compliance frameworks, and a GRC tool can let you see how that one policy checks boxes across ISO 27001, PCI DSS, and SOC 2. Continuous oversight: Integrations are a common feature in GRC tools. They provide visibility into the status of technical controls; common examples include integrations with cloud hosting environments to monitor server configurations or pulling in data from security tools like endpoint detection and response (EDR) to identify malware incidents. Many GRC tools include task assignment and tracking, allowing you to track the status of routine processes such as manual access control reviews or physical facility security walkthroughs. Dashboards and orchestration: Management loves metrics, and it’s doubly important for many GRC activities because they are usually a cost center rather than a revenue generator. Showing the effectiveness of your risk management efforts and communicating the value they deliver to the business is critical, so the dashboards and data in a GRC tool are invaluable. These dashboards can also be useful to practitioners responsible for scheduling and executing the tasks associated with a GRC program, showing the status of activities like control implementation or routine tasks like annual policy refreshes. Not all organizations will need a dedicated tool to manage governance, risk, and compliance. Small businesses, such as those with <250 employees, or businesses in non-regulated industries (i.e., without a significant regulatory compliance burden) can often get by with some basic security policies in a shared drive and a single document for risk assessment and control documentation. Managing risk is all about cost-benefit analysis, so if your organization doesn’t need the capabilities of a GRC tool, then the cost is likely not justifiable. However, keep in mind that your GRC program will also need to scale as your business grows, so it’s crucial to monitor the business and identify the point when DIY tools become more of a headache than cost savings. Partner spotlight: get a handle on compliance with Reciprocity While audits typically have a negative connotation, Coalition policyholders can now remove the guesswork regarding governance, risk, and compliance (GRC). 
Reciprocity simplifies audit and compliance management with complete views of information environments, easy access to program evaluation, and continual compliance monitoring. With Reciprocity’s ZenGRC, policyholders can easily leverage their audit, third-party risk solutions, and policy management applications. Remaining compliant with industry standards and global requirements translates to lower risk and dollars saved. Reciprocity’s mission is to turn corporate compliance from a cost center into a valuable strategic asset. ZenGRC provides one platform for all your organization’s audit, risk, third-party risk solutions, and governance and policy management applications. Coalition Policyholders get exclusive savings from Reciprocity and can sign up for their services in Coalition Control, our active risk management platform with free, integrated attack surface monitoring. Additionally, Coalition’s cybersecurity guide outlines the basic tenets of a cybersecurity program — a critical factor in reducing your organization’s cyber risk.
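Returning to the control-mapping idea described earlier (one implemented control satisfying requirements in several frameworks), here is a minimal, hedged sketch of how that mapping might be represented in code. The control names and requirement references are placeholders for illustration, not authoritative citations of ISO 27001, PCI DSS, or SOC 2, and this is not how any particular GRC product stores its data.

```python
# Illustrative only: a tiny control-to-framework mapping, with placeholder
# requirement references rather than authoritative citations of the standards.
CONTROL_MAP = {
    "password-policy": {
        "ISO 27001": ["example ref A"],
        "PCI DSS":   ["example ref B"],
        "SOC 2":     ["example ref C"],
    },
    "mfa-for-admins": {
        "ISO 27001": ["example ref A"],
        "PCI DSS":   ["example ref B"],
    },
}

def coverage_by_framework(control_map):
    """Count how many implemented controls 'check a box' in each framework."""
    counts = {}
    for mappings in control_map.values():
        for framework in mappings:
            counts[framework] = counts.get(framework, 0) + 1
    return counts

print(coverage_by_framework(CONTROL_MAP))
# expected output: {'ISO 27001': 2, 'PCI DSS': 2, 'SOC 2': 1}
# i.e., each control implemented once but counted toward several frameworks
```

The value a GRC tool adds on top of a structure like this is keeping the mapping current, attaching evidence to each control, and reporting status continuously rather than once per audit cycle.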
<urn:uuid:fec468cc-b5e6-48d9-80bc-f77644078d83>
CC-MAIN-2024-38
https://www.coalitioninc.com/en-ca/blog/cybersecurity-education-tackling-the-compliance-burden-with-a-grc-program
2024-09-17T14:37:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00815.warc.gz
en
0.939117
1,516
2.875
3
Why is Data Privacy Important for an Organization?
Before the COVID-19 coronavirus took over the news, data privacy was one of the most critical topics of concern. Like any other societal trend, data privacy seems to work like a pendulum. It swings to and fro, hitting an apex and then swinging in the opposite direction with increasing speed. Data privacy is part of a bigger picture: it is about building trust and loyalty among users. Massive data breaches like "Collection #1" exposed the data records of around 773 million users to the world of cybercriminals. Various other data breaches have made headlines for exposing the data records of organizations categorized under the Fortune 100. In 2019, the French National Data Protection Commission imposed a fine of $57 million on a renowned US-based multinational technology company for privacy violations under the GDPR!
Today, technology has evolved to such an extent that it takes social media mere seconds to spread news of a data privacy violation to every corner of the world. And the news travels so fast and far that it quickly tarnishes the reputation of a company for its failure to secure users' data. This is why it is essential for organizations to protect their integrity and strengthen their customers' trust by keeping data privacy a top priority. Many users are still oblivious to the fact that data privacy is a fundamental right for every one of us and that even a minor violation of that right can lead to a massive data breach. You never quite grasp what a data breach can do to an organization until you see news headlines naming acclaimed organizations and the heavy fines imposed on them for disregarding data privacy laws.
The Current Key Challenges in Data Privacy
According to BroadbandNow, in 1995, only 1% of the world had internet access. Today, that number has reached 57%, with over 4 billion internet users worldwide. Now imagine the amount of data these 4 billion users have on the online platform! Moreover, what doesn't help is the fact that every 2 seconds there is a new victim of identity theft. A data breach survey report found that up to 33% more data breaches were recorded in 2018, with a total of 7.9 billion data records exposed. Less than 10 months later, the research firm labeled 2019 the "worst year on record" for data breach incidents. With companies experiencing crippling security breaches, the wave of compromised data is also on the rise.
Here are some recent statistics related to data breaches:
- About 4.1 billion records were exposed in data breaches in the first half of 2019.
- $3.92 million was the average cost of a data breach as of 2019.
- The healthcare industry had the highest cost per data breach, at $429 per record.
- 34% of data breaches involve internal actors.
- The average cost of a data breach is $6.3 million in companies with over 50k compromised records.
- In 2020, the average cost of a data breach is expected to exceed $150 million.
- 70 million data records were stolen or leaked in 2018 because of a poorly configured AWS S3 Cloud storage bucket.
- Yahoo holds the largest record of data breaches of all time, with 3 billion compromised accounts.
- In 2019, the average lifecycle of a data breach caused by a malicious criminal attack was 314 days.
- As per a survey by a security research firm, 24% of data breaches are caused by human error.
Apart from these statistics, here are the major key challenges organizations face related to data privacy today:
- Small businesses are increasingly at high risk of data breaches.
- Third-party breaches have become common in the cyber world.
- An ordinary user has a 27.9% chance of experiencing a data breach that could affect at least 10,000 records!
- The financial sector accounts for about 14% of all data breaches.
- As per expert security research analysis, almost 25% of enterprises were expected to succumb to data breaches through IoT devices in 2020.
How to Conquer the Risks in Data Privacy?
This modern, interconnected world can leave organizations vulnerable to the growing threat of cybercrime. With new cyber threats emerging every day, the risk of leaving data unsecured online is more dangerous than ever for every organization. Many large companies have fallen victim to such cybercrime schemes and have lost a good amount of revenue to lawsuits while recovering their losses. Thus, it is highly crucial to set permissions on files and dispose of stale data.
For the protection and security of data, more severe consequences are being enforced as strict legislation is passed in every region across the globe. Companies should take note of enacting and implementing data privacy rules and regulations covering users and their private information. It is advisable to implement better controls over an organization's access to, and right to store, the data of its users. Maintaining proper data classification and governance is highly beneficial for staying compliant with data privacy laws and standards like GDPR, HIPAA, ISO 27001, PCI DSS, and more. Besides this, the government of India has also proposed a groundbreaking data privacy law akin to Europe's GDPR. Under the proposed law, technology companies in India will be required to get consent from citizens prior to collecting and processing their personal information. Such a law is essential because any personal data that is sensitive to someone could be maliciously used by anyone with vicious intent.
The personal information could fall into any of the following data privacy categories:
- Online Privacy: Personal data of the user that is handed over during online interaction.
- Financial Privacy: Financial information or records shared online or offline can be used for fraudulent practices.
- Medical Privacy: Confidential details of medical treatment or history and other privileged information should not be disclosed to a third party.
- Residential Privacy Records: Sharing of addresses online can lead to the potential risk of unauthorized access.
In order to protect such data from being hacked or misused, it is important to follow the best practices possible. Here are some guidelines to help ensure data privacy in an organization:
- Set a formal procedure in place to handle access requests to personal data.
- Keep data collection and storage to a minimum (see the short sketch after this list).
- Do not hand over your credentials to any third-party website.
- Implement strong data security policies and laws for privacy purposes.
- Leave no space for vulnerabilities in the network and IT infrastructure.
- Educate employees on security and privacy issues to create a cyber-secure working environment.
- Enforce strong password usage to stop hackers from getting unauthorized access to your systems.
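As a hedged illustration of the data-minimization guideline above (not something the article itself prescribes), the sketch below pseudonymizes a user record before storage: the email becomes a salted hash and fields that aren't needed are dropped. A production system would prefer a keyed HMAC with a properly managed secret; the field names and environment variable here are hypothetical.

```python
# Illustrative sketch: apply "minimal data collection and storage" by pseudonymizing
# identifiers and dropping unneeded fields before records reach long-term storage.
import hashlib
import os

SALT = os.environ.get("PII_SALT", "change-me")  # hypothetical env var; keep it secret

def pseudonymize(record: dict, keep_fields=("country", "signup_month")) -> dict:
    """Replace the email with a salted hash and keep only the fields we truly need."""
    token = hashlib.sha256((SALT + record["email"].lower()).encode()).hexdigest()[:16]
    slim = {field: record[field] for field in keep_fields if field in record}
    slim["user_token"] = token  # stable pseudonym, not reversible without the salt
    return slim

raw = {"email": "Alice@Example.com", "country": "IN", "signup_month": "2020-03",
       "home_address": "221B Baker Street"}  # the address is never stored
print(pseudonymize(raw))
```

The point of the sketch is the design choice, not the hashing details: the less raw personal data an organization stores, the less a breach can expose.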
For any organization, data is a crucial corporate asset that needs to be safeguarded. By following the guidelines above, any organization can build strong data security and mitigate the loss of information that leads directly to financial losses. What are your opinions on data privacy in the current scenario? Let us know by commenting below! Thank you for giving your valuable time to read this blog. Hope you had a good read!
<urn:uuid:676e0926-cd36-4d30-870b-ef740e341547>
CC-MAIN-2024-38
https://kratikal.com/blog/the-ongoing-impact-of-data-privacy-on-an-organization/
2024-09-18T23:00:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00715.warc.gz
en
0.943223
1,494
2.609375
3
With quantum computing emerging as a possible outlet for uncharted business innovation, the new 'Quantum Tortoise and the Classical Hare' framework from MIT holds that quantum computers must meet two conditions to yield an improvement over classical machines.
Firstly, says the study, quantum computers must be powerful enough to solve a given problem (feasibility); their currently small size only allows for solutions to experiment-oriented, so-called 'toy problems', such as factoring numbers so small that using a classical computer makes more sense. Secondly, such machines must gain enough of an advantage from their algorithm that problems can be solved in less time than it would take traditional computers.
Weighing these conditions enables businesses looking to invest in quantum going forward to determine whether such spending and usage would drive value in the long run. According to the researchers, the new framework can be used across multiple sectors, including automotive, chemistry, and finance.
Is quantum computing the next frontier for machine learning experts? — Here, we consider the role that machine learning tech talent could play in the quantum computing space, as R&D gains traction.
"Many business leaders are looking to quantum computing as the promising successor to classical computing, but research shows — and leaders in quantum computing agree — it will continue to underperform classical computing in many areas," said co-author Neil Thompson, research scientist at MIT Sloan and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). "As a result, to understand where quantum computing will perform better first requires understanding why it can be.
"We need to consider the speed of the computer versus the route. Thinking of it like a race, in getting from point A to point B, the algorithm is the route.
"If the race is short, it may not be worth investing in better route planning. For it to be worth it, it has to be a longer race."
As well as being smaller across the board, quantum computers need to be kept near absolute zero (0 Kelvin, or -273.15°C) in order to function, and the hardware tends to run slower, completing fewer operations per second. However, companies across many industries have been exploring how Shor's algorithm — one of the few widely known quantum-specific algorithms — can fare versus classical computers, with possible near-term advantage currently deemed limited to large-scale challenges.
This work is part of a larger collaboration between Accenture and the MIT Initiative on the Digital Economy, and is funded by Accenture.
Q-CTRL achieves risk-based certification in quantum market first — Sydney-headquartered Q-CTRL is the first independent quantum software vendor to achieve ISO 27001 certification.
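To make the "longer race" intuition concrete, here is a toy calculation; the operation rates and complexity classes are invented for illustration and are not the parameters of the MIT model. The idea is simply that a quantum routine with better asymptotic scaling but a much slower effective clock only overtakes the classical approach beyond a certain problem size.

```python
# Toy illustration of the "tortoise and hare" intuition (all numbers are made up,
# not taken from the MIT framework): better scaling only pays off once the race is long.
import math

CLASSICAL_OPS_PER_SEC = 1e9   # hypothetical classical operation rate
QUANTUM_OPS_PER_SEC = 1e5     # hypothetical (much slower) quantum operation rate

def classical_time(n):
    return (n ** 2) / CLASSICAL_OPS_PER_SEC          # e.g. an O(n^2) classical route

def quantum_time(n):
    return (n * math.log2(n)) / QUANTUM_OPS_PER_SEC  # e.g. an O(n log n) quantum route

# Find the crossover size where the quantum "tortoise" overtakes the classical "hare".
n = 2
while quantum_time(n) >= classical_time(n):
    n *= 2
print(f"Under these assumptions, quantum advantage only appears around n = {n:,}")
```

Under these made-up rates the crossover lands in the hundreds of thousands, which mirrors the framework's point: for short races, the faster classical hare wins regardless of the quantum algorithm's better route.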
<urn:uuid:d4cfbcb3-7761-4df7-ac97-2e25682b5c58>
CC-MAIN-2024-38
https://www.information-age.com/new-mit-framework-helps-firms-determine-quantum-value-123508011/
2024-09-18T22:27:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00715.warc.gz
en
0.950457
573
2.8125
3
Have you ever heard of the acronym "DDoS" and wondered what it meant? DDoS stands for Distributed Denial of Service, a malicious cyber attack that floods targeted computers or networks with overwhelming amounts of data. This blog post will discuss the basics behind DDoS attacks, how they are conducted, and what you can do to protect yourself against them. Read on to learn more about this growing problem and how best to defend against it.
What are DDoS attacks?
Distributed Denial of Service (DDoS) attacks are a type of cyber attack in which the attacker seeks to render a computer or network resource unavailable to its intended users by overwhelming it with traffic from multiple sources. This can be done by flooding the target with requests for data or by sending it more than it can handle. Either way, the result is the same: the legitimate users of the resource cannot access it because the system is bogged down dealing with illegitimate requests. DDoS attacks are often launched as a form of protest or retaliation, but they can also be used for other purposes, such as financial gain or to cause chaos. Whatever the motive, DDoS attacks are serious business and can significantly impact individuals and organizations.
If you're wondering what all this has to do with you, consider this: DDoS attacks are on the rise, and anyone who uses the Internet is at risk. In fact, according to a report from Verisign, there was a 36% increase in DDoS attacks in 2017 compared to 2016. And things show no signs of slowing down; another report from Kaspersky Labs found that DDoS attacks increased by 50% in just the first quarter of 2018.
How do DDoS attacks work?
A DDoS attack is a type of cyber attack in which a group of hackers overloads a server with requests, causing the server to crash. This can be done by flooding the server with traffic from multiple computers or by sending many malicious requests. Either way, the goal is to take the server offline so it can no longer provide service.
What are the types of DDoS attacks?
There are four common types of DDoS attacks:
- HTTP Flooding: This attack overloads the target server with HTTP requests, resulting in the server being unable to process legitimate requests.
- DNS Amplification: In this attack, the attacker sends small DNS queries to open DNS resolvers with the victim's address spoofed as the source IP. The resolvers return much larger responses to the victim, overwhelming it with amplified traffic.
- SYN Flooding: This type of attack exploits the way that TCP connections are established. The attacker sends many SYN requests to the target server, causing the server to become overloaded and unable to process legitimate requests.
- UDP Flooding: This attack involves flooding the target server with UDP packets. This can cause the server to become overloaded and unable to process legitimate requests.
How to Prevent DDoS Attacks?
One of the best ways to prevent DDoS attacks is to keep your software and operating systems up to date. Attackers often exploit known vulnerabilities in older software versions to launch their attacks. By making sure your software is up to date, you can close these potential attack vectors.
Another way to prevent DDoS attacks is to use a web application firewall (WAF). A WAF can help protect your website or application by filtering traffic and blocking malicious requests. This can be an effective way to stop DDoS attacks before they reach your system.
Finally, consider using a cloud-based DDoS protection service.
These services can detect and block DDoS attacks before they reach your network. This can be a more expensive option, but it may be worth it if you are particularly worried about being targeted by a DDoS attack.
How can you protect against DDoS attacks?
There are several ways you can protect against DDoS attacks:
- Use a Web Application Firewall (WAF): A WAF is a security system that sits between your website and the Internet. It filters traffic to your website, blocking malicious requests and protecting it from DDoS attacks.
- Use a Content Delivery Network (CDN): A CDN stores copies of your website on multiple servers worldwide. When someone tries to access your site, they are automatically redirected to the closest server, which helps distribute the traffic and protect the site from DDoS attacks.
- Implement Rate Limiting: Rate limiting is a security measure limiting the number of requests that can be made to your website in a given period. This can help to prevent DDoS attacks by making it more difficult for attackers to overload your server with requests (a minimal sketch of this idea appears at the end of this post).
- Use Security Measures: There are several security measures you can take to protect your website from DDoS attacks, including using strong passwords, encrypting sensitive data, and installing security software.
How do you respond to a DDoS attack?
There are a few different ways to respond to a DDoS attack, but the most important thing is to stay calm and not panic. The first thing you should do is identify the attack's source. This can be difficult, but if you can narrow it down, it will be easier to stop. Once you know where the attack is coming from, you can start working on blocking the offending IP addresses. There are a few different ways to do this, but the most effective is usually to contact your hosting provider or use a third-party service like Cloudflare.
If the attack is more serious, or if it's going on for an extended period, you may need to contact law enforcement. This can be a difficult decision, as you don't want to overreact, but if the attack is genuinely malicious, it's essential to get help from those who can track down and prosecute the offenders.
Who is behind most DDoS attacks?
There is no definitive answer to this question, as many different motivations for launching a DDoS attack exist. However, some of the most common reasons include the following:
- To take down a competitor's website or online service
- To disrupt an organization's operations
- To protest or call attention to a particular issue or cause
- To extort money from an organization
In many cases, the individuals behind a DDoS attack will remain anonymous. However, there have been some notable instances where attackers have been identified and apprehended, such as the case of Lizard Squad, a group of hackers responsible for launching several high-profile DDoS attacks in 2014.
What are the consequences of a DDoS attack?
There are many consequences of a DDoS attack. The most common is that the site or service being attacked is unavailable to users. This can result in lost revenue, customers, and opportunities. In some cases, DDoS attacks can also lead to physical damage to the equipment hosting the site or service.
What to Do If You Are a Victim of a DDoS Attack?
If you are a victim of a DDoS attack, there are a few things you can do to try and mitigate it:
- Try to identify the source of the attack. If you can identify the attacker, you can sometimes work with them to stop the attack.
- Contact your ISP or web host to inform them that you are under attack. They can help mitigate the attack or provide you with resources to help fight it off.
- If all else fails, you can try to absorb the attack by increasing your bandwidth or using a DDoS protection service.
In summary, a DDoS attack is an attempt to take down a website or server by flooding it with traffic from multiple sources. It is one of the most severe threats facing online businesses and other organizations today, as it can cause significant disruption and lead to financial losses. Fortunately, some measures can be taken to protect against such attacks, including using firewalls and monitoring for suspicious activity. While these measures may not completely prevent a DDoS attack, they will help reduce the risk significantly and make it harder for attackers to succeed.
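Following up on the rate-limiting measure mentioned above, here is a minimal, in-process token-bucket sketch in Python. It is only an illustration: real deployments usually enforce rate limits at the edge (in a WAF, CDN, or reverse proxy), and the client IP, rate, and burst values below are made up.

```python
# Minimal in-process sketch of the rate-limiting idea mentioned above (token bucket).
# Real deployments usually enforce this at the edge (WAF, CDN, or reverse proxy);
# the client IP and limits here are made up for illustration.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # request should be rejected or queued

buckets = {}

def handle_request(client_ip: str) -> str:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate_per_sec=5, burst=10))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

# A burst of 30 rapid requests from one client: roughly the first 10 pass, the rest are limited.
print(sum(handle_request("203.0.113.7") == "200 OK" for _ in range(30)))
```

A per-IP limiter like this blunts simple floods from a small number of sources; against a truly distributed attack it must be combined with the upstream protections described above.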
<urn:uuid:b4659aa6-241b-4b57-b0a1-926a6ee2832f>
CC-MAIN-2024-38
https://cybersguards.com/what-is-ddos-mean/
2024-09-20T04:34:46Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00615.warc.gz
en
0.951967
1,628
3.375
3
Market Trends of Indoor Farming Industry
This section covers the major market trends shaping the Indoor Farming Market according to our research experts:
Effect of Climate Conditions on Production
According to the European Commission, the amount of land used for agricultural purposes in the EU may fall from 176 million hectares in 2017 to 172 million hectares in 2030, with a corresponding decline in EU arable land from 106.5 million hectares in 2017 to 104 million hectares in 2030. According to World Bank statistics, the share of arable land in South Asia declined from 43.2% of total land in 2017 to 43% in 2020. Thus, a reduction in arable land and an increase in pollution in the developing countries of Southern Asia are expected to increase the demand for alternative cultivation, including indoor farming.
With the per capita availability of farmland continuously declining, increasing productivity is the way out. Thus, there is a need for high-yielding crops that can solve the problem of farmland scarcity without compromising production volumes, and this can be attained through indoor farming. The decline in agricultural land has been mainly due to diversion for non-agricultural purposes, such as urbanization, roads, industries, and housing, as well as to soil erosion and pollution in various developing countries. In China, there are approximately 334 million acres of arable land, of which around 37 million acres are non-cultivable, and the growing population poses a major threat. The alternative to creating more arable land is to enhance the yield and productivity of land already under cultivation. The relevant technologies include high-yielding varieties, fertilizer and pesticide management, mechanization, irrigation management, and new farming techniques such as indoor farming. As cultivable land decreases globally, indoor farming may help increase production by using hydroponics and artificial lighting to provide plants with the nutrients and light they would otherwise only receive when grown outdoors. Thus, the demand for indoor farming equipment may increase during the forecast period.
North America Dominates the Market
North America accounted for the highest global indoor farming market share in 2021. With the help of high-efficiency LED lights and enhanced indoor management practices, US growers have adopted large-scale indoor farming. Such practices are expected to reduce lighting energy costs by about 50%, thus reducing the carbon footprint of controlled environment agriculture. As per the US Department of Agriculture (USDA), the average yield of lettuce doubled when it was cultivated through vertical farming rather than conventional farming. Currently, the indoor farming industry in the US is dominated by greenhouse crop production. Growing urban populations in cities such as New York, Chicago, and Milwaukee have propelled the environment for indoor farming, with activities such as revamping vacant warehouses, derelict buildings, and high-rises, which has, in turn, led to an increase in the production of fresh, locally grown food. The demand for greenhouse tomatoes in the United States is driving the market demand for hydroponic operations. Indoor farming is one of the fastest-growing industries in the United States. According to the UN Food and Agriculture Organization, drylands in Mexico occupy approximately 101.5 million hectares of land, thereby boosting the need for indoor farming practices.
Canada has also seen a positive growth trend, contributing significantly to world exports of hydroponically grown tomatoes. The region's growth in hydroponic and aeroponic systems is driving the overall indoor farming market, mainly due to the increasing focus on adopting innovative and efficient technologies to improve yields. A wide variety of crops, such as leafy vegetables, herbs, fruits, microgreens, and flowers, are grown through indoor farming in the countries of North America. Indoor vertical farming systems provide organic food, which, along with the increasing demand for pesticide- and herbicide-free food among North American consumers, has become the major driving force for indoor vertical farming.
<urn:uuid:49d0186b-f85f-4880-9f50-f90de03dc728>
CC-MAIN-2024-38
https://www.mordorintelligence.com/industry-reports/indoor-farming-market/market-trends
2024-09-08T00:43:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00815.warc.gz
en
0.945533
790
2.71875
3
What is Data Explorer?
Data Explorer is a brand-new suite of tools found within DNS Analytics that gives you a thorough and detailed breakdown of the queries and records in your domains. To get started with Data Explorer, log into DNS Analytics and then click the pie chart button next to the domain of your choice. You are now at the Data Explorer dashboard.
Here you can see your domain ID, the number of configured records, and which record types are used; the graph to the right shows how many queries each record type is getting within the parameters of the time series (scaled). Both the query count and total percentage are visible in the Scaled view. If you would like to view the chart with the actual proportions as opposed to scaled, click the Actual tab above the chart.
This chart can be used in conjunction with any and all of the categories just below it. For example, if you'd like a visual breakdown of how much traffic is coming to each PoP, click the Location tab. Flipping between these tabs also changes the spreadsheet below them, which provides a more in-depth breakdown of the data in the chart above. There is also a filter function for each category in the spreadsheet to help you narrow in on the data you need. To use the filter function, click the filter button below the category.
Time Series Explorer
The Time Series Explorer gives you a graphical representation of queries over time based on several parameters: Location, Record Type, Record Name, IP Version, Protocol, Geo-filter, and Geo-Proximity. These parameters can be grouped, combined, and filtered in a number of ways to give you a granular and customized view of your domain's queries.
Group By: displays all time series grouped by whichever parameter you choose; if you group by Location, then the graph will display the number of queries for all records in your domain by location over time.
Combine: instead of bringing all attention to the selected parameter like Group By does, it combines all time series that share that parameter. As a result, the graph no longer reflects data based on that parameter; instead, it becomes based on all of the other, unselected parameters.
Filter By: This allows you to specify a value within a parameter to filter data, allowing you to view any individual time series with ease.
Time Series Key
The Time Series Key shows the top 500 most queried time series and allows you to hand-pick which sets of data you would like to view in the graph above. Although the graph only shows 10 time series by default, you can view more or fewer by clicking the checkboxes next to each time series. Each time series is labeled with the same parameters used in the Time Series Explorer. The one boxed off in the screenshot above shows that it's an A record, the hostname is gardenorigin, the protocol is UDP, and the queries come from Frankfurt over IPv4; the two greyed-out boxes show that this time series does not have Geo-Proximity or GeoIP filters applied.
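The operations above are part of the Data Explorer UI rather than an API, so as a rough analogy only, here is how the Group By / Combine / Filter semantics would look on a toy query log in pandas; the sample locations, record types, and counts are invented.

```python
# Not part of the Constellix product: a toy pandas illustration of what the
# Group By / Combine / Filter operations conceptually do to query time series.
import pandas as pd

log = pd.DataFrame({
    "minute":   ["12:00", "12:00", "12:00", "12:01", "12:01", "12:01"],
    "location": ["Frankfurt", "Dallas", "Frankfurt", "Frankfurt", "Dallas", "Dallas"],
    "rtype":    ["A", "A", "AAAA", "A", "AAAA", "A"],
    "queries":  [120, 80, 30, 140, 25, 90],
})

# "Group By" Location: one series per PoP over time.
print(log.groupby(["minute", "location"])["queries"].sum().unstack())

# "Combine" on Location: locations merged, series now keyed by the remaining parameter (rtype).
print(log.groupby(["minute", "rtype"])["queries"].sum().unstack())

# "Filter By" Record Type = A: only the matching time series remain.
print(log[log["rtype"] == "A"].groupby("minute")["queries"].sum())
```

The analogy is loose, but it captures the distinction the docs draw: grouping splits the chart by a parameter, combining collapses that parameter away, and filtering narrows the view to one value.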
<urn:uuid:ce200d99-e871-4f2e-b161-e499ad573b7a>
CC-MAIN-2024-38
https://support.constellix.com/support/solutions/articles/47001150199-data-explorer
2024-09-09T06:22:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00715.warc.gz
en
0.904767
615
2.546875
3