I like to think of infrastructure as everything from wall jack to wall jack. Thinking of infrastructure in this manner enables effective conversations with those who are less familiar with the various components.
The term IT infrastructure is defined in ITIL as a combined set of hardware, software, networks, facilities, etc. (including all of the information technology related equipment) used to develop, test, deliver, monitor, control, or support IT services.
Associated people, processes, and documentation are not part of IT Infrastructure.
A network switch is the device that provides connectivity between network devices on a Local Area Network (LAN). A switch contains several ports that physically connect to other network devices, including:
- Other switches
Early networks used hubs, in which each device “saw” the traffic of all other devices on the network. Switches allow two devices on the network to talk to each other without forwarding that traffic to every other device on the network.
Routers move packets between networks. Routing allows devices separated on different LANs to talk to each other by determining the next “hop” that will allow the network packet to eventually get to its destination.
If you have ever manually configured your IP address on a workstation, the default gateway value that you keyed in was the IP address of your router.
Firewalls are security devices at the edge of the network. The firewall can be thought of as the guardian or gatekeeper.
A set of rules defines what types of network traffic will be allowed through the firewall and what will be blocked.
In the simplest version of a firewall, rules can be created which allow a specific port and/or protocol for traffic from one device (or a group of devices) to a device or group of devices. For example, if you want to host your own web server and limit it to only web traffic, you would typically have two firewall rules that look something like this:
Source | Destination | Port / Protocol | Description |
---|---|---|---|
any | 10.1.1.100 | 80 / HTTP | Web traffic in |
any | 10.1.1.100 | 443 / HTTPS | Secure web traffic in |
The source is the originating device. In this case, ‘any’ means ‘allow any computer to communicate’. The destination is the specific IP address of your internal web server. Port/protocol defines what type of traffic is allowed from the source to the destination. Most firewall devices also allow a description for each rule; it has no effect on the rule itself and is used only for notes.
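The default-deny matching logic behind such rule tables can be sketched in code. The evaluator below is a minimal illustration, not any real firewall product; the addresses and ports mirror the example table above:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    source: str       # "any" or a specific IP address
    destination: str  # "any" or a specific IP address
    port: int
    description: str  # notes only; has no effect on matching

# The two example rules for a web server at 10.1.1.100.
RULES = [
    Rule("any", "10.1.1.100", 80, "Web traffic in"),
    Rule("any", "10.1.1.100", 443, "Secure web traffic in"),
]

def is_allowed(src: str, dst: str, port: int, rules=RULES) -> bool:
    """Return True if any rule permits the packet; default-deny otherwise."""
    for rule in rules:
        if rule.source not in ("any", src):
            continue
        if rule.destination not in ("any", dst):
            continue
        if rule.port == port:
            return True
    return False  # no rule matched: traffic is blocked
```

A production firewall additionally matches on protocol, direction, and connection state, and evaluates rules strictly in order; this sketch only shows the default-deny principle.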
Firewall devices can get complicated quickly. There are many different types of firewalls which approach managing traffic in different ways. Detailed firewall capabilities and methods are beyond the scope of this post.
A network server is simply another computer, but usually larger in terms of resources than what most people think of. A server allows multiple users to access and share its resources. There are several types of servers, with the following being among the most common:
- A file server provides end users with a centralized location to store files. When configured correctly, file servers can allow or prevent specific users from accessing files.
- A directory server provides a central database of user accounts that can be used by several computers. This allows centralized management of user accounts which are used to access server resources.
- Web servers use HTTP (Hyper Text Transfer Protocol) to provide files to users through a web browser.
- There are also application servers, database servers, print servers, etc.
The physical plant is all of the network cabling in your office buildings and server room/data center. This all-too-often neglected part of your infrastructure is usually the weakest link and the cause of most system outages when not managed properly. There are two main types of cabling in the infrastructure:
- CAT 5/6/7
- Fiber optic
Each type of cabling has several different subtypes, depending on the speed and distance required to connect devices.
By the strict ITIL definition, people are not considered part of the network infrastructure. However, without competent, well-qualified people in charge of running and maintaining your infrastructure, you will artificially limit the capabilities of your organization.
- In larger organizations, there are specialty positions for each of the areas mentioned in this article.
- In smaller organizations, you will find that the general systems administrator (sysadmin) handles many of the roles.
Server rooms/data center
The server room, or data center in large organizations, can be thought of as the central core of your network. It is the location in which you place all of your servers, and it usually acts as the center of most networks.
Infrastructure software is perhaps the most “gray” of all infrastructure components. However, I consider server operating systems and directory services (like MS Active Directory) to be part of the infrastructure. Without multi-user operating systems, the hardware can’t perform its infrastructure functions.
Cliché images of AI are all over the internet – whether a generic picture of a robot with white plastic skin or a Terminator-esque cyborg out to kill innocent civilians.
To many, these images are part and parcel of a technology that, while decades old, is only now emerging into mainstream consciousness and struggles to be accurately depicted by media that do not understand it.
But one group has had enough. Better Images of AI is a nonprofit group led by We and AI CEO Tania Duarte. The group warns that without wider public comprehension of AI, mistrust in technology will grow and groups will continue to be underrepresented.
This week, Better Images of AI launched a report that acts as a guide for creators on how to better depict AI in their content. It was unveiled at an event at the Alan Turing Institute in London, which has been supporting the work, alongside the BBC R&D team.
“The idea of diversifying is really important to us as an institute because it doesn't help us or anybody to understand the science if the visuals actually detract from the complexity of the issues that are faced,” said Mark Burey, head of communications and marketing at the Alan Turing Institute.
The group’s image library is carefully curated and can be downloaded and used by anyone for free if credited using the Creative Commons license. Along with AI Business, the nonprofit’s images have been used in The Washington Post, the London School of Economics and Time Magazine, among others.
Kanta Dihal showcasing the report at an event at the Alan Turing Institute, London
The report covers research by Kanta Dihal, a senior research fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence, as well as findings from a series of roundtables and workshops in which stakeholders and creatives from across AI came together to identify what constitutes a good or bad image of AI.
And the spike in AI interest stemming from ChatGPT means the public need to be able to understand what is being shown in the media, Duarte said.
Dihal also authored a paper titled "The Whiteness of AI," in which she outlines that AI is often portrayed as white humanoid robots. At the launch of the Better Images report, Dihal said these depictions carry other associated ideas: that AI is built by and for white people, and that it is more intelligent.
The nonprofit also analyzed “the most common, unhelpful image tropes,” she said. Dihal warned of an increase in outlets using images of robots holding things like gavels and scales when referencing instances of increased automation in court and legal settings.
What constitutes a bad image of AI?
There are several parameters that can determine whether an image is unhelpful in its depiction of AI. According to the nonprofit, tropes to avoid include references to science fiction and white robots.
Also to be avoided are variations on Michelangelo’s religious fresco The Creation of Adam, anthropomorphized AI, and outlines of the human brain.
Tropes to avoid, according to Better Images of AI
The newly published report found that creators find it difficult to source decent images of AI and that deadlines, particularly in media and news settings, lead to quick decisions being made regarding image use.
The report also states that some of these unhelpful stock images have become so recognizable as “a shorthand for AI, especially for non-technical audiences.”
How to pick better images of AI
The Better Images of AI team point to four tenets for picking a better image of AI: honesty, humanity, necessity and specificity. Images need to represent the tech as accurately as possible showing what it can do, without extrapolating an unlikely future.
Any images including people should be inclusive of the groups involved in the development and deployment of AI, from software engineers all the way to end users. AI can encompass many things, but oftentimes content is accompanied by images of robots where none are involved.
The group’s better AI image library contains pictures of hardware, autonomous driving, human-AI collaboration and computer vision, but it wants to build out the image library in the near future.
“We'd like to have more topics, things like AI in engineering or AI in space – lots more nuanced choices within those topics. And we need them to be used by more people and organizations.”
What about AI-generated images?
Duarte told AI Business that they do have some images in the library that use AI. She expressed hesitancy but was not against the idea entirely.
“This has been months of discussion. Some of our success may come from people looking for images generated by AI. We are very conscious that this project is about showcasing the value of creativity and creative people. In our workshops, we have had people who are able to think about new metaphors, new concepts, new ways of expression, with input from technology,” Duarte said.
“There's no one view but it’s amazing that you can test new concepts by giving a prompt to an image generator to help you come up with new ideas. I would only hope that we can get to a stage where artists can be credited in the datasets that image generators are trained on," she added.
Dihal said that currently generated images of AI just regurgitate existing tropes, and that the need is to change them, not reproduce them.
Digitalization is getting a big word of mouth nowadays. In particular, the role of AI in digital transformation is getting extensive discussion as a central enabler of the tech-savvy business future.
Why, you ask? Digitalization requires the wide adoption of automated systems to reach business goals: superior consumer experiences, smoother daily operations, and better-informed business decisions. This is where AI digital transformation comes in. Let’s find out how this innovative technology makes a difference in modern business, covering the fundamentals of both concepts and the main applications of AI in digital transformation.
Behind the beast: what is digital transformation?
Digital transformation is a revolutionary overhaul of the business model. It also refers to the use of technology that converts legacy processes into digital ones.
This approach doesn’t come down to simply introducing the latest hardware and software; it also initiates fundamental changes in management approaches, corporate culture, and external communications. Digitalization of business processes aims to facilitate quick decision-making and enable easier, faster shifts in response to evolving needs.
The development of technology is beneficial for IT companies and other industries alike. Transportation, banking, retail, and agriculture are all flocking to the tech-first strategy, and even public services are going digital to cut costs and increase efficiency. Spending on digitalization was projected to reach $1.8 trillion in 2022 and to grow to $2.8 trillion by 2025.
What is AI digital transformation?
AI digital transformation is the use of intelligent, data-driven systems to power digitalization efforts. While the term sounds complicated, most smart systems share the same concept: they extract value from available data to promote digital initiatives. Hence, digital transformation through artificial intelligence allows businesses to capitalize on their data; it addresses weaknesses, predicts outcomes, and provides a competitive advantage in the market.
As you see, it all comes to the effective usage of data. Thus, Artificial Intelligence is the most potent protagonist of digitalization due to its analytical and processing capabilities.
Artificial intelligence in digital transformation: top impacts
Data literacy and intelligent systems open up a wealth of new opportunities for businesses. Insights allow companies to expand their reach and enhance current products and services. Thus, AI-driven digital transformation ushers in new business models and new revenue streams. Now let’s see where you can apply smart systems to reinforce digitalization.
Supreme customer experience
Around 84% of customers spend more money with brands that deliver great experiences. That is why brands should channel their digital efforts into ensuring greater experiences, and intelligent algorithms can help tap into the minds of consumers.
Moreover, AI revolutionizes marketing. It introduces personalized experiences, replacing outdated one-size-fits-all campaigns: algorithms track large groups of customers and their characteristics, then group them into detailed segments for targeting.
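The grouping step can be illustrated with a bare-bones clustering routine. The one-dimensional k-means below segments customers by a single invented metric (monthly spend); production systems cluster on many behavioral features at once and use hardened library implementations:

```python
def kmeans_1d(values, k=2, iterations=20):
    """Cluster scalar values (e.g. monthly spend) into k segments.

    Bare-bones Lloyd's algorithm for one feature; illustrative only.
    Assumes k >= 2 and at least one value per expected segment.
    """
    lo, hi = min(values), max(values)
    # Spread the initial centers evenly across the observed range.
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iterations):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters
```

Running it on monthly-spend figures such as `[10, 12, 11, 200, 210, 190]` separates low-spend and high-spend customers into two segments that a marketer could then target differently.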
The power of forecasting is the driving force behind business intelligence. Hence, analytics competency is a salient benefit of adopting artificial intelligence into your transformation journey. Data analytics amplifies decision-making through processing past and real-time data. This translates into data-driven marketing campaigns, effective asset maintenance, and improved financial performance.
Moreover, this form of AI assists companies well beyond marketing, from predictive asset maintenance to financial forecasting.
Smart security infrastructure is the next pillar of artificial intelligence digital transformation. As such, intelligent security involves leveraging AI to identify and avert cyber threats. With intelligent algorithms, businesses don’t need a large security staff. Artificial intelligence can automatically keep hackers at bay with less human intervention.
In 2019, around 75% of surveyed IT executives reported the use of AI for cybersecurity. The applications vary based on the business’s weak points. Thus, smart algorithms assist companies in network security and threat prevention. They also help avoid human errors and adopt biometric verifications.
Edge computing is heralded as the next frontier in digital and AI transformations. Edge AI combines edge computing with artificial intelligence: a distributed computing model in which computation does not take place on a centralized server or in the cloud but is instead brought onto the device itself.
Why is this setup necessary for the tech revolution? By 2025, the world is expected to generate 175 zettabytes of data. Sooner or later, that volume will overwhelm cloud networks, resulting in unacceptable response times. By bringing data processing to the edge, companies can drive real-time operations, connectivity, and scalability at affordable costs.
Paired with AI, edge computing can process data in real-time, without having to connect to a cloud. Deep learning also facilitates operations on the device itself, the origin point of the data.
Business operations is a broad term that encompasses a company’s daily activities, which aim to increase the value of the enterprise and yield bigger profits. According to IBM, data-based decision-making tools will reach a $2 trillion market by 2025, and artificial intelligence is what fuels these solutions.
Whether it’s strategic hiring or employee retention, algorithms rely on sentiment analysis technologies, biometrics, and others to uncover and correct hidden insights.
For example, IBM Watson Candidate Assistant is a popular talent management solution. It helps match potential candidates with the specific jobs that they will thrive in. As a result, companies act on accurate data and avoid biases that can occur in the hiring process.
Finance and accounting
According to the EY 2020 Global Tax and Transformation Survey, tax teams spend up to 70% of their time on tasks that could be automated. In this context, optical character recognition has proved effective in automating data extraction.
This technology distinguishes printed or handwritten text characters from scanned documents. Then, it converts the paper stimuli into code for further data processing. As a result, OCR document processing facilitates document recognition and eliminates human errors.
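OCR itself requires a recognition engine (such as the open-source Tesseract), but the downstream step, turning raw recognized text into structured fields, can be sketched in a few lines. The field patterns, function name, and sample invoice text below are invented for illustration and assume reasonably clean recognition output:

```python
import re

def extract_invoice_fields(ocr_text: str) -> dict:
    """Pull structured fields out of raw OCR output with simple patterns.

    Real pipelines add layout analysis and fuzzy matching to cope with
    recognition errors; this sketch assumes reasonably clean text.
    """
    patterns = {
        "invoice_number": r"Invoice\s*(?:No\.?|#)\s*:?\s*(\w+)",
        "date": r"Date\s*:?\s*(\d{4}-\d{2}-\d{2})",
        "total": r"Total\s*:?\s*\$?([\d,]+\.\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, ocr_text, re.IGNORECASE)
        fields[name] = match.group(1) if match else None
    return fields
```

Given recognized text like `"Invoice No: A1042\nDate: 2023-05-17\nTotal: $1,284.50"`, the function returns the invoice number, date, and total as clean values ready for further data processing.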
Chatbots and gamified employee training are also prominent examples of process automation in finance. They enable companies to reduce costs and scale up their services while repetitive tasks are handed over to machines.
Artificial intelligence digital transformation: picking your first AI project
Almost half of the companies are driving digital transformation using AI. The application area of machine intelligence is vast. From better security to hyper-automation, it underpins the digital efforts of businesses.
But companies shouldn’t rush into projects. First, they should align their key business perspective with the technology. An internal data and AI opportunity audit is also a vital prerequisite to implementing artificial intelligence. Choose relevant KPIs, set realistic goals, and train your team. Beyond that, there is much more to consider regarding the ethics and management of AI-enabled projects.
Combining digital transformation and AI can seem like a monumental task for time-tested practices. But once you tap into these uncharted waters, guesswork gives way to data-driven, calibrated efforts.
Looking for a way to move your business to digital or make it data-driven? Set up a call with our AI consultant to discuss your business and find a suitable solution for you.
What is Physical Security? +
Physical security refers to the measures and strategies implemented to safeguard physical assets and resources from unauthorized access, theft, vandalism, or harm. It involves the protection of tangible elements such as equipment, data centers, and personnel. Physical security measures are designed to prevent or mitigate physical threats and ensure the safety and security of an organization's premises. Key components of physical security include access control systems, surveillance systems (CCTV), intrusion detection systems, and other controls. A comprehensive physical security plan considers the specific risks and needs of an organization, aiming to create a secure and resilient environment.
What are the seven domains of CPP (Certified Protection Professional)? +
The seven domains covered in CPP are Physical Security, Security Principles and Practices, Business Principles and Practices, Investigations, Personnel Security, Information Security, and Crisis Management.
In the intricate world of data storage, understanding the Universal Disk Format (UDF) file system is crucial. What exactly is a UDF file, and why does it play a pivotal role in modern storage solutions? This article delves into the heart of UDF files, unveiling their significance and how they revolutionize data storage.
Are UDF files the key to future-proof data storage? As we navigate through the nuances of this format, we’ll explore its versatility and compatibility with various devices and operating systems. Join us as we dissect the UDF file system, revealing its integral role in contemporary technology.
Table of Contents:
- What is a UDF File?
- 7 Surprising Aspects of UDF File Systems You Need to Know
- Technical Specifications of UDF Files
- Advantages of Using UDF Files
- UDF vs Other File Systems
- Creating and Managing UDF Files
- UDF in Various Industries
- Troubleshooting Common UDF File Issues
1. What is a UDF File?
Definition and Overview
A Universal Disk Format (UDF) file is a file system specification widely used for storing data on optical media. It’s recognized as the successor to the ISO 9660 standard, offering greater versatility and efficiency. UDF is known for its ability to be read by multiple operating systems, including Windows, Linux, and macOS, making it an ideal choice for data interchange.
The primary advantage of UDF lies in its ability to handle large files and capacities, which is essential in an era where data size is constantly growing. It supports various media types, from CDs and DVDs to newer formats like Blu-ray discs. UDF ensures data integrity and can be used for both read-only and rewritable media.
Historical Evolution and Development
The development of UDF dates back to the mid-1990s, spearheaded by the Optical Storage Technology Association (OSTA). It emerged as a response to the limitations of the then-popular ISO 9660 standard, which couldn’t adequately handle the evolving needs of data storage on optical media.
Over the years, UDF has undergone several revisions to enhance its capabilities. Notable milestones include:
- UDF 1.02 (1996): Introduced basic support for rewritable media.
- UDF 1.50 (1997): Added support for stream files and improved the random write process.
- UDF 2.00 (1998): Focused on enhancing support for streaming applications and large files.
- UDF 2.50 (2003): Introduced Metadata Partition to improve reliability and performance.
- UDF 2.60 (2005): Added Pseudo OverWrite method for rewritable media, enhancing data rewriting capabilities.
Each update aimed to enhance compatibility with emerging technologies, maintain data integrity, and improve performance across diverse media types.
2. 7 Surprising Aspects of UDF File Systems You Need to Know
- Universal Compatibility: UDF’s ability to function across various operating systems like Windows, macOS, and Linux is unparalleled. This cross-platform compatibility is a significant advantage over other file systems.
- Media Versatility: Unlike other file systems that are optimized for specific types of media, UDF excels across different types of optical media – from CDs to Blu-ray discs, ensuring future-proof data storage.
- Large File Support: UDF is adept at handling large files, a necessity in an era of high-definition media and extensive data collections, which is a challenge for older file systems like FAT32.
- Robust Data Integrity: The architecture of UDF is designed to maintain data integrity, even in long-term storage situations. This makes it an ideal choice for archival purposes where data preservation is crucial.
- Rewritability and Flexibility: UDF supports rewritable media, allowing for data to be updated or modified without the need for reformatting, a feature not commonly found in traditional file systems.
- Adaptability to Industry Needs: From healthcare to entertainment, UDF’s adaptability to various industry requirements for data storage and transfer is notable. Its capacity to handle diverse data types and sizes makes it universally applicable.
- Forward-Looking Evolution: The ongoing development and refinement of UDF standards showcase its ability to adapt to emerging technological needs, highlighting its potential in future data storage and interchange solutions.
3. Technical Specifications of UDF Files
File System Architecture
The architecture of the UDF file system is designed to be robust and adaptable. It consists of several key components:
- Volume Structure: UDF uses a logical volume approach, allowing it to be independent of the underlying physical media. It includes the Main Volume Descriptor Sequence, which contains essential metadata about the file system.
- File Set: This is where the actual file and directory information is stored. UDF’s file set can handle a variety of file types and sizes efficiently.
- Allocation Descriptors: These manage how data is stored and retrieved from the disk, providing flexibility in data management and ensuring efficient use of space.
- Metadata Partition (in UDF 2.50 and later): This enhances file system integrity and performance by segregating metadata from file content.
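The structures above are built from descriptors, and every UDF descriptor begins with a fixed 16-byte tag defined by the underlying ECMA-167 standard. The sketch below decodes that tag from raw bytes; the sample input in the test is synthetic rather than read from a real disc:

```python
import struct

def parse_descriptor_tag(raw: bytes) -> dict:
    """Decode the 16-byte descriptor tag that prefixes every UDF descriptor.

    Layout per ECMA-167 (all fields little-endian): tag identifier,
    descriptor version, tag checksum, reserved byte, tag serial number,
    descriptor CRC, CRC length, and tag location.
    """
    if len(raw) < 16:
        raise ValueError("descriptor tag is 16 bytes")
    tag_id, version, checksum, _reserved, serial, crc, crc_len, location = \
        struct.unpack("<HHBBHHHI", raw[:16])
    # The checksum covers the other 15 tag bytes, modulo 256.
    computed = (sum(raw[:16]) - raw[4]) % 256
    return {
        "tag_identifier": tag_id,   # e.g. 2 = Anchor Volume Descriptor Pointer
        "descriptor_version": version,
        "checksum_ok": checksum == computed,
        "tag_location": location,   # sector where the descriptor claims to live
    }
```

A reader would typically apply this to the Anchor Volume Descriptor Pointer (conventionally at sector 256) and reject descriptors whose checksum or claimed location does not match.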
Compatibility with Operating Systems
One of UDF’s strengths is its wide compatibility with various operating systems:
- Windows: Windows has supported UDF since Windows 98, with each subsequent version improving its compatibility. Windows 10, for example, can read and write to UDF versions up to 2.60.
- macOS: Apple’s macOS provides robust support for UDF, allowing for both reading and writing of UDF files. This support is crucial for users who utilize optical media for data transfer between different platforms.
- Linux: Linux has had UDF support integrated into its kernel since version 2.6. UDF support in Linux covers a wide range of versions, making it versatile for different use cases.
The cross-platform nature of UDF ensures that it remains a popular choice for data storage and exchange, especially in environments where multiple operating systems are in use. This compatibility is key in today’s diverse technological landscape, ensuring seamless data transfer and access across different platforms.
4. Advantages of Using UDF Files
Versatility Across Platforms
One of the most significant advantages of UDF files is their platform-agnostic nature. This versatility makes UDF an ideal file system for data interchange across different operating systems. Key points include:
- Cross-Platform Compatibility: UDF files can be accessed on Windows, macOS, Linux, and various other operating systems without any need for additional software. This universal compatibility is particularly beneficial in mixed-OS environments.
- Media Versatility: UDF is not confined to a specific type of optical media. It supports CDs, DVDs, Blu-ray discs, and is adaptable to future optical media technologies. This flexibility makes it a future-proof choice for data storage.
Durability and Reliability for Long-Term Storage
UDF’s design focuses on the long-term preservation of data, offering durability and reliability:
- Data Integrity: UDF incorporates features that promote data integrity, such as robust error handling and recovery procedures. These features are crucial for archival purposes where data longevity is essential.
- Rewritability Support: UDF supports rewritable media, enabling data to be updated without compromising the integrity of the existing data. This is particularly useful for applications requiring regular data updates, like backups.
5. UDF vs Other File Systems
Comparing with FAT, NTFS, and Others
- FAT and FAT32: While FAT is widely compatible, it is limited in file and partition size (FAT32 caps individual files at 4 GB), making it less suitable for modern, large-capacity storage needs. UDF has no such practical limitations.
- NTFS: NTFS is ideal for internal drives due to its robust feature set (like security and large file support). However, it’s not as universally compatible as UDF for optical media.
- exFAT: exFAT bridges some gaps between FAT32 and NTFS, especially in handling large files and partitions. However, it’s not as universally compatible across different operating systems as UDF.
Different scenarios call for different file systems:
- Data Interchange: For sharing data across multiple platforms, especially on optical media, UDF is unparalleled due to its wide compatibility.
- Archival Storage: For long-term archival where data integrity is paramount, UDF’s stability and support for rewritable media make it a superior choice.
- Internal Drives and Large Files: For internal drives or situations involving large files and need for advanced features like file permissions, NTFS (on Windows systems) or ext4 (on Linux systems) might be preferable.
UDF’s unique blend of compatibility, flexibility, and reliability makes it an excellent choice for a variety of applications, particularly where data interchange and long-term data preservation are crucial.
6. Creating and Managing UDF Files
Creating and managing UDF files involves a few key steps, ensuring the data is stored efficiently and securely:
- Selecting the Right Tool: Choose software that supports UDF file creation, such as Nero Burning ROM, ImgBurn, or integrated tools in operating systems like Windows and macOS.
- Preparing the Media: Insert your optical media (CD, DVD, Blu-ray) into the drive. Ensure it’s clean and free of physical damage for optimal data integrity.
- Formatting the Media: Initiate the formatting process and select UDF as the file system. Choose the appropriate UDF version based on compatibility and requirements (e.g., UDF 2.60 for large files).
- Data Transfer: Copy the files to the optical media. Ensure not to exceed the media’s capacity and verify that file paths and names comply with UDF specifications.
- Finalizing the Disk: Once all data is transferred, finalize the disk. This step is crucial for read-only media, as it prevents further modifications and ensures compatibility across platforms.
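Parts of steps 3 and 4 can be automated with a rough pre-flight check. The helper below is hypothetical (not part of any burning tool), and its limits are simplified: UDF file identifiers are capped at 255 bytes, and the capacity figure assumes a single-layer DVD:

```python
DVD_CAPACITY_BYTES = 4_700_000_000  # single-layer DVD, decimal gigabytes
MAX_NAME_BYTES = 255                # UDF file identifier limit

def preflight(files, capacity=DVD_CAPACITY_BYTES):
    """Check a {path: size_in_bytes} mapping against simplified UDF/media limits.

    Returns a list of human-readable problems; an empty list means the
    set of files looks ready to burn.
    """
    problems = []
    for path, size in files.items():
        name = path.rsplit("/", 1)[-1]
        if len(name.encode("utf-8")) > MAX_NAME_BYTES:
            problems.append(f"name too long: {name[:40]}...")
    total = sum(files.values())
    if total > capacity:
        problems.append(f"data ({total} bytes) exceeds media capacity ({capacity})")
    return problems
```

Real tools also check path depth, reserved characters, and filesystem overhead; treat this as a sanity check before handing files to your burning software.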
Best Practices and Tips
- Regular Backups: For rewritable media, regularly back up data to prevent data loss due to media degradation.
- Compatibility Check: Before distributing UDF formatted media, verify compatibility with intended recipient systems.
- Labeling and Documentation: Clearly label the media and document its contents for easy identification and retrieval.
- Software Updates: Keep your UDF creation and reading software updated to ensure compatibility with the latest UDF standards and media types.
7. UDF in Various Industries
Case Studies and Real-World Applications
UDF’s versatility has led to its adoption in various industries:
- Media and Entertainment: UDF is extensively used in the production and distribution of movies and music on DVDs and Blu-ray discs, offering a universal format for consumers worldwide.
- Healthcare: In medical imaging, UDF is utilized to store high-resolution images like MRIs and CT scans, ensuring compatibility with various diagnostic equipment.
- Archival Systems: Libraries and government archives use UDF for long-term storage of digital records, valuing its stability and integrity over time.
Future Trends and Potential
Looking forward, UDF has significant potential in several areas:
- Cloud Storage Integration: As cloud services continue to grow, integrating UDF standards for data interchange between cloud and physical media could enhance data portability and accessibility.
- 4K and 8K Video Formats: With the rise of ultra-high-definition video, UDF’s capacity to handle large file sizes makes it ideal for future home entertainment and professional media formats.
- Data Preservation: In an era of digital preservation, UDF’s durability makes it a candidate for long-term archival solutions, especially for critical data that must outlive current hardware and software systems.
UDF’s adaptability and robustness position it well for continued relevance across various industries, adapting to emerging technological trends and storage needs.
8. Troubleshooting Common UDF File Issues
Common Challenges and Solutions
- Incompatibility Issues:
  - Challenge: UDF files not being recognized by some operating systems or devices.
  - Solution: Ensure the UDF version used is compatible with the target operating system. Update the device firmware or software if needed.
- Data Corruption:
  - Challenge: Corruption of data stored in UDF format, often due to improper handling or disk errors.
  - Solution: Use disk repair tools specific to UDF file systems. Regularly back up data to mitigate the risk of data loss.
- Slow Performance:
  - Challenge: Slow read/write speeds when accessing UDF files, especially on older systems.
  - Solution: Defragment the disk and check for system updates. If using rewritable media, consider a lower UDF version or a different media type.
- Formatting Errors:
  - Challenge: Errors encountered during the formatting process in UDF.
  - Solution: Verify the compatibility and condition of the optical media. Use reliable formatting tools and follow the recommended settings for UDF.
- Regular Updates: Keep software and firmware related to UDF file handling up-to-date for optimal performance and compatibility.
- Quality Media: Invest in high-quality optical media for better reliability and longevity of UDF files.
- Professional Tools: Utilize professional-grade tools for creating and managing UDF files, especially in a commercial or industrial setting.
Recap of Key Points
- UDF files offer unparalleled versatility across platforms, making them ideal for data interchange and storage.
- They are robust, supporting large files and capacities, and providing durability for long-term data preservation.
- UDF stands out against other file systems like FAT and NTFS, especially in its compatibility with different operating systems and media types.
- The file system has found applications across various industries, from media and entertainment to healthcare and archival systems.
Future Outlook of UDF Files
As we look toward the future, UDF files continue to hold significant potential. Their adaptability ensures they remain relevant in evolving technological landscapes, particularly with emerging media formats and cloud integration. The ongoing development of UDF standards will likely further enhance their capabilities, cementing their role as a key player in data storage and interchange. | <urn:uuid:41ab45a4-0c65-4716-be57-b962512b2eb5> | CC-MAIN-2024-38 | https://networkencyclopedia.com/udf-file-system/ | 2024-09-19T01:07:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00623.warc.gz | en | 0.895993 | 2,977 | 2.8125 | 3 |
What is Data Science?
Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured; in this respect it is closely related to data mining. Put simply, it’s a way to make sense of all your data.
Why use Data Science?
Whatever your goals within your organization, chances are that achieving them will require doing more with your data. So along with a powerful in-memory database and a clear data strategy, having data scientists to help you get the most value from your analytics will be of huge benefit to you.
Latest Data Science Insights
There is always something new to learn in software development. Our technical expert has put together a reading list for all enthusiastic developers.
Being able to predict future trends and needs with precision is key to success. So how can you solve the complex analytic challenges that are coming along with it? And how can you save money on your analytics tools?
Big data science is revolutionizing the way businesses create value from data. The ability to do high-performance in-database programming is the fundamental building block of big data science applications.
Interested in learning more?
Whether you’re looking for more information about our fast, in-memory database or want to explore our latest insights, case studies, video content and blogs, we can help guide you into the future of data.
Uplink (UL or U/L for short) and downlink (DL or D/L for short) are two basic terminologies used in mobile communications. These are two communication links that allow the mobile phone and the mobile network to communicate with each other.
Uplink is the communication link that transmits the signal from the mobile phone to the network base station (eNodeB, gNodeB, etc.). Downlink is the communication link from the base station to the phone. In CDMA networks, the downlink is called a forward channel, and the uplink is called a reverse channel.
When you make a phone call using your cell phone, the phone is busy doing many things. The two most basic are that it is continuously sending and receiving. The mobile network base station is doing the same thing. What the mobile phone sends is received by the base station, and what the base station sends is received by the mobile phone.
The mobile phone and the mobile network are connected to each other even when you are not on a phone call. The mobile phone is always providing its status update to the mobile network. In simple terms, it is always telling the mobile network, “I am here”.
Since both the base station (cell tower) and the mobile phone (cell phone) need to communicate with each other, the communication is separated by creating two dedicated links. These links basically determine who is sending, the phone or the network. This is where uplink and downlink communication comes in.
Downlink communication is when a radio network base station transmits the radio signal from its antennas to the antennas of a mobile phone or cell phone. Radio network base stations are BTS, NodeB, eNodeB, gNodeB, etc.
Uplink communication is when a mobile phone or cell phone transmits the radio signal from the phone antennas to the antennas of the radio network base station (BTS, NodeB, eNodeB, gNodeB, etc.).
Uplink and downlink in CDMA networks
The terms uplink and downlink are used by most mobile networks in the world, including second-generation GSM (Global System for Mobile Communications), third-generation UMTS (Universal Mobile Telephone System), fourth-generation LTE (Long Term Evolution) and fifth-generation NR (New Radio).
However, the CDMA-based 2G and 3G mobile networks use different terminologies for these communication links. In CDMA networks, namely IS-95 (Interim Standard 95), CDMA2000 and EVDO (Evolution-Data Optimized), the communication link from the base station to the cell phone is called a forward traffic channel or forward channel. The communication link from the cell phone back to the base station is called a reverse traffic channel or reverse channel.
In summary, the downlink communication in CDMA networks (IS-95, CDMA2000 and EVDO) is called a forward traffic channel or forward channel, and the uplink communication in CDMA networks is called a reverse traffic channel or reverse channel.
| Technology | Network-to-Phone communication | Phone-to-Network communication |
|---|---|---|
| GSM (2G) | Downlink | Uplink |
| UMTS (3G) | Downlink | Uplink |
| HSPA (3G) | Downlink | Uplink |
| IS-95 or CDMAOne (2G) | Forward Traffic Channel | Reverse Traffic Channel |
| CDMA2000 (3G) | Forward Traffic Channel | Reverse Traffic Channel |
| EVDO (3G) | Forward Traffic Channel | Reverse Traffic Channel |
| LTE (4G) | Downlink | Uplink |
| NR (5G) | Downlink | Uplink |
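For illustration, the table above can be captured as a small lookup structure. The `LINK_TERMS` dictionary and the helper functions below are hypothetical and exist only to mirror the table; they are not part of any standard API.

```python
# Link terminology by technology, mirroring the table above.
LINK_TERMS = {
    "GSM (2G)": ("Downlink", "Uplink"),
    "UMTS (3G)": ("Downlink", "Uplink"),
    "HSPA (3G)": ("Downlink", "Uplink"),
    "IS-95 or CDMAOne (2G)": ("Forward Traffic Channel", "Reverse Traffic Channel"),
    "CDMA2000 (3G)": ("Forward Traffic Channel", "Reverse Traffic Channel"),
    "EVDO (3G)": ("Forward Traffic Channel", "Reverse Traffic Channel"),
    "LTE (4G)": ("Downlink", "Uplink"),
    "NR (5G)": ("Downlink", "Uplink"),
}

def network_to_phone_term(tech):
    """Name of the base-station-to-phone link for a given technology."""
    return LINK_TERMS[tech][0]

def phone_to_network_term(tech):
    """Name of the phone-to-base-station link for a given technology."""
    return LINK_TERMS[tech][1]

print(network_to_phone_term("CDMA2000 (3G)"))  # Forward Traffic Channel
print(phone_to_network_term("LTE (4G)"))       # Uplink
```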
Why uplink and downlink?
The terms uplink and downlink are also used in satellite communications. The link from the satellite to the ground is called downlink, and the link from the ground to the satellite station is called uplink. To remember this concept in mobile communications, look at the picture below.
As you can see, base station antennas are generally mounted higher than a mobile phone; therefore, they need to transmit the radio signal downwards, hence the term downlink. On the other hand, a mobile phone generally has to send the radio signal in the upward direction, hence the term uplink. Of course, this situation may change if you are in a high-rise building, but the terminologies are based on the typical situation for the majority of users.
The uplink and downlink separation – Duplexing
The uplink and downlink communication links are separate, but the way they are separated from each other may differ depending on the mobile cellular technology, e.g. GSM, UMTS, LTE, NR, etc. The technique used to determine this separation is called a duplexing scheme, which is a fundamental concept in mobile communications. The duplexing scheme determines whether the uplink and downlink communication shall be carried out on separate frequency bands or separate timeslots (of the same frequency band).
When the uplink and downlink communication takes place using separate dedicated frequency bands, it is called Frequency Division Duplex (FDD) because the separation or division is based on frequency bands. When uplink and downlink communication takes place on the same frequency band but in different timeslots, it is called Time Division Duplex (TDD) because the separation or division is based on timeslots.
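That distinction can be sketched with a deliberately simplified model in which each direction is described by a single carrier frequency. Real 3GPP band plans are far richer, so treat this as a toy classifier only; the band numbers in the comments are real LTE examples, but the function itself is hypothetical.

```python
def duplexing_scheme(uplink_carrier_hz, downlink_carrier_hz):
    """Classify a simplified carrier configuration per the definitions above:
    separate uplink/downlink bands -> FDD; a shared band in which uplink and
    downlink take turns in timeslots -> TDD."""
    if uplink_carrier_hz == downlink_carrier_hz:
        return "TDD"  # same band, separated in time
    return "FDD"      # separated in frequency

# LTE Band 3 (FDD): uplink around 1.7 GHz, downlink around 1.8 GHz
print(duplexing_scheme(1_747_500_000, 1_842_500_000))  # FDD
# LTE Band 40 (TDD): uplink and downlink share the 2.3 GHz band
print(duplexing_scheme(2_350_000_000, 2_350_000_000))  # TDD
```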
I have written a dedicated post on Frequency Division Duplex (FDD) and Time Division Duplex (TDD), which you can check out for more details. | <urn:uuid:568f218c-6896-417b-ab9c-6759526cb5f9> | CC-MAIN-2024-38 | https://commsbrief.com/uplink-and-downlink-in-mobile-communications/ | 2024-09-20T06:46:36Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00523.warc.gz | en | 0.925018 | 1,173 | 3.78125 | 4 |
As guardians of the nation’s critical infrastructure, we cannot wait for a critical event to occur before we take action. We must be proactive in developing well-vetted plans as well as preparing and rehearsing standard operating procedures. Only a few months ago, Hurricane Irene wreaked havoc across the Northeast. We were extremely lucky that major cities were not under water. Imagine sump pumps being inoperable during a hurricane due to inadequate maintenance or a lack of either utility or back-up power systems. With weather trends showing increases in the frequency and severity of storms, we must develop and perfect our disaster response plans, and more specifically, improve how we collaborate and communicate with first responders.
There are newly developed guidelines, methodologies, and standards that provide some collaboration and communication tools. Building on the 2003 “Interagency Paper on Sound Practices to Strengthen the Resilience of the US Financial System,” the U.S. Department of Homeland Security developed the National Incident Management System or NIMS in 2008. NIMS is a comprehensive, flexible methodology for incident management that is scalable at all levels and across many disciplines, providing an organized group of operational structures to facilitate coordinated actions between various organizations during emergency planning. While not a detailed plan in and of itself, NIMS enhances the cooperation among levels of government, the private sector, and the public at large and ensures interoperability among first responders and important stakeholders during a disaster. NIMS focuses on five key components:
• Preparedness: Ongoing preparation, organization, training, exercising, and evaluation of emergency management plans utilizing partnerships between the government and private sector.
• Communications: Use of flexible communications and information systems allowing emergency management and first responders to employ a common operating picture. It stresses the use of interoperability, reliability, scalability, and portability to streamline communication between various entities.
• Resource management: Efficient use and management of critical resources during emergencies.
• Command and management: Provides a flexible, uniform structure for incident management.
• Ongoing management and maintenance: Continuous improvement and integration of best practices and lessons learned.
NIMS provides some methodology, but in today’s complex regulatory environment, firms must comply with current policies and regulations in order to operate efficiently and ensure that their critical infrastructure is prepared for any emergency. The National Fire Protection Association’s (NFPA) Standard 1600 on Disaster/Emergency Management and Business Continuity Programs establishes a “total program approach” for disaster management planning with first responders. It suggests that a disaster recovery plan should address the following specific areas: First, organizations should align their incident management systems to allow cooperation with first-responder agencies. Second, the plan should ensure that communication systems within the facility, such as radios, telephones, and internet notification systems, are tested and maintained. Finally, the standard recommends that first responders be provided with adequate training, clothing, and equipment when responding to a facility emergency.
In addition to NIMS, Presidential Policy Directive 8 (PPD-8) calls for an integrated, layered, and all-of-nation preparedness approach. It calls for concrete, measurable, and prioritized objectives for mitigating risks. Frameworks include prevention, protection, mitigation, response, and recovery for particular threats and scenarios and provide recommendations for supporting preparedness planning for businesses, communities, families, and individuals. The goal of this directive is to address the largest amount of people as possible to mitigate the effects of a disaster.
A recent Heritage Foundation white paper analyzed Japan’s response to the early 2011 earthquake and tsunami and highlighted three “lessons to be learned” by the U.S. and the Department of Homeland Security. First, it suggested the adoption of a more decentralized emergency response and called for an end to “over-federalization” of decision making at the congressional level. Secondly, it highlighted the fact that community awareness and the effective communication of risk saved more lives in Japan than their extensive technological protection devices. Finally, it stressed the need for resilience of critical infrastructure, in particular its most “vital” element—the electric grid.
Excellent examples of powerful tools for managing situations such as this can be seen at the Rio de Janeiro command center and in VCORE Solutions’ fourDScape (see figure 1). The command center brings together data from 30 of the city’s agencies on multiple screens and allows the city to quickly coordinate and manage responses to crises shortly after they occur. fourDScape integrates numerous sensor and video feeds into a virtual environment that can be accessed vertically throughout a chain of command, from managers to upper-level decision makers, as well as horizontally across emergency response teams. This keeps all involved parties up to date with critical information as it becomes available, through an intuitive, dynamic virtual user interface.
In recent years, we have experienced record-breaking earthquakes, floods, drought, lethal tornados, snowstorms, and the third most chaotic hurricane season ever recorded. In fact, the first half of 2011 saw $265 billion in economic losses, well above the previous record of $220 billion for 2005, the year Hurricane Katrina struck (see table 2).
These events established that the efficient interaction and communication with first responders is critical for an effective response to any disaster. When any emergency threatens our nation’s critical infrastructure, the situation becomes more and more dire. As mission-critical professionals we must ask ourselves the following questions:
• Do our plans include a contact list of emergency responders, such as police, fire, and EMT?
• Has a liaison for communications with emergency services and first responders been assigned?
• Does the plan define how to interface with first responders, utility companies, and other infrastructure and public authorities?
• Have we prepared a list of “critical facilities” to include any location where a critical operation is performed?
• Are any internal corporate risk and compliance policies applicable?
Developing a disaster response and business continuity plan is a cooperative task involving input and coordination from all departments within your organization, including key personnel from IT, facilities, human resources, and security, as well as vendors, external stakeholders, and corporate suite. Outside consultants should also be engaged to provide feedback and conduct peer reviews of new plans. Adequate budgets must also be allocated, with a financial plan that ensures funds are well spent.
A business continuity plan serves as a tool to effectively guide a company’s personnel through the impact and recovery of a disaster. Training and exercises should be conducted on a routine basis not only to test the plan but to ensure that key personnel are aware of their roles and responsibilities in regard to the plan. Now is a good time to ask: Have you determined which information from your continuity plan is needed by first responders in case of an emergency? Will your personnel know where/how to locate this information during a disaster? This may include building plans, electrical one-line diagrams, emergency operating procedures, and other emergency response plans. Once this information is compiled, it must be stored and maintained in a dedicated application hosted in a secure environment. This allows your personnel to support first responders by quickly providing the critical logistical information required during an emergency.
The questions highlighted in this article are among some of the more important ones to consider when preparing your organization for an emergency; however there are many more that should also be considered. Today, there is no business that is immune to disasters. Adequate preparation and training will allow the industry to benefit from firsthand emergency responder experience. While only one percent of our infrastructure might be classified as “critical,” it is imperative that first responders and mission-critical personnel identify this one percent and establish effective disaster recovery plans that will put it back in operation as soon as possible should disaster strike. Most of America’s critical infrastructure is operated by the private sector; are we prepared to protect it during an emergency?
Reprints of this article are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.
Data Privacy Law and GDPR Implementation in Bulgaria
Bulgaria’s Protection of Personal Data Act 2002 or the Act for short is a data privacy law that was originally passed in 2002. Prior to the passing of the General Data Protection Regulation or GDPR in 2016, data protection within the country of Bulgaria was governed by the Protection of Personal Data Act 2002. However, as Bulgaria is one of the many nations that make up the European Union, the Personal Data Act 2002 was amended to implement the applicable provisions of the EU’s GDPR law into Bulgarian law, effectively laying the legal framework from which personal data may be collected and processed within Bulgaria, as well as the punishments that can be imposed against individuals and organizations who are found to be in violation of said framework.
What is the scope and application of Bulgaria’s Protection of Personal Data Act 2002?
In terms of the scope and application of Bulgaria’s Protection of Personal Data Act 2002, the personal scope of the law “applies to organizations that process the data of identifiable natural persons. Personal data of deceased persons may be processed only based on legal grounds.” Alternatively, the territorial scope of the “law applies in the territory of the Republic of Bulgaria and imposes obligations on controllers and processors who process the data of natural persons in Bulgaria.” Moreover, the material scope of the law “covers the rules for the processing of personal data by private organizations and public authorities, specific categories of personal data, and personal data by automated means.”
What are the requirements of data controllers and processors under Bulgaria’s Protection of Personal Data Act 2002?
The requirements of data controllers and processors under Bulgaria’s Protection of Personal Data Act 2002 are largely the same as those set forth by the EU’s GDPR. Under the GDPR, data controllers and processors must adhere to a number of data protection principles when collecting or processing personal data, including but not limited to lawfulness, fairness and transparency, and data minimization. Additionally, the two laws differ with respect to the age of consent for data collection and processing: under the EU’s GDPR, the age of consent is 16, while Bulgaria’s Protection of Personal Data Act 2002 lowers this age to 14.
Furthermore, Bulgaria’s Protection of Personal Data Act 2002 also varies from the EU’s GDPR law as it relates to Data Protection Impact Assessments (DPIAs). Under the Act, data controllers and processors within the country must conduct DPIAs if they carry out any of the following types of data processing operations:
- Large scale processing of biometric data for the unique identification of the individual which is not sporadic;
- Processing of genetic data for profiling purposes which produces legal effects for the data subject or similarly significantly affects them;
- Processing of location data for profiling purposes which produces legal effects for the data subject or similarly significantly affects them;
- Processing operations for which the provision of information to the data subject pursuant to Article 14 of the GDPR is impossible or would involve disproportionate effort or is likely to render impossible or seriously impair the achievement of the objectives of that processing, when they are linked to large scale processing;
- Personal data processing by the controller with the main place of establishment outside the EU when its designated representative for the EU is located on the territory of the Republic of Bulgaria.
What are the penalties for violating Bulgaria’s Protection of Personal Data Act 2002?
Bulgaria’s Protection of Personal Data Act 2002 is enforced by the Commission for Personal Data Protection, or the CPDP for short. To this end, the CPDP has the authority to “impose sanctions (fines), as well as compulsory administrative measures (such as the issuance of warnings, orders to comply with certain requirements, etc.).” Such sanctions include a monetary fine ranging from BGN 5,000 ($2,901) to €2.6 million ($2,947,776), as well as a fine that can amount to “double the amount of the initially imposed fine, but not more than the maximum envisaged in Article 83 of the GDPR.” What’s more, data controllers and processors also face fines of up to €10,000,000.00 ($11,291,250) or 2% of a company’s global annual turnover, whichever is higher, and, for the most serious infringements, fines of up to €20,000,000.00 ($22,583,400) or 4% of global annual turnover, whichever is higher.
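The “whichever is higher” rule behind the two GDPR fine tiers can be made concrete with a short sketch. `gdpr_fine_cap` is a hypothetical helper; the amounts are the Article 83 fixed ceilings expressed in euros, and real fines depend on many factors beyond this simple cap.

```python
def gdpr_fine_cap(global_turnover_eur, serious_infringement=False):
    """Maximum administrative fine under Article 83 of the GDPR:
    the fixed ceiling or the turnover-based ceiling, whichever is HIGHER.
    Lower tier: EUR 10M or 2% of global annual turnover.
    Higher tier: EUR 20M or 4% of global annual turnover."""
    if serious_infringement:
        return max(20_000_000, 0.04 * global_turnover_eur)
    return max(10_000_000, 0.02 * global_turnover_eur)

# A company with EUR 1 billion in global annual turnover:
print(gdpr_fine_cap(1_000_000_000))        # 20000000.0  (2% beats EUR 10M)
print(gdpr_fine_cap(1_000_000_000, True))  # 40000000.0  (4% beats EUR 20M)
```

For a small company, the fixed ceiling dominates instead, which is exactly why the regulation states the cap as a maximum of the two figures.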
Through the amendment of Bulgaria’s Protection of Personal Data Act 2002 in accordance with the provisions of the EU’s GDPR, the data protection and privacy rights of Bulgarian citizens were legally guaranteed. As Bulgaria has a strong history of providing such legal protections to its citizens, the implementation of the General Data Protection Regulation is a fitting advancement in terms of the protection of personal data and privacy in the 21st century. Through the passing of such legislation, the European Union continues to set an international standard with respect to personal data protection that will continue to influence other nations for years to come.
Key benefits of using augmented reality in business
Augmented reality (AR), which enhances the perception of reality, is the integration of digitised information such as computer-generated audio, text, graphics, GPS data and other effects that augment or enhance users’ real-world experience in real time. AR blends the best of the physical and digital worlds to create distinct digital experiences. The technology allows overlaying of new information on an existing physical environment and enables users to experience reality in a more engaging and heightened way. Since headsets and mobile devices are the mediums of experience, AR is easily accessible to everyone who uses a smartphone or any other hand-held device.
Organisations worldwide are pouring in millions of dollars to develop more and more sophisticated applications based on AR and virtual reality (VR). The AR market alone is expected to reach USD 61.39 billion by 2023. Consumers are also getting accustomed to the benefits of shopping from a business that is AR-enabled. In fact, over 70 per cent of online shoppers report that they would buy more often from a business that uses AR. And 40 per cent of consumers are willing to pay more if they can customise a product using AR.
There are multiple benefits of implementing AR technology in business.
Creates distinct experiences
AR creates unique digital experiences without the need of any special software or hardware. Smartphones and even web browsers are enough, making AR accessible to almost everyone. AR places digital components over physical ones and creates an almost mirage-like effect. For example, a tourist can open the AR app and point her smartphone at an interesting object. Digital snippets will immediately pop up on the screen and deliver all the information about the object. This type of AR use has been utilised by various businesses.
Eliminates cognitive overload
Handling huge chunks of information about an unfamiliar subject is always frustrating. Cognitive overload happens when an individual’s working memory is forced to handle more than it comfortably can, leading to poor decision making. AR can be the helpful assistant here since it presents information in neatly summarised snippets. For example, a leading global car manufacturer has built an app that allows drivers to use their phones or tablets to get to know their cars well and carry out basic maintenance techniques without being overloaded by information from a paper manual.
Even professional car mechanics are known to use AR to locate defective parts and understand the repair process in easily digestible snippets. During the pandemic, when travel was restricted, industrial equipment, aircraft, power plants still needed to be serviced. At this juncture, AR-enabled visual instructions and remote virtual assistance came to the rescue of many businesses.
Increases user engagement
During the initial days of AR, it was labelled as a technology for the gaming and entertainment industry. Its uses have now expanded to include many aspects of doing business. User engagement is one of them. Considered to be the gateway to many other benefits, increased user engagement means increased brand loyalty and more spending. Interactive ads, scannable product labels, store signage and catalogues are all ways to engage users.
Enables competitive market positioning
Every brand tries to distinguish itself from other brands selling the same products. So far this differentiation has been created using traditional advertising channels. With the advent of AR, brands can do more to make consumers happy. For example, a leading global sportswear brand has developed an AR tool that helps buyers find the perfect shoe size and check how a shoe would look on the buyer. All that is needed is a smartphone with a camera. Similarly, a leading paint brand allows consumers to digitally try out a shade of paint before purchasing it. Furniture and homeware brands, car brands, apparel and cosmetic brands are all jumping on to the AR bandwagon to woo customers.
Builds immersive training modules
Organisations can now create immersive training modules for employees without having to break the bank. With AR-based tools, employees can easily visualise and simulate various situational scenarios during the training process. Using data analytics, an interactive AR-based evaluation system can also be used to evaluate training modules, operational efficiency, employee productivity or any other activity.
Implements real-time analytics
Data analytics provides organisations with actionable data that can be used for monitoring operations, making strategic decisions and for security. In fact, a report presented at the Gartner® Data & Analytics Summit 2022 stated that 80 per cent of the businesses surveyed reported higher revenues because of real-time data analytics.
AR can make the process even more efficient by integrating real-time analytics on site. Systems can be monitored in real time and necessary adjustments can be made to boost efficiency. When critical processes are integrated into a data analytics system, the AR app being used can create visual representation of the data along with valuable insights and that enables the operator to adjust process parameters on the go, and thus boost efficiency.
AR has become a critical tool for businesses today. In the retail world, it is only a matter of time before every popular brand adopts augmented reality in ecommerce for its shopping experience. AR can also change experiences for millions of people with different disabilities. For example, AR smart glasses can be used to gather product information by shoppers who cannot move their hands while AR smartphones can help the visually impaired by collecting all the information about products laid out on a shelf. AR-powered headsets can also help visually impaired users navigate a new space. All of these are undeniable business opportunities besides being tools to increase accessibility for persons with certain disadvantages.
DAS is common terminology wireless industry professionals use to describe the infrastructure designed to distribute a cellular signal within a certain area, either indoor or outdoor. In most cases, the term references indoor DAS; however, there are instances when outdoor DAS is discussed, so there is a practical need to distinguish between the two. You’ve probably guessed it by now: iDAS stands for indoor DAS and oDAS stands for outdoor DAS.
iDAS is the most common form of DAS infrastructure in use. Here, we will describe the individual aspects of an iDAS system. Remember the fiber backhaul? It’s a fiber optic network run by wireless carriers which connects their central office with remote hubs. As we explained, all signals are transported digitally on this network. Fiber optic cables allow carriers to transmit data long distances with little or no signal loss. Also, various optical technologies are utilized such as DWDM (Dense Wave Division Multiplexing) or CWDM (Coarse Wave Division Multiplexing). This all sounds very complicated, but it’s basically a way to send multiple signals along a single fiber. The actual optical fibers are often maintained by 3rd parties and wireless carriers sometimes have to lease them which can get very expensive. Using as few fibers as possible reduces that cost.
Wireless carriers usually position remote hubs based on demand. Imagine venues like sports complexes, hotels, airports, university campuses, malls and hospitals. These are typical establishments where wireless users congregate in large numbers. Setting up a BTS in these venues makes the most economical and strategic sense for carriers. Basically, a BTS is designed to perform analog-to-digital and digital-to-analog signal conversions. A BTS has intelligent software built into it and does everything from call processing to frequency hopping. Everything about these BTS units is proprietary; you’d have to be a pretty resourceful individual to obtain specifications for these machines. A BTS pumps out very high-powered RF signals, anywhere between 5 and 80 watts depending on the model and manufacturer.
iDAS doesn’t need such high RF power, so some processing is done before distribution; the power shed along the signal chain can be thought of as an “inefficiency wagon”. Carriers use custom attenuators to pad RF power down to a manageable level, around 0 dBm (which equals 1 mW by definition). They also use RF diplexers to separate various technologies such as AWS, GSM and PCS from one another. (A diplexer is a passive component used to combine or separate two RF frequencies, depending on the direction of the signal.) Then each RF signal goes through an RF duplexer (not to be confused with the diplexer), which further separates the signal into uplink and downlink. RF signals originating from a user’s handset are described as uplink, and signals coming from a BTS or a donor antenna are known as downlink.
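To make these power levels concrete, here is a small Python sketch of the dBm/milliwatt conversion. The 40 W input is just an illustrative BTS output within the 5–80 W range mentioned above, not a figure from any vendor spec.

```python
import math

def dbm_to_mw(dbm: float) -> float:
    """P(mW) = 10 ** (dBm / 10); 0 dBm is exactly 1 mW."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw: float) -> float:
    """P(dBm) = 10 * log10(P / 1 mW)."""
    return 10 * math.log10(mw)

# A hypothetical 40 W (40,000 mW) BTS output:
bts_dbm = mw_to_dbm(40_000)
# Attenuation (in dB) needed to pad it down to the ~0 dBm level fed to a DAS head-end:
pad_db = bts_dbm - 0
print(f"BTS output: {bts_dbm:.1f} dBm, pad needed: {pad_db:.1f} dB")
```

This is why knocking a tens-of-watts BTS signal down to 0 dBm takes on the order of 40–50 dB of attenuation.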
Let’s recap before continuing. The high-power analog RF signals from the BTS are attenuated (signal power is reduced), diplexed (signals are combined or separated) and duplexed (each signal is separated into its uplink and downlink constituents). Now the signal is finally ready to be fed into a DAS Head-End, which is broadband equipment designed to function as a neutral (multi-carrier, multi-frequency) distribution hub inside buildings. The system loses efficiency at this point (the wagon is rolling) because the processed analog RF signals are converted into optical signals at the DAS Head-End and then split optically before being distributed over more fiber optic cables to multiple Remote Units (RUs). Each RU does optical-to-RF conversion and re-amplifies the signal before sending it to indoor antennas. Sometimes the RF is split again, and signals from one RU get sent to multiple antennas via coaxial cable. This is where RF engineers perform their magic.
Engineers use software like IBwave to calculate RF propagation inside the building in order to distribute signals. They pick and choose from various passive components such as splitters, combiners and directional couplers to balance every antenna equally inside the building. Don’t worry about these different components. Just know that engineers use them to manipulate the signal in order to balance each antenna. This is not an exact science at this point and there is no silver bullet that solves every coverage issue. Buildings are constructed differently and walls are erected at inconvenient spots that block good RF propagation.
If you think installing DAS inside a building sounds complicated, try doing the same thing in a football stadium. Believe us, it’s not an easy task. But nerds do have power and in some cases your happiness depends on them. Suppose your girlfriend wants to update her Facebook status from “Single” to “Engaged” using her mobile phone during an NFL game (because of your impromptu marriage proposal fueled by beer and the adrenaline of seeing your team score the winning touchdown.) Those friendly RF engineers made your special occasion a reality by designing a good DAS system and now your girlfriend is able to announce your engagement to the whole world before you can change your mind. Her phone worked and so did those of the 80,000 other spectators in the same RF jungle you call a stadium.
Usually RF engineers will divide the coverage area inside the stadium into multiple sectors and make sure RF signals propagate properly within a particular sector but don’t experience interference from nearby cell towers and the next DAS sector. Each sector functions independently and has its own DAS equipment. Engineers rely on their RF expertise and industry adopted RF propagation software. Wireless signal handoff occurs when a cellular user moves from one particular sector to another. Everything is controlled by software installed on the BTS. Transfer has to be seamless. Unless of course you like dropped calls and data outage. Similar concerns must be addressed in an oDAS environment, which will be discussed next.
oDAS functions similar to iDAS but requires Remote Radio Heads (RRHs), which are similar to the RUs in iDAS and use relatively high power. You typically see RRHs with 1 to 5 or even 20 Watt output RF power. These RRH nodes are housed in weather-proof outdoor enclosures and placed in public areas to enhance cellular coverage. oDAS deployments are relatively limited in the US because the same areas can be covered by macrocells. Remember macrocells are typically cell towers and owned and operated by turfing vendors and independent tower companies. Wireless carriers don’t own cell towers. Instead, they lease a space on the towers to mount their antennas and amplifiers. Space on a cell tower is prime real estate and carriers pay big money for it.
Another aspect of wireless coverage is emergency mobile deployments. During events like natural disasters, wireless carriers deploy mobile repeater stations (RATs) to provide coverage. There are also COWs. (We’re not sure who makes these names up either.) COW stands for Cell on Wheels, and it’s a full-on high-powered cell tower mounted on a truck. RAT means Repeater and Trailer. RATs are smaller in capacity compared to COWs, meaning they can’t handle as much traffic.
iDAS Head-End equipment is supplied by manufacturers like Commscope, TE Connectivity, Corning/MobileAccess and Solid Technologies in the US. These manufacturers are commonly referred to as DAS OEMs (Original Equipment Manufacturers) or vendors. The DAS market is beginning to mature and has experienced some consolidation during the last 3-5 years. Commscope acquired Andrew, Tyco bought ADC, and MobileAccess became part of Corning. DAS OEM equipment must be approved by carriers before system integrators can install it at various venues.
With static NAT, a particular inside local address always maps to a particular inside global (public) address. Due to one-to-one mapping between addresses static NAT does not conserve public IP addresses. Although static NAT does not help with IP address conservation, it provides a degree of security by hiding the inside IP addresses from the outside world. Static NAT also allows an administrator to make an inside server available to clients on the Internet, because the inside server will always use the same public IP addresses.
Let’s configure router R1 performing NAT as depicted in Figure 10-3.
Figure 10-3 NAT Scenario
Here is how the configuration goes.
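The article’s actual configuration listing did not survive in this copy, and the inside global addresses quoted in Table 10-2 appear scrambled. The following is therefore only a representative static NAT configuration for the topology described, using RFC 5737 documentation addresses (203.0.113.x) and hypothetical interface names in place of the originals:

```
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip address 192.168.1.1 255.255.255.0
R1(config-if)# ip nat inside
R1(config-if)# exit
R1(config)# interface GigabitEthernet0/1
R1(config-if)# ip address 203.0.113.1 255.255.255.0
R1(config-if)# ip nat outside
R1(config-if)# exit
R1(config)# ip nat inside source static 192.168.1.2 203.0.113.2
R1(config)# ip nat inside source static 192.168.1.3 203.0.113.3
R1(config)# ip nat inside source static 192.168.1.4 203.0.113.4
```

The three ip nat inside source static lines create the one-to-one mappings, while the per-interface ip nat inside and ip nat outside commands mark which side of the router each interface sits on.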
The ip nat inside source command identifies which IP addresses will be translated. In the preceding configuration example, the ip nat inside source command configures a static translation between inside local and inside global IP addresses as shown in Table 10-2 below.
Table 10-2 Static NAT Address Mapping
| Inside Local Addresses | Inside Global Addresses |
| --- | --- |
| 192.168.1.2 | 188.8.131.52 |
| 192.168.1.3 | 184.108.40.206 |
| 192.168.1.4 | 220.127.116.11 |
Notice also the ip nat commands under each interface in the configuration. The ip nat inside command identifies an interface as the inside interface, and the ip nat outside command identifies an interface as the outside interface. The ip nat inside source command references the inside interface with the inside keyword and the source address with the source keyword. The static keyword indicates a static one-to-one mapping between inside local and inside global addresses.
The ip nat inside source command simply instructs the router to translate the source address of every packet entering the router on the inside interface. To ensure two-way communication, return packets arriving on the outside interface are translated accordingly in the opposite direction.
Once you finish your NAT configuration, you will usually want to verify that it is working as expected. You may also need to monitor NAT translations in a production environment. It is tempting to use the show running-config command to verify that the NAT configuration lines you entered are actually present in the running configuration of the router, but this tells you nothing about whether actual translation of addresses is taking place.
The starting point for NAT verification and troubleshooting should be the show ip nat translations command:
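The command output itself was lost from this copy. Before any traffic is sent, a representative output for three static mappings would look like this (addresses are illustrative RFC 5737 placeholders, since the article’s originals did not survive):

```
R1# show ip nat translations
Pro  Inside global      Inside local       Outside local      Outside global
---  203.0.113.2        192.168.1.2        ---                ---
---  203.0.113.3        192.168.1.3        ---                ---
---  203.0.113.4        192.168.1.4        ---                ---
```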
The output above shows inside local addresses mapped to inside global addresses as configured on R1 and summarized in Table 10-2. It does not include any real translations so far because the inside hosts have not sent any traffic yet; what we see is just the result of the three ip nat inside source commands in our configuration. In order to see some real translations, let’s generate some traffic by pinging the server 18.104.22.168 from each of the three inside hosts and run the show ip nat translations command again on R1:
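The article’s own output was lost here as well. A representative reconstruction follows, with RFC 5737 placeholder addresses standing in for the originals; the destination server is quoted inconsistently in this copy (18.104.22.168 vs 22.214.171.124, both apparently scrambled), so 203.0.113.50 stands in for it:

```
R1# show ip nat translations
Pro  Inside global      Inside local       Outside local      Outside global
icmp 203.0.113.2:1     192.168.1.2:1      203.0.113.50:1     203.0.113.50:1
icmp 203.0.113.3:2     192.168.1.3:2      203.0.113.50:2     203.0.113.50:2
icmp 203.0.113.4:3     192.168.1.4:3      203.0.113.50:3     203.0.113.50:3
---  203.0.113.2       192.168.1.2        ---                ---
---  203.0.113.3       192.168.1.3        ---                ---
---  203.0.113.4       192.168.1.4        ---                ---
```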
In the above output, you can see three dynamic ICMP entries, each corresponding to a ping from an inside host to the server 22.214.171.124. Typically you would see several dynamic entries in the translation table for the same inside host communicating with several outside hosts using various protocols.
Another useful tool for NAT verification is the show ip nat statistics command:
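The sample output did not survive in this copy; a representative show ip nat statistics output (interface names and counters are illustrative) looks like:

```
R1# show ip nat statistics
Total active translations: 6 (3 static, 3 dynamic; 3 extended)
Peak translations: 6, occurred 00:02:31 ago
Outside interfaces:
  GigabitEthernet0/1
Inside interfaces:
  GigabitEthernet0/0
Hits: 12  Misses: 0
```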
This command provides NAT statistics including the number of translated packets or hits.
Please note that translation table entries eventually time out. However, you may also use the clear ip nat translations command to clear specific entries from the translation table before they time out on their own. In order to clear all entries from the translation table, use an asterisk (*) at the end of the command:
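The original output did not survive in this copy; a representative sequence (with illustrative placeholder addresses) would be:

```
R1# clear ip nat translations *
R1# show ip nat translations
Pro  Inside global      Inside local       Outside local      Outside global
---  203.0.113.2        192.168.1.2        ---                ---
---  203.0.113.3        192.168.1.3        ---                ---
---  203.0.113.4        192.168.1.4        ---                ---
```

The dynamic ICMP entries are cleared, while the three static mappings remain in the table because they are part of the configuration.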
As you can see in the above output, the ICMP translation entries are all gone following the clear ip nat translations * command. | <urn:uuid:c2eb20ff-56af-4bcb-9539-6a43fa1470f6> | CC-MAIN-2024-38 | https://www.freeccnastudyguide.com/study-guides/ccna/ch10/10-2-static-nat-configuration-verification/ | 2024-09-10T14:57:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00523.warc.gz | en | 0.899893 | 796 | 2.9375 | 3 |
The History and Future of Cellular Technology
Reviewing Cellular Technology Evolution

The evolution of cellular technology over the past decades has been nothing short of miraculous. Coverage has dramatically improved, data is vastly cheaper, and wireless speeds with the latest devices rival landline broadband performance.
Even more exciting is that this rapid pace of advancement shows no signs of slowing down. It's quickening as carriers compete to build next-generation 5G networks.
But what exactly is 5G? Does it even matter how many Gs you have? And what is 4G/LTE, for that matter, and what came before?
Behind the scenes, the cellular world is filled with technical standards and protocols designed to push more data faster with each new generation. While the deeper technical details may be more than most need to worry about, this guide offers a high-level look into the evolution of cellular technologies and what to expect in the years ahead. MCA is at the forefront of these advancements, collaborating with top partners to lead the charge in the evolving landscape of cellular communication.
Where Have We Been?
Cellular technology has undergone four major generations and is expanding into the fifth. New cellular generations emerge roughly every decade but expand, improve, and iterate over their life-cycles.
Here's a look back at the old days, when even basic text messaging seemed exotic and new.
What Is a G?
Long ago, carriers adopted “G” for “Generation” as simple marketing shorthand. When you see terms like 3G, 4G, and 5G, it simply means the 3rd generation, 4th generation, and so on.
Devices of a given generation are usually compatible with at least one or two prior generations. This is an essential trait since cellular networks are often slow to upgrade, resulting in significant generational overlap. However, the reverse is not true—a third-generation device will miss out on all the speed and technical advancements of 4G technologies, and 4G/LTE devices cannot take advantage of the improvements of new 5G networks. This is why regularly upgrading your cellular devices is beneficial to avoid falling behind the technology curve.
The changes between different generations of cellular technologies come down to redefining the rules of how towers and devices communicate, allowing new gear to be developed under those new specifications. Think of it as back in the 1960s, when the maximum highway length for a vehicle was 35 feet. Nowadays, that maximum length is 45 feet, which results in larger vehicles transporting more commercial goods across the country, buses that move more people, and motorhomes with more luxuries.
In other words, evolving the standards to take advantage of newer technologies enables a new generation of more advanced devices that would have been impossible before.
The Olden Days – 1G, 2G, 3G
In the US, Verizon and Sprint used CDMA, while AT&T, T-Mobile and most of the rest of the world used GSM. Each of these two technologies had its own evolving and incompatible standards. CDMA and GSM competed through the 3G era, but as with many technology standards battles, there could be only one winner.
The 4G Revolution
Early in the 4G era, Sprint bet big on a 4G technology called WiMAX and rushed to be the first to bring next-generation 4G service to market. Meanwhile, Verizon predicted that the future would be the next generation of GSM technology known as LTE (Long-Term Evolution) and began aggressively building out the first and largest 4G/LTE network in the United States. AT&T, also GSM-based, lagged behind Verizon in promoting and deploying LTE.
Seeing the LTE writing on the wall, Sprint stopped expanding its 4G WiMAX network and shifted to focus on LTE, effectively ending the standards battle—GSM and its successor, 4G/LTE, won the war.
LTE: One Unified Global Standard
LTE is a global 4G standard that nearly every phone manufacturer and cellular network embraces. While all carriers use the same standardized LTE technology, they often use different and incompatible radio frequencies.
Even though LTE emerged as the global standard, this standard operates on cellular frequency bands that vary by country and carrier. A smartphone or cellular device isn't truly globally compatible unless it supports most cellular bands in use globally.
Over the last decade, all carriers in North America built robust LTE networks and have now shut down their legacy 3G networks entirely to make room for the rapidly developing 5G.
LTE-Advanced and Carrier Aggregation
Devices that support LTE-Advanced carrier aggregation can combine multiple bands for faster speeds. Living up to its “Long Term Evolution” name, LTE networks were designed to evolve with new capabilities and speeds while remaining compatible with earlier LTE devices. These evolved capabilities are known as LTE Advanced or LTE-A.
The most significant feature that LTE Advanced provided is Carrier Aggregation, which lets an LTE-A radio combine multiple LTE channels from different discontiguous chunks of spectrum to create more bandwidth, supporting even faster speeds. Carrier aggregation is also an important core technology of 5G networks.
Early LTE devices that supported carrier aggregation could combine two 20 MHz channels for a peak theoretical cellular speed of 300Mbps. "LTE-Advanced Pro" takes this even further, supporting over ten data streams, pushing 4G cellular to deliver breathtaking gigabit speeds.
Think of it like a 100-lane highway through the sky!
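The aggregation arithmetic above can be sketched in a few lines. The 150 Mbps per 20 MHz channel figure is the LTE Category 4 single-carrier peak (2x2 MIMO, 64-QAM); real-world throughput varies widely with signal conditions, so treat this as back-of-the-envelope math, not a performance prediction.

```python
# Rough peak-rate arithmetic for LTE carrier aggregation.
PER_20MHZ_MBPS = 150  # per-20 MHz channel peak (2x2 MIMO, 64-QAM)

def peak_rate_mbps(num_20mhz_channels: int) -> int:
    """Theoretical peak rate for N aggregated 20 MHz channels."""
    return num_20mhz_channels * PER_20MHZ_MBPS

print(peak_rate_mbps(2))   # two aggregated channels: 300 Mbps, as above
print(peak_rate_mbps(10))  # ten data streams: 1500 Mbps, gigabit-class
```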
LTE to 5G Cellular Evolution Transition
The fifth generation of cellular networks, 5G, is quickly expanding, with all carriers making significant network advancements. Like 4G/LTE before it, 5G will grow in capability over time as new standards and iterative improvements are developed and phased in.
However, LTE will remain a core cellular technology for many years. The transition from 4G to 5G will not be quick, and there will be a long overlap period. One key difference in this transition is that 5G and LTE are designed from the start to coexist, making the transition more seamless.
Deciding whether it's time to upgrade depends largely on your geographical location. Urban areas are seeing significant 5G upgrades, and it may be advantageous to take advantage of that. However, in more rural areas, even if 5G is available, speeds may not be much better than LTE, and you'll continue to receive excellent LTE service for years.
Network Retirements and Refarming for the Future
Refarming the Network
Refarming means repurposing the spectrum used by older networks for newer technologies. The process requires shutting down those older networks, which means slower performance, reduced coverage, and eventual obsolescence for devices reliant on outdated technologies. In exchange, newer technologies deliver faster speeds and greater efficiency, improving overall connectivity. As a result, upgrading devices and technology over time is essential to staying connected.
Transitioning to LTE and 5G
Earlier generation cellular networks have mostly been phased out in favor of LTE and 5G. Here’s an overview of the 2G and 3G shutdowns across major carriers in the United States:
- T-Mobile: Currently the only carrier still utilizing a 2G network in the U.S., T-Mobile initially planned to shut down its 2G network on January 1, 2023, but this has been rescheduled to April 2, 2024. T-Mobile’s 2G network persists due to its minimal spectrum usage and the slow upgrade of legacy 2G systems like wireless home alarms.
- Verizon: Ended its 2G network at the end of 2020.
- AT&T: Shut down its 2G network back in 2017.
- Sprint: T-Mobile shut down Sprint’s 2G network in 2022 alongside the Sprint 3G network shutdown.
All major carriers have decommissioned their 3G networks, relegating older devices to obsolescence. While some local and regional providers, as well as international carriers, may still offer 3G, they too are transitioning to newer technologies. Below is a summary of each major carrier’s 3G shutdown:
- Verizon: Officially shut down its 3G network on December 31, 2022.
- Verizon CDMA Network Retirement Consumer Support Article
- Verizon CDMA Network Retirement Business Support Article
- Verizon List of Incompatible Devices
- AT&T: Ended 3G services on February 22, 2022.
- AT&T 3G Shutdown Information
- Compatibility Lists:
- AT&T Phone and Hotspot Whitelist
- AT&T Unlocked Device Compatibility List
- AT&T IoT Certified Devices List/Database
- Cricket (AT&T’s Prepaid Subsidiary): Encourages customers to check compatibility using their BYOD IMEI Check System.
- T-Mobile: Officially shut down its 3G network on July 1, 2022.
- T-Mobile Network Evolution Page
- T-Mobile IMEI Check Tool
- Sprint: Retired its 3G CDMA network on May 31, 2022.
Devices from the pre-LTE era may not function on current networks or will have significantly reduced capabilities. While most 4G/LTE data-only devices were unaffected by the 3G shutdowns, some required firmware updates to maintain compatibility due to reliance on 3G for authentication. Early 4G smartphones, which depended on 3G for voice calls, were impacted unless they supported VoLTE (Voice over LTE), which ensured continued functionality post-shutdown.
Upcoming 4G Shutdowns
4G and LTE services will continue to be supported by Verizon, AT&T, and T-Mobile well into the future, likely for another decade or more. However, T-Mobile shut down the Sprint 4G/LTE network on June 30, 2022, reallocating that spectrum for its 5G network.
The 5G Present and Future
The era of 5G is here and rapidly evolving. Marketing hype aside, 5G genuinely promises significantly faster peak speeds and much lower network latency, and it supports a vast number of connected devices.
The ultimate goal of 5G networks is to achieve peak data rates exceeding 10 Gbps with network latency as low as 1 ms, representing a 50-fold increase in network capacity and throughput compared to typical 4G/LTE networks. This major generational shift requires new radio technologies and additional wireless spectrum.
Each carrier approaches 5G deployment differently based on available spectrum, eventually covering low, mid, and high band frequencies to balance range and speed.
Similar to LTE, 5G technology and performance are guided by specifications from the 3GPP, the global organization defining mobile broadband standards. Here are the current phases:
- 5G Phase 1 - 3GPP Specification Release 15: Covers most 5G devices from 2018 through 2022.
- 5G Phase 2 - 3GPP Specification Releases 16 & 17: Devices with these specifications started appearing in late 2021.
- 5G Advanced - 3GPP Specification Release 18+: Focuses on specifications through 2025.
As with 4G/LTE, the technical standards for 5G will continue to evolve, improving speed and capability over time.
What Is Beyond 5G?
The 5G standard is designed to last well beyond 2030, evolving and advancing compatibly over time. During the 2020s, carriers will gradually shift 4G capacity to 5G, dedicating those radio channels to 5G service or hybrid 4G/5G service. By the early 2030s, we may begin transitioning to 6G.
While cellular companies boast about the speed of their networks, the practical necessity for most mobile users remains modest, typically around 5 Mbps for streaming HD video. The true benefit of faster networks lies in increased capacity: faster networks serve more users and devices efficiently, which is crucial in areas with limited spectrum and oversaturated networks.
Carriers have already released LTE-only devices that no longer support older 3G and 2G networks. In the future, 5G will make today’s fastest LTE devices seem outdated.
What About 6G??
The plans for launching 6G are still under heavy development. For more information about the work our partners at Nokia are doing to make 6G a reality, visit this article from our sister company, Infinity Technology Solutions.
About MCA and Our CNS Team
MCA is one of the largest and most trusted integrators in the United States, offering world-class voice, data, and security solutions that enhance the quality, safety, and productivity of customers, operations, and lives. More than 65,000 customers trust MCA to provide carefully researched solutions for a safe, secure, and more efficient workplace.
Our Cellular Networking Solutions (CNS) team (formerly known as USAT) is made up of certified experts in designing and deploying fixed and mobile wireless data connectivity solutions for public and private enterprises nationwide - complete with implementation, training, proof of concept (POC), system auditing, and on-site RF surveying services with optional engineering maintenance contracts.
Our extensive catalog of world-class routers, gateways, and software designed for remote monitoring and management in even the harshest environments allows us to deliver a full suite of reliable technologies capped with a service-first approach.
Cyber security training and policy management is imperative for businesses, according to the Hacking the Human OS report by Intel Security.
There is growing concern about the effect of cyber crime on the global economy, which is currently estimated to be around $445bn.
The report encourages businesses to address and educate employees on the “six levers of influence” being used in the digital world by hackers.
According to the report endorsed by the European Cyber Crime Centre (EC3), all businesses and employees should be aware of the basic persuasion techniques commonly used by cyber criminals.
These techniques are used by cyber criminals to manipulate employees to do things they would not usually do, often resulting in the loss of money or valuable data.
The report comes just days after spear phishing attacks were identified as the key initial step in what has been described as the most daring cyber heist to date, believed to have netted up to $1bn.
Bank computers and networks were breached by targeted phishing attacks, demonstrating the inherent weakness in the “human firewall” and the need to educate employees about the top persuasion techniques in use in the digital world, the report said.
Despite the huge role played by social engineering in cyber attacks, a recent study by Enterprise Management Associates found that only 56% of all workers had gone through any form of security or policy awareness training.
The scale of the problem is further underlined by the fact that two-thirds of the world’s email is now spam aiming to extort information and money.
The extent and severity of social engineering were highlighted by McAfee Labs, which identified a dramatic increase in the use of malicious URLs, with more than 30 million suspect URLs identified towards the end of 2014.
The increase is attributed to the use of new short URLs, which often hide malicious websites, and a sharp increase in phishing URLs.
These URLs are often “spoofed” to hide the true destination of the link and are frequently used by cyber criminals in phishing emails to trick employees.
With 18% of users targeted by a phishing email clicking on a malicious link, the increased use of these tactics is cause for concern, the report said.
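One classic spoofing pattern the report alludes to is a link whose visible text shows a trusted domain while the underlying href points somewhere else entirely. As a rough illustration (not from the report — the domains, helper names and the two-label domain heuristic are all assumptions of this sketch), a mail filter could flag such mismatches with Python’s standard library:

```python
import re
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects (href, visible text) pairs for every <a> tag in an HTML body."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def _looks_like_domain(text):
    # Only compare when the visible link text itself looks like a URL/domain.
    return bool(re.fullmatch(r"(https?://)?[\w.-]+\.[A-Za-z]{2,}(/\S*)?", text))

def _registered_domain(netloc):
    # Crude heuristic: keep the last two labels ("mybank.com"). Production
    # code should consult the Public Suffix List to handle e.g. "co.uk".
    return ".".join(netloc.lower().split(":")[0].split(".")[-2:])

def suspicious_links(html_body):
    """Flag links whose visible text shows one domain but whose href points elsewhere."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        if not _looks_like_domain(text):
            continue
        shown = urlparse(text if "://" in text else "http://" + text).netloc
        actual = urlparse(href).netloc
        if shown and actual and _registered_domain(shown) != _registered_domain(actual):
            flagged.append((text, href))
    return flagged

email = '<p>Your account is locked. <a href="http://evil.example.net/login">www.mybank.com</a></p>'
print(suspicious_links(email))  # the shown/actual domain mismatch is flagged
```

This catches only the “display text says one domain, href says another” trick; it says nothing about shortened or merely unfamiliar URLs, which need other checks.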
This provides more incentive for consumers and employees to be on guard for the most prolific phishing and scam techniques currently in use, the report said.
Studies have indicated that around 80% of workers are unable to detect the most common and frequently used phishing scams.
“The most common theme we see when investigating data breaches today is the use of social engineering to coerce the user into an action which facilitates malware infection,” said Raj Samani, chief technology officer for Europe at Intel Security and advisor at EC3.
“Cyber criminals have become expert at exploiting the subconscious of a trusted employee, often using many of the selling tactics we see in everyday life. Businesses must adapt and employ the right mix of controls for people, process and technology to mitigate their risk,” he said.
Paul Gillen, head of operations at EC3, said cyber criminals do not necessarily require substantial technical knowledge to achieve their objectives.
“Some well-known malicious tools are delivered using spear phishing emails and rely on psychological manipulation to infect victims’ computers,” he said.
“The targeted victims are persuaded to open allegedly legitimate and alluring email attachments or to click on a link in the body of the email that appeared to come from trusted sources,” added Gillen.
Six levers of influence
1. Reciprocation
When people are provided with something, they tend to feel obligated and subsequently repay the favour.
2. Scarcity
People tend to comply when they believe something is in short supply. For example, a spoof email claiming to be from a bank asking the user to comply with a request or else have their account disabled within 24 hours.
3. Consistency
Once targets have promised to do something, they usually stick to their promises because people do not wish to appear untrustworthy or unreliable. For example, a hacker posing as a company’s IT team could have an employee agree to abide by all security processes, and then ask them to perform a suspicious task supposedly in line with security requirements.
4. Liking
Targets are more likely to comply when the social engineer is someone they like. A hacker could use charm via the phone or online to win over an unsuspecting victim.
5. Authority
People tend to comply when a request comes from a figure of authority. This could be a targeted email to the finance team that might appear to come from the CEO or company president.
6. Social validation
People tend to comply when others are doing the same thing. For example, a phishing email might look as if it’s sent to a group of employees, which makes each employee believe that it must be okay if other colleagues also received the request. | <urn:uuid:e0fef697-5fc5-4898-a733-2642acc5c7c8> | CC-MAIN-2024-38 | https://www.computerweekly.com/news/2240240671/Intel-Security-warns-of-six-social-engineering-techniques-targeting-businesses | 2024-09-13T00:35:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00323.warc.gz | en | 0.963129 | 1,055 | 2.515625 | 3 |
The landscape of generative AI and large language models (LLMs) has undergone a seismic shift, dominating conversations from corporate boardrooms to living rooms. For example, the recent emergence of large-scale language models like GPT-4 has revolutionized natural language processing (NLP) by pushing the boundaries of understanding and generating human language. To fully harness the potential of these pre-trained models, it’s critical to simplify the deployment and management process for real-world applications, such as chatbots, recommendation systems, language translation services and medical diagnosis. Enter a new concept to help — large language model operations, or LLMOps.
Which raises the question: What exactly is LLMOps? And how does it impact responsible AI practices? Let’s dive in.
How LLMOps Helps Navigate the Unique Challenges of LLMs
LLMOps gets its name from combining two concepts: LLMs and MLOps (machine learning operations). In a nutshell, LLMOps is MLOps for LLMs. LLMs are foundational AI/ML models that can tackle various NLP tasks, such as generating and classifying text, answering questions in chatbot and virtual assistant conversations, and translating content into different languages with minimal effort.
LLMOps, on the other hand, borrows principles of MLOps to address unique challenges posed by LLMs, like massive compute resources, data quantity and quality and deployment complexity. The aim of LLMOps is to optimize how efficiently and effectively we can use these LLMs in real-world language-related applications and make them production ready. In this sense, LLMOps and MLOps come together to streamline and enhance the smooth operation of processes, tasks and models, ensuring they stay aligned with the intended goals.
To level set, let’s review the five steps in the MLOps process that help make ML projects operational (Figure 1):
- Business understanding: Any AI/ML project starts with understanding a specific business problem. This will ensure that the AI/ML project has well-defined objectives and KPIs that provide value to the organization.
- Data preparation: This phase involves accessing, ingesting, cleansing and transforming raw data into a format suitable for ML algorithms.
- Model building and training: In this phase the right algorithms are selected and fed with preprocessed data, allowing the model to learn patterns and make predictions. Continuously retraining the model through hyperparameter tuning and repeatable processes steadily improves its accuracy.
- Model deployment: In this step, the model is packaged so it’s scalable for predictions. Typically, this means exposing the model as APIs for integration into various applications.
- Model management and monitoring: The final step involves monitoring model performance metrics, detecting data and model drifts, retraining the model as necessary and keeping stakeholders informed about model performance.
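The five phases above can be sketched end-to-end in a few lines. This is a deliberately minimal illustration, not the tooling a real MLOps stack would use: the "model" is a trivial least-squares line fit, and all data and thresholds are made up.

```python
# A deliberately minimal sketch of the five MLOps phases. The "model" is a
# trivial least-squares line fit; real projects would use an ML framework,
# but the lifecycle shape is the same. All data and thresholds are made up.

def prepare_data(raw):
    """Phase 2: cleanse raw records by dropping rows with missing values."""
    return [(x, y) for x, y in raw if x is not None and y is not None]

def train_model(data):
    """Phase 3: fit y = a*x + b by ordinary least squares."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return {"slope": a, "intercept": (sy - a * sx) / n}

def deploy(model):
    """Phase 4: package the model behind a callable 'endpoint'."""
    return lambda x: model["slope"] * x + model["intercept"]

def monitor(predict, holdout, threshold=1.0):
    """Phase 5: flag drift when mean absolute error exceeds a threshold."""
    mae = sum(abs(predict(x) - y) for x, y in holdout) / len(holdout)
    return {"mae": mae, "retrain": mae > threshold}

# Phase 1 (business understanding) fixed the objective: predict y from x.
raw = [(1, 2.1), (2, 3.9), (None, 5.0), (3, 6.1), (4, 8.2)]
data = prepare_data(raw)
endpoint = deploy(train_model(data))
status = monitor(endpoint, [(5, 10.0)])
```

In a production setting the deployed endpoint would sit behind a REST API and the monitoring step would run continuously, but the division of responsibilities stays the same.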
Interestingly, the LLMOps lifecycle (Figure 2) is very similar to the MLOps flow illustrated above. The difference? You can avoid expensive model training because the LLMs are already pre-trained. However, you must consider tuning the prompts (i.e., prompt engineering or prompt tuning) and, if necessary, fine-tuning the models for domain-specific grounding.
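As an illustrative sketch of the prompt-engineering step: rather than retraining the pre-trained LLM, you adjust the instructions and the domain grounding sent with each request. The template wording and parameter names below are hypothetical, not any specific vendor's API.

```python
# An illustrative sketch of prompt engineering: the pre-trained model is
# left untouched, and behavior is steered by assembling a grounded prompt.
# Template wording and field names are hypothetical assumptions.

def build_prompt(task, domain_context, user_input):
    """Assemble a grounded prompt from reusable pieces."""
    return (
        f"You are an assistant for {task}.\n"
        f"Use only the following context:\n{domain_context}\n"
        f"Question: {user_input}\nAnswer:"
    )

prompt = build_prompt(
    task="customer support",
    domain_context="Refunds are processed within 5 business days.",
    user_input="How long do refunds take?",
)
```

The string produced here would then be sent to the LLM; grounding the prompt in your own data is what makes the generic model useful for a specific domain.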
While this is all extremely helpful, none of it is possible without, you guessed it, data. The reality? The effectiveness and reliability of AI and ML models depend on the availability of high-quality data. If the data is inaccurate, the AI model’s behavior will be adversely affected during both training and deployment, which can lead to incorrect or biased predictions.
And this is no less true for LLMs. As you start adopting generative AI and LLMs, what will separate you from other organizations is how well you fine-tune, operationalize and customize your own LLMs with your data. And the ability to drive scalable and responsible adoption of generative AI will depend on how effectively the data that fuels LLMs is managed. This is where data management comes in.
How Informatica Sets the Stage for Generative AI Success
Effective data management is a must-have for developing and training reliable ML models. That’s why you need to make sure your data is ready for AI with Informatica Intelligent Data Management Cloud (IDMC), powered by our CLAIRE metadata-driven AI technology.
Let’s review some key considerations for data management in LLMOps and explore how Informatica IDMC can help operationalize LLMs so you’re ready to compete in the AI-driven landscape.
- Business understanding: Informatica data governance capabilities can streamline documentation of processes, systems, data elements and policies for key business domains. This enables collaboration among data governance, data stewardship and subject matter experts, which can provide context and alignment into AI/ML projects.
- Data preparation: Ensuring high-quality and consistent data is essential for accurate LLM performance. IDMC provides comprehensive end-to-end cloud native data engineering capabilities. This enables data engineers to process and prepare big data workloads to fuel AI/ML and analytics — a must to win in today’s crowded market.
- Model training: The tools needed for model development are primarily those that are already familiar to the data scientist. Informatica INFACore, an IDMC service, can help by providing plug-ins into various interfaces, like Jupyter notebooks. This streamlines data integration, data quality and data transformation tasks. These plug-ins can enhance the data scientists’ productivity by enabling them to build and train the models efficiently.
- Model deployment and monitoring: To stay competitive, you need to be able to quickly put AI into action and at scale. Informatica ModelServe can help by operationalizing high-quality and governed AI/ML models built using just about any tool, framework or data science platform, at scale, within minutes. Once models are put into operation using Informatica ModelServe, they become accessible through scalable REST API endpoints. Informatica ModelServe ensures a highly available and low-latency service for deploying and keeping track of these models.
Effective LLMOps needs robust data management, and Informatica IDMC provides that foundation by offering a broad range of capabilities, including data integration, data quality, data transformation, data governance, data security and automation, workflow orchestration and model serving. These capabilities are instrumental in building and maintaining high-performance LLMs and, more importantly, in driving innovation and transformation in today’s fiercely competitive environment.
In a future blog, I will delve deeper into the practical implementation of LLMOps, such as assembling cross-functional teams, selecting the right LLM, refining a data strategy and ensuring ethical AI practices. In the meantime, check out these valuable resources:
- Transform your organization’s data management experience. Sign up for a private preview of CLAIRE GPT, our AI-powered data management tool that uses generative technology.
- The cloud data management conference of the year is hitting the road. Collaborate and learn about generative AI and other trends at Informatica World Tour. | <urn:uuid:f0d4c434-893d-4077-9a6a-552675a9a4d7> | CC-MAIN-2024-38 | https://www.informatica.com/blogs/how-to-successfully-navigate-the-generative-ai-frontier.html | 2024-09-13T02:10:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00323.warc.gz | en | 0.913867 | 1,482 | 2.5625 | 3 |
Anytime disaster strikes, emergency communications are of utmost importance, and that proved especially true on Sept. 11, 2001, when 343 firefighters lost their lives responding to the aftermath of terrorist attacks on the World Trade Center in New York City. And while little progress has been made in the last 13 years to improve communications for first responders, a new project led by the Department of Homeland Security with research by the Pacific Northwest National Laboratory hopes to leverage the Internet of Things to provide the life-saving communications capabilities our nation’s heroes need.
Earlier this summer, DHS introduced the Responder Technology Alliance, established by the First Responders Group within its Science and Technology Directorate.
“Essentially its purpose is to try to enable what we’re characterizing as the responder of the future, so that that responder is fully aware and able to sense both hazards and opportunities and is capable of responding to those in a manner that is safer than they are today,” said Steve Stein, director of PNNL and project lead for the lab. “It’s trying to envision that future and then developing solution sets that are essentially an integrated solution set that provides greater protection, greater awareness, better communication capabilities, particularly in those places where you’re not GPS enabled, and it enables them to do their jobs better.”
Under that larger program to advance the development of technologies critical to emergency response, PNNL recently submitted a request for information about a project called the Responders Integrated Communication System. Whereas emergency responders now have a collection of independent items like protective equipment and power systems with which they do their jobs, DHS and PNNL hope to bring those together in an interconnected environment.
Stein said they are looking for an “integrative piece, that integrative element.” What they found, he said, is that “communications is clearly the piece that is the integrative tissue.” The initial step is to give responders the ability to communicate more efficiently with their own devices. So, if a fireman had an oxygen tank, a digital meter could relay the tank’s level.
After that, DHS and PNNL want the system “to be able to communicate back to a central command function, which may be a fire truck or a cruiser or a, if it’s a big event, a command location,” Stein said. “And so now we’re talking about systems that are Wi-Fi- and/or satellite-connected, or in other ways connected, to their vehicles that have the ability to boost a signal and send it even farther so that the responder now becomes not only somebody who is doing a job, but now is also becoming a sensor. You can envision this in grand terms as the Internet of Things.” Stein said they hope to leverage the work of the National Institute of Standards and Technology’s FirstNet to boost network operations during emergencies.
The RFI lists several mainstream technologies that the system must include, like voice-activated two-way communication, Blue Force tracking, mobile use, noise canceling and so on.
While many federal agencies have tried to produce similar projects dependent on reaching deep into the pocket of taxpayers, the Responders Integrated Communication System is simply a vision with the rest left up to the private sector to run with.
“When we looked at all of this, our contention was that we didn’t need the money to actually build and enable this future. What we really needed to do was build a vision and build out the requirements for development so that everybody could understand what they are and, to the extent that we can get support for it, we get everybody to buy in,” Stein said. “Once that’s in place, industry can come forward and say, ‘We can build that piece, and we see the market and we see the ancillary markets. We really don’t want to pollute our intellectual property by taking federal money, so we’re going to build that component.’ And if we can get them in the door, we can understand what they’re gonna do, we can actually leverage the investments that they’re making, which means the federal government makes a smaller investment, industry makes a bigger investment, but industry actually captures the market and actually moves it to market more rapidly.”
That means instead of a traditional procurement, the contract winner will own the rights to the final product and be free to do as it pleases. Essentially, DHS and PNNL are trying to build a successful set of specifications based around existing technology and on the expertise of first responders and then create a market around that.
“We’re looking to the private sector as the engine. We want to enable that engine, and to be brutally honest, we want to stay out of their way,” Stein said. “This isn’t a place where we need to invent. This is a place where industry is absolutely capable and they’re more capable of doing it than we are.”
DHS has the vision, PNNL has the technical knowledge and the private sector has the power to make it happen in a rapid manner, Stein said. In the next few weeks, DHS will release a request for proposal to procure and experiment with example solutions from the private sector to help them move forward with a model that is market-ready. The initial solicitation lists June 2015 as a deliverable date.
“The key in this to me is that this program is successful if it stimulates the private sector to create things the responder community needs in an integrated fashion,” Stein said. “If we can do that, this is a great success.”
DHS did not respond to requests for comment by publication. | <urn:uuid:068efff4-7331-49d1-8b92-0f9eb63775f9> | CC-MAIN-2024-38 | https://fedscoop.com/dhs-pursues-post-911-first-responder-communication-system/ | 2024-09-15T14:26:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00123.warc.gz | en | 0.959987 | 1,203 | 2.953125 | 3 |
What Is Demand Forecasting?
Demand forecasting is the process of predicting future customer demand for a product. It can be based on historical data, sales trends, and other factors closely related to the business.
Types of Demand Forecasting
1. Passive Demand Forecasting
Passive demand forecasting relies purely on historical sales data to project future demand. In this technique, future sales are expected to follow the same patterns as past sales, so it is common in firms with stable sales histories and large stocks of such records. Nonetheless, this method has several limitations: external events such as economic shifts or market trends can undermine its reliability in dynamic environments.
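As a minimal sketch of the passive approach, a moving-average forecast projects the next period purely from recent history. The window size and sales figures below are illustrative.

```python
# A minimal sketch of passive forecasting: project the next period's demand
# as a simple moving average of recent history. Figures are illustrative.

def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` periods."""
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(history[-window:]) / window

sales = [100, 120, 110, 130, 125]          # units sold per month
forecast = moving_average_forecast(sales)  # mean of the last three months
```

Note how the forecast reacts only to past sales, which is exactly why this method struggles when external conditions shift.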
2. Active Demand Forecasting
In contrast to passive forecasting, active demand forecasting considers multiple factors, including market research, marketing strategies, and external economic indicators. It is helpful for fast-growing enterprises or firms operating in highly competitive markets, as it helps them understand the changing dynamics of consumer demand across their product lines.
3. Short-Term Demand Forecasting
This involves looking at demand over a short period, usually from a few weeks to one year. It is particularly relevant for businesses dealing with seasonal items or just-in-time (JIT) stock management systems. Short-term forecasts allow companies to manage inventories well enough to cope with rapid changes in the marketplace. As businesses analyze recent sales data, timely adjustments can be made to meet immediate customer needs without wasting resources.
4. Long-Term Demand Forecasting
Long-term demand forecasting considers demand for a longer period, typically over one year. This is crucial in strategic planning since it helps firms anticipate future expansion, capacity requirements, and seasonal patterns. Generally, long-term projections are based on wider market studies encompassing demographic shifts and economic indicators, just to mention a few. Thus, this process ensures that the company’s plans are made regarding expected changes in the external environment.
5. Internal Demand Forecasting
Internal demand forecasting entails an assessment of a firm’s internal capacity to satisfy forecasted demand levels. It involves determining whether a company can grow its operations, workforce, and production facilities in response to growth estimates made by the marketing department. When faced with changing consumption patterns, organizations need to know their readiness level; hence the importance of internal demand forecasting. This process reviews a firm’s assets and constraints, pointing out bottlenecks that should be addressed to improve operational efficiency.
6. External Macro Forecasting
This kind of prediction refers to examining outside changes such as economic situation, industry trends, and consumer behaviors that may affect demand. External macro forecasting helps firms understand their business environments, including market dynamics. Consequently, they can adjust their strategies by identifying possible opportunities and threats these companies face.
What Factors Impact Demand Forecasting?
- Economic Environment: Demand predictions depend on an economy’s overall conditions. Indicators such as GDP growth, unemployment rates, inflation, and consumer confidence directly affect purchasing power. For example, economic growth usually means more disposable income for households, resulting in higher demand for goods.
- Competition: The level of competition in the market significantly determines demand for a product or service. High competitiveness may reduce the demand for a certain product because customers have more options available. Businesses should adjust their forecasts based on competitors’ pricing, product offerings, and marketing strategies. These factors can also shift customer preferences and market share, making competitive analysis an essential input to demand forecasting.
- Consumer Trends: Shifts in consumer tastes and fashions can radically influence consumption patterns. Changing tastes, lifestyles, and buying habits often increase or reduce demand for particular products. For instance, the rise of eco-friendly products driven by sustainability concerns has reduced reliance on traditional alternatives.
- Price: Price is a crucial determinant of quantity demanded. Typically, as the price rises the quantity demanded decreases, and as the price falls the quantity demanded increases. This relationship is vital to companies when setting prices and forecasting demand.
- Product Availability: Availability significantly affects product demand. Shortages can trigger an enormous temporary surge in demand as people rush to buy whatever is left. Conversely, when a product is widely and consistently available, demand tends to stabilize or even decline.
- Seasonality: Most products have seasons during which they tend to be bought more than others at different times throughout any given year. For instance, holiday items experience increased demand at times of the year. Businesses must recognize these seasonal patterns to make correct demand forecasts based on peaks and troughs throughout the year.
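The price relationship described above is commonly quantified as price elasticity of demand: the percentage change in quantity demanded per percentage change in price. A small sketch, using the midpoint (arc) formula with illustrative numbers:

```python
# A sketch of the price-demand relationship via price elasticity of demand.
# The midpoint (arc) form is used so the result does not depend on the
# direction of the change. Quantities and prices are illustrative.

def price_elasticity(q_old, q_new, p_old, p_new):
    """Arc elasticity: %-change in quantity over %-change in price."""
    pct_q = (q_new - q_old) / ((q_new + q_old) / 2)
    pct_p = (p_new - p_old) / ((p_new + p_old) / 2)
    return pct_q / pct_p

# A price rise from 10 to 12 that cuts demand from 100 to 80 units:
e = price_elasticity(q_old=100, q_new=80, p_old=10, p_new=12)
# |e| > 1 here, so demand for this product is price-elastic.
```

An elasticity with magnitude above 1 signals that pricing decisions will move demand strongly, which matters when converting a price change into a revised forecast.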
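Seasonal patterns like those just described can be captured even by a very simple baseline, the "seasonal naive" forecast, sketched here with made-up quarterly figures:

```python
# A sketch of a "seasonal naive" baseline: each future period is forecast
# to equal the same period one full season earlier. The quarterly figures
# below are made up, with a Q4 holiday peak.

def seasonal_naive(history, season_length, horizon=1):
    """Repeat the value observed one season before each forecast step."""
    return [history[-season_length + (h - 1) % season_length]
            for h in range(1, horizon + 1)]

quarterly = [80, 95, 120, 200,    # year 1
             85, 100, 130, 210]   # year 2
forecast = seasonal_naive(quarterly, season_length=4, horizon=4)
# → year 3 is forecast to repeat year 2: [85, 100, 130, 210]
```

Simple as it is, this baseline correctly reproduces the holiday peak, which a plain moving average would smear out.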
Demand Forecasting Trends
Demand forecasting is changing very quickly following technological advancement and market dynamics. Here are some main trends in demand forecasting for 2024:
- Integration of AI/ ML: These technologies can analyze big data, reshaping demand forecasting by predicting patterns and identifying trends. This assistance boosts the accuracy of forecasts by continuously learning from new data and adjusting predictions accordingly. Companies are increasingly using AI-based solutions to automate their traditional forecast tasks.
- Utilization of Big Data Analytics: Big data analytics is also becoming indispensable in demand forecasting. Combining sales history with other relevant datasets enables firms to better understand the factors driving demand, improving the precision of forecasts and their adaptability to evolving customer tastes. It also makes it easier for businesses to sense changes in demand and respond rapidly.
- Cloud-Based Demand Planning Solutions: Cloud technology is significantly enhancing the accuracy of predicting demands through real-time data sharing and collaboration at different sites. In addition to being scalable enough, cloud-based platforms help in fast adjustment when it comes to either market or supply chain shifts, hence ensuring accurate forecasts alongside optimized inventory management.
- Focus on Sustainability and Ethical Practices: Sustainability factors are increasingly included in demand forecasting as consumers become more environmentally conscious. Companies consider how green practices and sustainable products can impact consumer behavior, and then align their forecasting strategies with sustainability objectives.
- Increased Collaboration Across Departments: To ensure accurate predictions, organizations facilitate the interaction of diverse functions such as marketing, finance, and operations during the forecast period. This ensures that all relevant information may influence predictions, leading to better results.
- Emphasis on Agility and Flexibility: Due to rapid market changes, today’s demand forecasting paradigm has changed towards agility and flexibility. Responding by adopting more flexible techniques within their processes, they adjust forecasts based on real-time feedback and data to remain competitive within a constantly changing business environment.
Why is Demand Forecasting Important to Businesses?
- Supply Chain Efficiency: Effective demand forecasting helps businesses order supplies and stock at the right time, enhancing overall supply chain management. This anticipation avoids hurried last-minute orders and ensures that inventory arrives when it should, minimizing disruptions.
- Optimized Inventory Management: By predicting future demands, businesses can maintain optimal stock levels, thus avoiding overstocking or stockouts. This balance assists in reducing carrying costs and ensuring that customers can get products when they want them.
- Cash Flow Assurance: Understanding demand patterns helps a business predict financial fluctuations and, hence, avoid cash flow crises. This financial foresight is vital for maintaining operational stability.
- More Accurate Budgeting: Demand forecasts give an idea of future sales, which helps companies develop better budgets. Consequently, they allow for better resource allocation for marketing, staffing, and operations, among other things.
- Optimized Sales and Operations Planning (S&OP): Demand forecasting also aligns departments across the organization, leading to better communication and higher operating efficiency. This collective approach ultimately increases net profit.
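One way forecasts feed the inventory decisions above is through a reorder point with safety stock. The formula is a standard textbook one; the demand figures and the service-level z-score below are illustrative assumptions.

```python
# A sketch linking demand forecasts to inventory decisions: a reorder point
# with safety stock. The z-score of 1.65 targets roughly a 95% service
# level; all demand figures are illustrative assumptions.

import math

def reorder_point(daily_demand, lead_time_days, demand_std, z=1.65):
    """Reorder when stock falls to expected lead-time demand plus safety stock."""
    safety_stock = z * demand_std * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock

rop = reorder_point(daily_demand=40, lead_time_days=4, demand_std=10)
# 40 * 4 = 160 units of expected demand, plus 33 units of safety stock.
```

The better the demand forecast (smaller `demand_std`), the less safety stock is needed, which is exactly the carrying-cost saving described above.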
Demand forecasting is an essential process for any business. It provides insight into future consumer demand, enabling optimized inventory management, streamlined production processes, and improved customer satisfaction. Companies use historical data, market trends, and advanced technologies such as AI and machine learning to make informed choices that enhance operational efficiency and strategic planning.
Hyas Blog | Cyber Adversary Infrastructure Explained
Cyber threat actors rely on infrastructure that is hidden from anyone not looking for it. Revealing such frameworks shines a light on how cyber adversaries operate.
- Cyber attacks don’t happen in a vacuum: Threat actors require complex infrastructure to deploy malware and ransomware, carry out phishing campaigns, and conduct attacks on supply chains.
- Cyber adversary infrastructure is hidden from those who don’t know how to look for it. But illuminating this infrastructure is the key to understanding how cyber attacks work.
- The right kind of cyber protection solution stops threat actors early enough in the kill chain, before company-damaging steps occur, rendering the attack futile. To achieve this, organizations must ensure their protection addresses the constantly changing and evolving techniques of nefarious actors by targeting their attacks at the source: the infrastructure behind them.
Leading an up-and-coming cybersecurity organization teaches you a few things — such as how cyber adversaries are able to conduct a growing number of cyber attacks.
Three of the Fortune 5 (plus a number of Fortune 500 companies), as well as critical infrastructure operators around the world, use solutions built by HYAS. The reason is that they understand the importance of cybersecurity in a rapidly changing cyber landscape. There are more threats now than there have ever been before — an upward trend that shows no signs of slowing down.
Most major threats to individuals, businesses and countries require adversary infrastructure. Cyber attacks ranging from malware and ransomware to phishing and supply chain attacks can cause devastating impact that ripples through society, and they all require communication with adversary infrastructure. Even new malwareless attacks require external communication with adversary infrastructure. Stopping attacks early enough in the kill chain requires both the visibility into this communication combined with a strong understanding of what is, and isn’t, adversary infrastructure.
What Is Cyber Adversary Infrastructure?
Everyone — both professionally and personally — receives phishing emails, which contain links that can cause harm if clicked on. Phishing attacks have been a threat for over two decades. More recently, we’ve all received examples of these on our mobile devices as well, a variant commonly called smishing (SMS phishing).
Behind these simple emails and text messages with malicious links is a domain or some sort of infrastructure whether virtual or physical, and frequently a copycat website. Messages we receive may purport to come from an established bank, but that could be because bad actors have set the website up to look identical to the bank’s actual landing page, so that when someone clicks on the link, they’re none the wiser.
This is adversary infrastructure, and it’s set up in advance of a victim clicking a link to take advantage once they do.
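One hedged illustration of how such copycat domains can be flagged programmatically is to measure the edit distance between a queried domain and the brands it may be imitating. The domains and threshold below are fabricated for the example.

```python
# One way copycat domains can be flagged: measure the edit distance between
# a queried domain and brands it may be imitating. The domains and the
# threshold here are fabricated for illustration.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def looks_like(domain, brands, max_distance=2):
    """Flag domains within a small, nonzero edit distance of a known brand."""
    return any(0 < edit_distance(domain, b) <= max_distance for b in brands)

suspicious = looks_like("examp1e-bank.com", ["example-bank.com"])
```

Here the digit "1" standing in for a letter "l" puts the lookalike one edit away from the real brand, which is exactly the kind of near-miss a victim's eye glosses over.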
First They’re Walking, Now They’re Talking
Malware, ransomware, and supply chain attacks all follow the same logic: a bad actor plants a digital “spy” inside an enterprise, but still requires adversary infrastructure to be set up in advance.
The initial infection could be via an email, on a physical USB stick, or through a compromised password, but the intention and outcome are the same: Once the spy has been planted, they can “walk around” the enterprise digitally — a process called lateral motion or movement, look for data to steal, and then escalate their attack.
How are threat actors able to move laterally via these planted spies and ultimately both exfiltrate and encrypt data? They communicate via instructions sent to the digital spy. The spy inside the compromised enterprise is on one end of this communication, and at the other end is the command-and-control (also known as C2) adversary infrastructure set up in advance by the threat actor.
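A simplified sketch of what blocking this command-and-control communication can look like at the DNS layer follows. The denylist entries are fabricated, and real solutions are far more sophisticated than a static list, but the principle is the same: if the spy cannot reach its C2 infrastructure, the attack stalls.

```python
# A simplified sketch of blocking command-and-control traffic at the DNS
# layer: outbound lookups are checked against known adversary domains and
# their parents. Denylist entries are fabricated for illustration.

KNOWN_C2 = {"c2.badactor.example", "badactor.example"}

def parent_domains(name):
    """Yield a hostname and each of its parent domains (excluding the TLD)."""
    parts = name.lower().rstrip(".").split(".")
    for i in range(len(parts) - 1):
        yield ".".join(parts[i:])

def allow_lookup(name, denylist=KNOWN_C2):
    """Permit a DNS lookup only if no level of the name is denylisted."""
    return all(d not in denylist for d in parent_domains(name))

blocked = not allow_lookup("beacon.c2.badactor.example")
allowed = allow_lookup("www.example.com")
```

Checking every parent domain matters because adversaries routinely rotate subdomains under infrastructure they control.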
Locating and Uncovering Threat Actor Infrastructure
The right security solution can identify the attack infrastructure as it is being built, giving organizations immediate visibility into what is going to happen. They can then take proactive action or otherwise block communication with this adversary infrastructure, perhaps even before the actual attack is launched, to render the eventual attack inert and protect themselves. HYAS solutions are predicated on this thesis. Our solutions collect unique, authoritative, and bespoke data from a variety of different sources and organize it into a graph database. This is special in two ways:
While it might sound fanciful, HYAS has data that other organizations — including cybersecurity-focused organizations — don’t have. A demonstration from HYAS will illustrate this. HYAS builds correlations and combinations between all the data points in the graph database which drive intelligence and ultimately decisions that link what has happened to what is happening now and what will happen in the future.
HYAS is fully compliant with the General Data Protection Regulation (GDPR), and our data collection and graph database is underpinned by a targeted understanding of how cyber adversary infrastructure is set up, and therefore from where the data needs to be gathered.
Anyone can detonate a piece of malware and figure out that particular version of malware’s command-and-control, and ultimately update an allow-and-deny list. But only HYAS knows what else that domain may be connected to in the graph database, and thus has the ability to not just update a risk score and verdict for a particular domain, but for a set of interconnected domains and infrastructure across a complete campaign.
HYAS Is Changing the Game
Once you understand the nature of cyber adversary infrastructure, you know how to tackle it. No matter how malware gets into a network, the HYAS solution can detect, identify, and block it before damage can occur.
HYAS gives our clients and partners visibility. Data is power. They gain real-time information — along with improved speed and effectiveness — not just about threat detection, but how to enable true business resiliency.
HYAS is changing the way the market thinks about cyber defense and offense. With HYAS, you can spot telltale signs of attacks early in the kill chain, before adversaries explore your organization and cause damage. Bad actors constantly change their tactics, but their infrastructure is recognizable, so organizations can stay ahead of threats, preempting attacks with proper protection. They are then able to focus on business operations and resiliency, instead of worrying about the potential damage that could occur.
Don’t wait to protect your organization against cyber threats. Move forward with HYAS today. | <urn:uuid:dd3de7fc-571e-4281-9f8f-198b19728a04> | CC-MAIN-2024-38 | https://www.hyas.com/blog/cyber-adversary-infrastructure-explained | 2024-09-16T19:03:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00023.warc.gz | en | 0.942518 | 1,314 | 2.625 | 3 |
Payment Security 101
Learn about payment fraud and how to prevent it
Malware statistics in 2022 have demonstrated serious financial losses for organisations and individuals. Malware, also known as malicious software, is designed to damage or disable computers and computer systems.
The malicious software can be spread in a number of ways, including email attachments, file sharing, and malicious websites. There are a variety of different types of malware, including viruses, Trojans, spyware, and adware. Malware can have a number of serious consequences for businesses, including data loss, system damage, and financial loss.
Cybercriminals often use malware to steal sensitive information or to extort money from businesses. It is important for CFOs and security professionals to be aware of malware in its many forms, and to put safeguards in place to protect computers and networks. So, what do we know about malware in 2022? Keep reading for the latest statistics.
Malware threats in the mobile space continue to grow year-over-year, with no end in sight. This is a trend that is only going to continue, as more and more people use their mobile devices for everything from banking to shopping to social media.
Malware is a type of software designed to harm or disrupt computer systems. The malware problem has become increasingly serious for Mac users in recent years. Security incidents can lead to the disclosure of confidential information, financial loss, and legal liability.
Trojan horse malware is a particularly insidious type of malware that masquerades as a benign program or file to access a computer system. Once it has gained entry, it can be used to damage systems, steal data, or allow someone else to take control.
While Trojan horses are typically spread via email attachments or malicious websites, they can also be spread by worms. BitDefender, the anti-virus company, states that Trojan horses are becoming more common.
Malware statistics demonstrate that attacks are common and can spread from business to business. Norton reports that hundreds of organisations were hit with targeted attacks originating from a range of countries. The United States was the most affected country, with approximately 303 attacks between 2015 and 2017.
All it takes for a Trojan to activate is the click of a button. One of the most destructive attacks began when recipients received an email with the text "ILOVEYOU" attached. Opening the attachment ran a malicious script that mailed a copy of itself to every address in the user's contact list and overwrote files on the infected machine.
Adware and malware are a constant threat to computer users. While many free programs can detect and remove these malicious files, they can be difficult to spot.
It is estimated that the average time between the exposure of a vulnerability and the creation of an exploit is 6.8 days. This means that an unprotected computer is likely to be attacked within an hour after being connected to the Internet.
The Zeus/Zbot malware package is a client-server program, with deployed instances calling back to their home base, the "Zeus Command & Control" centre. It is also considered one of the top malware threats of 2022.
The Zeus software allows hackers to access your computer and loot its information. The estimated number of infected computers in America alone surpassed 3 million, including machines owned by NASA and Bank of America as well as various departments within the Department of Transport.
On May 12, 2017, a massive ransomware attack known as WannaCry began spreading across the globe. Within hours, 100,000 groups in 150 countries had been infected, with a total of 400,000 machines impacted.
A computer virus is a type of malware that is designed to spread from one computer to another. One of the ways that businesses can be exposed to the computer virus is through email. It only takes one person to open an email attachment or click on a malicious link to unwittingly infect their computer.
Malware attacks can occur on any device: mobiles, computers, or even printers. Even with the defensive systems built into its operating systems, the Mac is still being attacked with updated malicious software.
This is largely because .exe files can be executed automatically by many programs, whereas .pdf files require the user to take action to open them. As a result, it is important to be cautious when opening any type of file from an untrusted source.
Computer viruses are a serious threat to businesses and CFOs. More than 6000 new viruses are created every day, and old ones are continually evolving to become more sophisticated and more difficult to detect.
To boost security, businesses often test their security systems to make sure they are defended against sophisticated targeted attacks. The social media giant Facebook is offering a hefty reward to anyone who can find a vulnerability in its system. In doing this, Facebook is hoping to encourage people to report any potential security risks before they can be exploited.
It has been reported that employees are now spreading malware to other workers through various means. This may be because phishing attacks have become more sophisticated, while the distractions of working from home can lead people to behave less carefully online.
Trojans, also known as Trojan horses, are malicious programs that disguise themselves as legitimate software to trick users into installing them.
Kaspersky’s products and technologies help to protect users from trojan attacks by detecting and blocking the malware before it can infect their system. With the number of trojan infections increasing, businesses and individuals must be cautious as infections are commonly spread through email attachments or infected websites.
According to Kaspersky, trojan attacks increased by 3.5 per cent in the past year. This malicious software has spread across the globe attacking locations like China, Bangladesh and more. No country is safe, therefore cybersecurity must be a number one priority.
A common objective for cybercriminals is to attain financial gains. Their strategy when attacking the financial industry is by using a Trojan horse to gain access to sensitive banking information. CFOs are hit hard when these attacks occur and can have a serious financial impact on the business.
When it comes to malicious software, Trojans are the most common: roughly six in ten pieces of computer malware fall into the Trojan horse category. Attackers are also becoming more and more experienced at targeting different sectors of business.
According to the Hub Security 2021 report, Trojan malware accounted for 70% of the 52% of attacks that targeted the Financial & Insurance sector.
Ransomware attacks and employees accessing sensitive information from their mobile devices pose a risk to company data. To protect company data, it is important to have resources available and to avoid paying ransom.
CFOs are under constant threat from cyber attacks, and one of the most common and dangerous is spyware: a form of malware installed via email or a website on a CFO's or employee's computer without their knowledge.
From 2008 to 2019 malware infections saw a significant increase from 12.4 million to 812.67 million according to Purplesec. Viruses and infections are adapting to the modern business landscape with software types becoming smarter and harder to detect.
CFOs in small and medium-sized organisations are all too familiar with the high cost of downtime caused by malware security breaches. A recent study found that malware now accounts for 40% of all security downtime costs across all industries, and the problem is only getting worse.
Stalkerware can be installed on a victim’s phone without their knowledge and used to track their movements, listen to their conversations, and even remotely activate the camera. Stalkerware can be used in offices to steal confidential information or gain access to critical data.
Organisations can prevent stalkerware from being installed by deploying security applications that scan for malware and stalkerware apps.
Evaluating the true cost of digital fraud involves a combination of factors. Beyond the loss of data, disruption of business, and reputational cost, the financial damage and the hit to cash flow weigh heavily on organisations and greatly impact CFOs.
Microsoft Corp. and the Washington state attorney general filed lawsuits against anti-spyware vendor Secure Computer LLC, claiming that its Spyware Cleaner product not only fails to remove spyware as advertised but also makes changes to the user's computer that leave it more vulnerable.
Unfortunately, there are scammers and cybercriminals who try to get you into their fake antivirus software with a seemingly genuine security warning. Once installed, the software could compromise your computer giving the scammer access.
The website of the newspaper Minneapolis Star Tribune served ads created with malicious intent, directing users to a fake website where a pop-up informed them that they had been infected.
Office Depot and its tech support vendor agreed to a $35 million settlement for deceiving customers into downloading a free PC Health Check program. In this case there was no malware involved; the software, which did not function as claimed, was used to drive sales for the tech support vendor.
According to a report by Krebs on Security, ChronoPay, an internet payment service provider, was exposed as owning scareware companies and paying for their domain names and other operations. The leaked records also show how diligently ChronoPay worked to sustain these unethical business ties.
In the late 1980s, the notorious "Morris Worm" infected roughly 1 in 10 internet-connected computers of the time. Thousands of other worms have emerged since, though none have matched the Morris worm's share of connected machines infected.
Bot worms are designed to turn computers into zombies or bots, which can be used in coordinated DDoS attacks through botnets. Conficker's 2008 outbreak infected millions of computers, creating vast botnets for malicious purposes.
Like computer viruses, bot worms are malware infections capable of duplicating themselves and spreading between computers. This can be very troublesome for organisations, as a typical bot attack can last for weeks or months.
The Stuxnet computer worm was discovered in 2010 and created by the United States and Israel in order to target the Iranian nuclear power plant. The computer worm was successful and caused billions of dollars in damages by crashing 984 centrifuges in the facilities of the power plant, setting back production capabilities by 2 years.
Malware created in 2007, called Storm Worm, sought to take advantage of people’s fear and panic during times when they are most vulnerable. The email virus tells recipients that their computers have been taken over by hackers, who will then demand money or passwords for access. If no response is received within 24 hours, the hackers will threaten legal action.
Phatbot is a computer worm that has been known to cause extensive damage to systems it infects. The worm spreads by taking advantage of security vulnerabilities in Windows systems, and once it has infected a system it can give the attacker complete control over the PCs and devices.
The Flame virus, a computer worm created as part of an international cyber program, shares many similarities with the Stuxnet worm. It was designed to disrupt Iran's nuclear weapons program by infecting thousands of computers, causing billions of dollars in damage, and it continued to spread across the Middle East after its initial release.
There are several types of computer worms, including email worms, instant messaging worms, file sharing worms, and internet worms. The SQL Slammer was a notably destructive worm that targeted Microsoft SQL Server in 2003, infecting approximately 75,000 hosts around the globe.
There are many different types of malware, but the most common form is a virus. A virus is a type of malware that replicates itself and spreads to other computers. Virus infections can cause a variety of problems, from pop-ups to serious data loss.
A computer virus is a type of malicious code that is designed to replicate itself and spread from one computer to another. Virus infections can be caused by opening emails or attachments, downloading infected files, or visiting websites that contain malicious code. Much like other forms of malware, viruses can be spread through physical devices, such as when an infected USB drive is plugged into a computer.
According to Statista, Iran led in mobile malware infections in 2021, followed by Saudi Arabia, China, Algeria, India, Malaysia, Ecuador, Brazil, Nigeria, and Bangladesh.
While it is difficult to estimate how many PCs are infected with malware at any given time, some studies suggest that one in three computers worldwide may be infected. This means that millions of people are at risk of having their personal information stolen or their systems damaged by malware.
Malware is a type of software designed to damage or disable computers. It can also be used to steal personal information, such as credit card numbers.
A VPN, or virtual private network, is a type of security software that encrypts your internet traffic and routes it through a secure server. This has the effect of making it much more difficult for hackers to intercept your data or track your online activity. However, a VPN cannot directly protect you from malware.
Essentially, a VPN cannot stop malware. That’s why it’s important to take precautions and maximise your security controls to minimise the risk of malware.
In today’s digital age, antivirus programs are an essential part of maintaining a secure computer. However, many people wonder whether these programs can really protect against all cyber threats. The answer is that no single program can offer 100% protection. However, a good antivirus program can provide a high level of security.
Eftsure provides continuous control monitoring to protect your eft payments. Our multi-factor verification approach protects your organisation from financial loss due to cybercrime, fraud and error. | <urn:uuid:ccf613ac-a898-43b1-b549-7b06da2f0f33> | CC-MAIN-2024-38 | https://eftsure.com/en-nz/statistics/malware-statistics/ | 2024-09-20T10:21:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00623.warc.gz | en | 0.961626 | 2,862 | 2.9375 | 3 |
This post will explain the Agile development methodology, its performance, what types it has, and other essentials.
Many experts, resources, documents, and meetings are required for even a small software development project. The question is: how do you forecast and resolve the potential issues that will probably arise?
Agile methodology and its frameworks
For these purposes, IT organizations use agile software development methodologies, which speed up software delivery while remaining adaptable to change. In this post we'll take a closer look at the Agile methodology, its approaches and practices, and other essentials.
The Agile Methodology: Notion, Benefits, and Core Ideas
The Agile Manifesto defines Agile as a set of standards whose main aims are to make continuous improvements and provide a functioning product as soon as possible. This IT development methodology has many undeniable benefits that will be a plus for both customers and software building teams. We’ll list the most common Agile advantages below:
Effective risk mitigation and control. Because Agile concentrates on iterative delivery, there is always the opportunity to debug and enhance the performed project part after each iteration.
Improved customer satisfaction. Customers participate actively in an Agile-based building process, whereas in a traditional development flow they are typically involved only during the planning stage.
Improved control. Project managers have better influence over the project workflow because Agile projects are transparent. As a result, throughout the development lifecycle, quality is assured, and all interested parties are engaged in the process.
Precise metrics. Compared to traditional models, Agile software development approaches employ more relevant and accurate indicators (e.g., lead time, cycle time, weak spots identification, etc.) to assess project performance.
Superior quality. The development team can focus on quality and cooperation by breaking a large product down into smaller components (sprints, Kanban tasks). Furthermore, regular quality assurance testing and assessment after each iteration allow your team to identify and fix bugs early.
The vital advantages of Agile
The core ideas of all Agile frameworks are as follows:
- Create a proper atmosphere for the project’s development.
- Supply the most necessary tools.
- Everything may be altered at any time throughout the development flow.
- During the project development, customers and engineers must cooperate.
- Concentrate on the demands of the consumer rather than the instruments.
The above principles have helped to popularize Agile, resulting in the development of Scrum, Kanban, Lean, and other frameworks based on Agile concepts.
The Essence of Agile Frameworks
Agile frameworks support the software creation workflow using various development strategies. The frameworks differ from the Agile approach itself in that they are more formal and adhere to stricter norms.
Let’s discuss the most popular Agile frameworks.
Scrum relies heavily on customer engagement and meticulous planning. It’s an excellent choice for MVP development that needs regular enhancements.
Scrum functioning scheme
This Agile framework is grounded on sprints (defined time intervals) when developers create a specific product feature. Such sprints begin with planning and conclude with delivering a predetermined product part.
Crystal is the most basic and widely used Agile software development framework. It is more concerned with how team members interact than with tools and methods. Business priorities and the project's needs determine the strategic plan and the team composition.
Crystal concentrates on:
- Minimizing obstructions
- On-time project delivery
- Increased client engagement
Kanban focuses on task completion without a sprint backlog. The process works as follows.

Tasks to be done appear as cards on the Kanban board. As the team works through existing tasks, the Project Manager adds new task cards to the board. Stakeholders can see how long it takes to implement a feature by watching cards move across the board, which makes a project's duration predictable.
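As a rough illustration of the idea, a board can be modeled as a set of columns that cards move through, with timestamps recorded so that a card's cycle time (and hence a rough duration forecast) can be computed. This is a minimal sketch; the class and column names are invented, not any real Kanban tool's API.

```python
from datetime import datetime, timedelta

class KanbanBoard:
    """Minimal Kanban board: cards move left-to-right through fixed columns."""
    COLUMNS = ["To Do", "In Progress", "Done"]

    def __init__(self):
        # title -> {"column": index, "started": datetime, "finished": datetime|None}
        self.cards = {}

    def add_card(self, title, now):
        self.cards[title] = {"column": 0, "started": now, "finished": None}

    def advance(self, title, now):
        # Move the card one column to the right; record when it reaches "Done".
        card = self.cards[title]
        card["column"] += 1
        if self.COLUMNS[card["column"]] == "Done":
            card["finished"] = now

    def cycle_time(self, title):
        card = self.cards[title]
        return card["finished"] - card["started"]

board = KanbanBoard()
t0 = datetime(2022, 1, 10)
board.add_card("login form", t0)
board.advance("login form", t0 + timedelta(days=1))   # To Do -> In Progress
board.advance("login form", t0 + timedelta(days=3))   # In Progress -> Done
print(board.cycle_time("login form"))  # 3 days, 0:00:00
```

Averaging cycle times over completed cards is the simple way such boards let stakeholders predict how long the remaining work will take.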
What is The Agile Team Composition?
Roles in the Agile team might alter depending on the project size, its peculiarities, and the chosen framework. Each employee can take on one or more roles or responsibilities and change them at any moment.
We’ll enlighten the primary responsibilities of each team member on the example of Agile Scrum staff composition.
Agile Scrum staff composition
- The team lead supervises the team, provides the appropriate atmosphere and resources, and eliminates project barriers.
- The staff member is in charge of developing and delivering the project.
- The project manager presents the consumer’s ideas clearly and maintains the Product Backlog, being responsible for project execution.
- The stakeholders are clients, sponsors, future users, or anyone participating in the development process.
- The architect is in charge of the project’s architecture.
- The integrator is in charge of system integrations after sprints are finished.
The Agile Development Workflow
The Agile software engineering approach is iterative: the team moves on to the next iteration after each one, until all requirements are satisfied. Let’s look more closely at the key stages of the Agile development lifecycle.
Agile development workflow
Requirements. The specialists concentrate on the essential features that consumers utilize while eliminating rare ones.
Design. The designers construct a UI mockup to demonstrate an app prototype.
Development is about designing and building software in compliance with accepted specifications. It is the most time-consuming development phase because it is at the heart of the entire project.
Testing comprises QA testing as well as bug reports. The QA team runs tests to guarantee that the program is bug-free and consistent with the modifications made by the developers.
Deployment and implementation. The product is then put on servers and distributed to consumers for beta testing or actual usage. The final version is created after gathering initial feedback and fixing bugs.
Review and distribution. It’s time to show ready-made solutions to clients and stakeholders once all previous development phases have been completed.
Feedback. The team gathers feedback from consumers and utilizes it to enhance the solution in subsequent versions.
The Agile development methodology helps IT teams deliver projects to end users faster. Its adoption will go smoothly, however, only if you entrust it to a skilled software development company. Specialists will answer your pressing business questions and help you put Agile principles into practice to improve your company’s workflows.
What is global warming, and what are its effects on our planet? Global warming is the gradual increase in the atmospheric temperature of our planet, caused largely by human-produced greenhouse gases such as carbon dioxide and other pollutants, which contribute to climate change. If we do not take immediate action to curb greenhouse gas emissions, warming will become more severe over time, with devastating consequences for our planet.
How serious is global warming for our world and our people? The rapid rise in the Earth’s temperature is disrupting the natural climatic cycle, and the resulting shifts in ice and snow can be devastating in terms of both land-mass loss and sea-level rise.
How does global warming affect our natural climatic cycle? Human-produced greenhouse gases warm the atmosphere because their molecules absorb infrared radiation emitted by the Earth’s surface and re-radiate part of it back downward instead of letting it escape into space. In short, the more greenhouse gas accumulates in the atmosphere, the more heat is trapped and the higher the planet’s average temperature climbs.
What does this mean in practice, and can we stop it? The effects of global warming on our planet are devastating: rising temperatures increase the likelihood of severe droughts, which could lead to higher food prices, the collapse of crops and water supplies, reduced crop yields, and more frequent and destructive hurricanes and storms.
How can we reduce global warming and cut greenhouse gas emissions? There is currently no single “silver bullet”, but warming can be slowed significantly, and its impact reduced, if our leaders agree on effective policy. Some argue the best approach is simply to stop using fossil fuels such as coal and oil and to build new solar and wind power facilities. Significant environmental benefits can also be achieved at home, for example by installing energy-efficient windows, doors, and insulation and by replacing inefficient heating systems with more efficient models.
How do greenhouse policies help save the planet? By reducing the amount of greenhouse gases (GHGs) entering the atmosphere, they help limit rising global temperatures. Typical measures include switching to energy-efficient appliances and cars, improving recycling, planting trees and greenery, adopting alternative energy sources, and increasing a city’s overall energy efficiency. Such initiatives help slow global warming and make the world a safer place for future generations.
Greenhouse policies also make cities more environmentally friendly. Eliminating certain pollutants and increasing urban energy efficiency reduces the need for energy-intensive electricity and gasoline, while solar panels and electric cars make our cities greener and more sustainable.
Is global warming a problem, and does it have a solution? Global warming is real, and it is a problem. Most experts agree it is driven by increased greenhouse gas emissions, especially from burning fossil fuels, into our atmosphere. The solutions we have so far are not enough on their own, which is why a combination of approaches, from switching to solar and wind power to renovating homes for energy efficiency, is needed to protect our planet and our future.
Editor-in-Chief since 2011. | <urn:uuid:1e8b8897-6f02-4b88-98da-d682ee1d4504> | CC-MAIN-2024-38 | https://globalislamicfinancemagazine.com/what-is-global-warming/ | 2024-09-08T05:52:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00823.warc.gz | en | 0.932922 | 884 | 3.640625 | 4 |
Some wireless LAN applications may use multicast delivery of packets to and from wireless clients. For example, an application may stream (via multicasting) the same video signal over a wireless LAN to multiple wireless monitors (clients) located throughout an airport or hospital. With multicasting, each data packet is sent to a group of wireless clients simultaneously. This avoids the need to send each packet to every client one at a time, as is the case with the more common unicast packet delivery. As a result, multicasting is a much faster delivery mechanism when sending the same information to multiple clients.
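The time saving can be made concrete with a toy calculation (not tied to any particular product): unicast must transmit one copy of the frame per client, while multicast transmits a single copy that every group member receives at once.

```python
def delivery_time_ms(n_clients: int, frame_time_ms: float, multicast: bool) -> float:
    """Total airtime needed to get one frame to every client."""
    # Unicast: one transmission per client; multicast: one transmission total.
    return frame_time_ms if multicast else n_clients * frame_time_ms

# 20 wireless video monitors, 2 ms of airtime per frame:
print(delivery_time_ms(20, 2.0, multicast=False))  # 40.0
print(delivery_time_ms(20, 2.0, multicast=True))   # 2.0
```

The model ignores retransmissions, which is consistent with 802.11 behaviour: multicast data frames are not acknowledged, so they are never retransmitted.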
Do You Have Multicast Traffic?
As an end user of 802.11/Wi-Fi applications, you’ll likely have no choice over whether to use multicast or unicast packet delivery. Often, this level of implementation is fixed by the company offering the products and applications that you’re using. The way you configure the WLAN, however, will significantly impact the operation of multicast applications.
Before getting too far with optimizing the configuration of your WLAN for multicast traffic, first check to see if you’re even running any applications that make use of multicasting. Sometimes the user manual or product specifications may identify the use of multicasting — or you can record a wireless packet trace while the applications are running, and look for clues on whether any packets are being sent via multicasting. For instance, none of the 802.11 data frames corresponding with the multicast applications will have acknowledgments, which can be clearly seen by observing a few minutes of a packet trace.
If any of your applications use multicasting, then take a closer look at the configuration of the access points to maximize performance and battery life.
DTIM Interval Makes a Difference
When any single wireless client associated with an access point has 802.11 power-save mode enabled, the access point buffers all multicast frames and sends them only after the next DTIM (Delivery Traffic Indication Message) beacon, which may be every one, two, or three beacons (referred to as the “DTIM interval”). The DTIM interval can be set in the access point configuration. 802.11 power-save mode is optionally set by the user on the wireless client device, generally via the wireless client card’s configuration utility.
The use of DTIM intervals was put in the 802.11 standard to allow sleeping stations implementing the power-save function to know when to wake up and receive multicast traffic (i.e. after every DTIM beacon) and to allow flexible configuration of the network (i.e. DTIM interval) that offers a trade-off between battery life and performance. If none of the 802.11 radios associated with the access point has power-save mode enabled, then the access point will send multicast traffic immediately and will not wait until after the next DTIM beacon.
If you haven’t considered multicasting in the past, then the DTIM interval on your access points is probably set to the default value, which is likely one, two, or three. If set to one, the access point will deliver multicast frames after every beacon. Don’t forget, the significance of the DTIM interval only applies if at least one wireless client associated with the access point has 802.11 power-save mode enabled. A DTIM interval of one, two or three is good for performance, but it will likely hurt battery life.
I’ve found through testing that some wireless clients with power-save enabled will receive the last multicast frame sent by the access point and then continue staying awake until after the next beacon. In other words, the client radio stays awake for the entire beacon interval. If the beacon interval is set to 100 milliseconds (the common default value) and multicast traffic is occurring within each 100 millisecond interval (typical for many multicast applications), then the flow of multicast frames will probably cause the client radios to stay awake indefinitely. This draws battery power as if power-save mode was not enabled. Even with DTIM intervals of two or three, battery life may still suffer. With DTIMs occurring every other frame (DTIM interval = two), for example, the radio may be awake 50 percent of the time.
Now, you might have changed the DTIM interval in the past, but it may be too high. For instance, I’ve seen some DTIM intervals set to 30, which means that wireless clients with power-save mode enabled will not receive multicast traffic until after 30 beacon intervals. This might be great for battery life, but it only allows the delivery of multicast traffic every three seconds (assuming the default 100 millisecond beacon interval). The impact of this is highly application-specific, though. For example, the transmission of voice would likely have long and annoying skips/quiet intervals, but the delivery of text messages may not experience any impairment noticeable to humans.
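The arithmetic behind that three-second figure is simple: a power-saving client waits, at worst, the DTIM interval multiplied by the beacon interval before buffered multicast frames are released. A small sketch, assuming the common 100 millisecond default beacon interval:

```python
def multicast_latency_ms(dtim_interval: int, beacon_interval_ms: float = 100.0) -> float:
    """Worst-case delay before the access point releases buffered multicast
    frames: it holds them until the next DTIM beacon."""
    return dtim_interval * beacon_interval_ms

for dtim in (1, 2, 3, 30):
    print(dtim, multicast_latency_ms(dtim))
# DTIM 1 -> 100.0 ms; DTIM 30 -> 3000.0 ms, the three-second gap above
```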
What DTIM interval is best?
Unfortunately, there’s no straightforward answer to this. In order to optimize the DTIM interval, you should gain a good understanding of how your multicast applications work. You’ll probably not find this in user manuals, and even the vendor may not want to tell you much about it, to avoid disclosing company secrets. In these cases, you might need to do some testing.
Most likely, the best DTIM setting will be the one that provides maximum battery life while allowing delivery of multicast often enough to provide good performance. To do this, you should determine the highest DTIM interval that can be set and still allow the multicast applications to work properly. First, try setting the DTIM interval to one, and then use the application while monitoring quality of the transmission (such as voice quality). In terms of DTIM settings, this should offer the best quality. You can use this as a baseline for comparison purposes when setting the DTIM interval to higher values. Now, try increasing the DTIM interval to higher values while monitoring transmission quality. Keep increasing the DTIM interval until the quality is just above the minimum tolerable level. At this point, you’ll have the highest DTIM interval that you can use to maintain required performance while also offering the best power savings.
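That trial-and-error procedure amounts to a linear search for the largest acceptable DTIM value. The sketch below assumes two hypothetical hooks, `set_dtim_interval` and `measure_quality` (for example, a voice-quality score); neither is a real access point API.

```python
def find_best_dtim(set_dtim_interval, measure_quality, min_quality, max_dtim=10):
    """Return the largest DTIM interval whose measured quality stays acceptable.

    Starts at 1 (the best-quality baseline) and keeps increasing the interval
    until quality drops below the minimum tolerable level.
    """
    best = None
    for dtim in range(1, max_dtim + 1):
        set_dtim_interval(dtim)
        if measure_quality() >= min_quality:
            best = dtim   # still acceptable: better battery life, keep going
        else:
            break         # quality fell below the tolerable level; stop
    return best

# Example with a fake quality model in which quality degrades as DTIM grows.
qualities = {d: 5.0 - 0.5 * d for d in range(1, 11)}
current = {}
best = find_best_dtim(lambda d: current.update(dtim=d),
                      lambda: qualities[current["dtim"]],
                      min_quality=3.0)
print(best)  # 4 (quality at DTIM 4 is 3.0; at DTIM 5 it is 2.5)
```

In practice `measure_quality` would be a human listening test or an automated metric gathered while the multicast application runs.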
Keep in mind that DTIM settings will not affect any of the unicast traffic. It also won’t impact the multicast traffic if none of the wireless clients associated with the access point have 802.11 power-save mode enabled.
Also, as with other wireless LAN configuration settings, such as fragmentation and RTS/CTS, finding the optimum DTIM interval is generally a trial-and-error process. You might need to tinker around a bit to find the best configuration for your applications!
Jim Geier is the principal of Wireless-Nets, Ltd. (www.wireless-nets.com), an independent consulting firm assisting companies and cities with the implementation of wireless network solutions and training. He is the author of the books Deploying Voice over WLANs (Cisco Press), Wireless LANs (SAMS), and Wireless Networks – First Step (Cisco Press).
Article courtesy of Wi-Fi Planet
Scientists have developed tiny robotic tentacles that travel into the lungs to detect and treat cancer.
The device is just 2.4 mm in diameter and ultra-soft. It’s sent to the periphery of the lungs from the end of a bronchoscope — a thin tube with a light and camera.
During the journey, magnets adapt the robot’s shape to the body’s anatomy. As it moves, both its shape and position are fed back to a clinician. After reaching its destination, an embedded laser fibre can deliver localised treatment.
The robot was developed at the University of Leeds’ STORM lab, which tested the system on a cadaver. They found that it can travel 37% deeper than standard equipment, while causing less tissue damage.
“This new approach has the advantage of being specific to the anatomy, softer than the anatomy and fully shape-controllable via magnetics,” said Professor Pietro Valdastri, director of the STORM Lab, in a statement. “These three main features have the potential to revolutionise navigation inside the body.”
In time, the team hopes to transform the treatment of lung cancer. The disease has the highest worldwide cancer mortality rate, causing around 34,800 deaths annually in the UK alone. Currently, treatment is managed through invasive approaches, such as surgery, chemo- or radiotherapy.
In early-stage non-small cell lung cancer — which accounts for around 84% of cases — surgical intervention is the standard of care. Typically, this removes a large portion of lung tissue, which isn’t suitable for everyone and can harm pulmonary function.
The robot could provide a minimally invasive alternative. According to the researchers, the delivery method can reduce pain, discomfort, and recovery time, while improving precision and safety. It could also allow treatment to target only malignant cells, allowing healthy tissue and organs to continue normal function.
“Our goal was, and is, to bring curative aid with minimal pain for the patient,” said Dr Giovanni Pittiglio, the report’s co-author. “Remote magnetic actuation enabled us to do this using ultra-soft tentacles which can reach deeper, while shaping to the anatomy and reducing trauma.”
The team will now collect the data needed to start human trials.
You can read their open-access study paper in Nature.
Eco-Friendly Travel Tips for the Modern Explorer
In today’s world, where environmental concerns are at the forefront of global discussions, sustainable travel has become increasingly important. As more people seek to minimize their impact on the planet while exploring new destinations, eco-friendly travel practices have gained popularity. Whether you’re a seasoned adventurer or a first-time explorer, there are numerous ways to make your travels more eco-conscious. In this comprehensive guide, we’ll delve into eco-friendly travel tips for the modern explorer, offering practical advice and insights to help you minimize your carbon footprint and travel responsibly.
The Importance of Eco-Friendly Travel
With growing awareness of climate change and environmental degradation, travelers are increasingly conscious of the impact their journeys have on the planet. Eco-friendly travel is not just a trend; it’s a mindset and a commitment to sustainability. By making conscious choices and adopting responsible practices, travelers can minimize their environmental footprint and contribute to the preservation of natural resources and ecosystems.
Choosing Sustainable Transportation
Transportation is one of the largest contributors to carbon emissions, particularly in the travel industry. By opting for sustainable modes of transportation, travelers can significantly reduce their carbon footprint and support efforts to combat climate change.
- Embrace Public Transportation
Utilizing public transportation, such as buses, trains, or trams, is one of the most eco-friendly ways to travel. Public transit produces fewer emissions per passenger mile compared to individual car travel, making it a more sustainable option for getting around cities and regions.
- Choose Cycling or Walking
For short distances, consider cycling or walking instead of relying on motorized transportation. Not only are these modes of travel environmentally friendly, but they also allow travelers to immerse themselves in their surroundings and discover hidden gems off the beaten path.
Sustainable Accommodation Choices
Where you choose to stay during your travels can also have a significant impact on the environment. Fortunately, there are many eco-friendly accommodation options available for eco-conscious travelers.
- Stay in Eco-Friendly Hotels and Resorts
Look for hotels and resorts that prioritize sustainability and environmental stewardship. Eco-friendly accommodations may feature energy-efficient lighting, water-saving fixtures, and recycling programs to minimize their environmental impact.
- Consider Alternative Accommodations
In addition to traditional hotels, consider alternative accommodations such as eco-lodges, guesthouses, or vacation rentals that have eco-friendly practices in place. These accommodations often blend seamlessly with their natural surroundings and offer unique experiences for travelers seeking sustainable lodging options.
Responsible Tourism Practices
Responsible tourism is about making ethical choices that benefit both the environment and local communities. By adopting responsible tourism practices, travelers can support conservation efforts, preserve cultural heritage, and minimize negative impacts on destinations.
- Respect Local Cultures and Customs
Before visiting a new destination, take the time to learn about the local culture, customs, and traditions. Respect cultural norms, dress codes, and religious practices, and seek opportunities to engage with local communities in a meaningful and respectful way.
- Support Local Businesses
Supporting local businesses, artisans, and entrepreneurs is an essential aspect of responsible travel. By purchasing locally made products, dining at locally owned restaurants, and booking tours with local guides, travelers can contribute directly to the local economy and support sustainable livelihoods.
Frequently Asked Questions

- What is eco-friendly travel, and why is it important for modern explorers?
Eco-friendly travel refers to adopting sustainable practices and making conscious choices to minimize the environmental impact of one’s journeys. It’s important for modern explorers because it allows them to explore the world while preserving natural resources and ecosystems for future generations.
- How can I incorporate eco-friendly transportation into my travel plans?
You can incorporate eco-friendly transportation by opting for public transit, cycling, walking, or carpooling whenever possible. These modes of transportation produce fewer emissions compared to individual car travel, reducing your carbon footprint.
- What should I look for when choosing eco-friendly accommodations?
When choosing eco-friendly accommodations, look for hotels and resorts that prioritize sustainability and environmental stewardship. Features to consider include energy-efficient lighting, water-saving fixtures, and recycling programs.
- What are some alternative accommodation options for eco-conscious travelers?
Alternative accommodation options for eco-conscious travelers include eco-lodges, guesthouses, or vacation rentals that have eco-friendly practices in place. These accommodations often blend seamlessly with their natural surroundings and offer unique experiences.
- How can I practice responsible tourism during my travels?
You can practice responsible tourism by respecting local cultures and customs, supporting local businesses, and minimizing negative impacts on destinations. Engage with local communities in a meaningful and respectful way, and seek out opportunities to contribute positively to the places you visit.
- What are some examples of sustainable tourism initiatives?
Examples of sustainable tourism initiatives include eco-certifications for accommodations, carbon offset programs for travelers, and conservation projects in protected areas. These initiatives aim to minimize the environmental impact of travel and support local communities.
- How can I offset my carbon footprint when traveling?
You can offset your carbon footprint by investing in carbon offset projects or supporting initiatives that promote renewable energy, reforestation, or sustainable development. Many airlines and travel companies offer carbon offset programs that allow you to calculate and offset the emissions associated with your travel.
- What are some practical tips for reducing waste while traveling?
Practical tips for reducing waste while traveling include packing reusable water bottles, shopping bags, and utensils, and disposing of waste responsibly by recycling and composting whenever possible. Minimizing single-use plastics and opting for sustainable packaging can also help reduce waste.
- How can I support conservation efforts during my travels?
You can support conservation efforts during your travels by visiting destinations that are actively involved in conservation projects, participating in eco-tours or volunteering programs, and donating to organizations dedicated to environmental conservation.
Eco-friendly travel is not about sacrificing comfort or convenience; it’s about making conscious choices that benefit the planet and future generations. By choosing sustainable transportation, eco-friendly accommodations, and adopting responsible tourism practices, travelers can minimize their environmental impact while exploring the world. Together, we can pave the way for a more sustainable future of travel, where the beauty of our planet is preserved for generations to come.
Uma Rajagopal has been managing the posting of content for multiple platforms since 2021, including Global Banking & Finance Review, Asset Digest, Biz Dispatch, Blockchain Tribune, Business Express, Brands Journal, Companies Digest, Economy Standard, Entrepreneur Tribune, Finance Digest, Fintech Herald, Global Islamic Finance Magazine, International Releases, Online World News, Luxury Adviser, Palmbay Herald, Startup Observer, Technology Dispatch, Trading Herald, and Wealth Tribune. Her role ensures that content is published accurately and efficiently across these diverse publications.
An attacker could exploit vulnerabilities found in solar panel components to shut down large parts of a power grid.
Security researcher Willem Westerhof discovered the flaws in his research surrounding something he calls the “Horus Scenario.”
Named after the Egyptian god of the sky, the Horus Scenario refers to the possibility of a digital attack that could destabilize a power grid and cause service outages by targeting solar panel electricity systems, also known as photovoltaics (PV). Such an attack could have wide-reaching effects if it focused on interconnected power grids like those found in Europe.
Westerhof explains that a bad actor could achieve the Horus Scenario by undermining balance, a key ingredient to power grid stability:
“The power grid needs to maintain a constant balance, between supply of power, and demand of power. If supply exceeds demand, or demand exceeds supply, outages can occur. In order to maintain stability all sorts of countermeasures exist to prevent outages due to peaks or dips in demand or supply. Under normal circumstances, these countermeasures ensure grid stability. There is however a limit to these countermeasures. A maximum peak or dip value in a specific period of time. If an attacker is capable to go beyond this maximum peak or dip value, outages will occur. [sic]”
Theoretically, if an attacker manipulated the amount of PV power at an opportune time (say, around midday when the sun is shining the brightest), an attacker could take out a significant amount of a grid’s power supply and cause an outage.
So how could an attacker do something like this in a practical sense?
To answer that question, Westerhof analyzed the PV inverters made by SMA, a market leading solar panel brand.
The researcher found that the components, which convert direct current (DC) into alternating current (AC) on a PV plate and thereby help balance the grid, suffered from 17 vulnerabilities. 14 of those flaws received CVE IDs and CVSS scores ranging from 3.0 (Informational) to 9.0 (Critical).
Together, the bugs provide attackers with a complete kill chain all the way from initial (REMOTE) execution to the Horus Scenario. Here’s the worst that could happen if an attacker exploited the vulnerabilities:
“In the worst case scenario an attacker compromises enough devices and shuts down all these devices at the same time causing threshold values to be hit. Power grids start failing and due to the import and export of power cascading blackouts start occurring. Several other power sources (such as windmills) automatically shut down to protect the grid and amplify the attack further. Despite their best efforts power grid regulators are unable to stop the attack. It is only after the sun sets (or when there is no longer enough sunshine for the attack to take place) that the grid stabilizes again. Depending on the authorities way of dealing with this attack, this scenario may keep going for several days.”
An event such as the one described above that produced a 3-hour outage on a European power grid around midday in June would cause approximately 4.5 billion euros in damage, as the researcher found out using a blackout simulator tool.
Westerhof reported the vulnerabilities to SMA in December 2016. He’s been working with the company, power grid regulators, and government officials since then. SMA has agreed to fix the flaws, whereas actors from the energy sector and the government will discuss the findings at international conferences.
Hopefully, companies like SMA will see this story and use it as an opportunity to create a bug bounty program. Such frameworks help to create lasting partnerships with security researchers, collaborative efforts which could subsequently improve the security of PV inverters and other devices and thereby reduce the attack surface of power grids everywhere.
26 September 2023
Zero Trust Security: Exploring the Principles of the Zero Trust Architecture, Network, and Model
In the ever-evolving landscape of cybersecurity, the concept of Zero Trust Architectures (ZTA) has emerged as a pivotal strategy. Over the last decade, this approach has gained significant traction, underpinned by the fundamental principle: “Never Trust, Always Verify.” As digital transformation accelerates, understanding and implementing zero trust becomes paramount.
Understanding the Core Principles of Zero Trust
At its heart, zero trust security represents a significant paradigm shift from traditional security models. These models often operated on implicit trust, especially within the network perimeter. The zero trust model, however, challenges this notion by advocating for a security framework where trust is never assumed. This approach emphasizes the need to verify every access request, regardless of its origin. Access control, as a foundational principle, guides the zero trust policies that determine who gets what level of access. Zero trust is a framework designed to minimize implicit trust, emphasizing the key principles of verification and least privilege.
What is Zero Trust Architecture (ZTA)?
Zero Trust Architecture is not just a product or a service; it’s a comprehensive approach to network security. This architecture requires a robust identity verification process, ensuring users, systems, or applications are who they claim to be. It leverages advanced authentication methods, from certificates to multifactor authentication. Furthermore, ZTA emphasizes restricted network access, ensuring communication between systems is limited to only what’s necessary, thereby reducing the potential for lateral movement by malicious actors. Importantly, ZTA is not “trusted by default,” ensuring that every access request is authenticated and verified.
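The "never trust, always verify" behaviour of a ZTA can be reduced to a small access-decision sketch. This is illustrative only: the request fields and the `verify_*` helpers are hypothetical stand-ins for real identity, device-posture, and policy services:

```python
def authorize(request, policy):
    """Zero-trust style decision: every factor must verify on every
    request; nothing is trusted because of where it comes from."""
    checks = [
        policy.verify_identity(request.user, request.credentials),
        policy.verify_device(request.device),
        policy.is_allowed(request.user, request.resource, request.action),
    ]
    return all(checks)  # deny by default: one failed check blocks access
```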
Zero Trust Use: Practical Applications in Today’s Digital Landscape
The concept of zero trust is not just theoretical; its practical use cases are evident in various sectors. From financial institutions to healthcare providers, organizations are realizing that a zero trust use approach is essential to safeguard their digital assets. One of the primary zero trust use cases is in remote work environments. With employees accessing company resources from various locations and devices, ensuring that every access request is authenticated and that data is encrypted becomes paramount. Zero trust provides a framework to ensure that only authorized users can access specific resources, enhancing security in distributed work environments.
The Zero Trust Model and Its Relevance to NMS Security
Network Management Systems (NMS) are often riddled with trust relationships, making them attractive targets for breaches. The zero trust security model offers a solution, ensuring that trust relationships are minimized and every access request is scrutinized. By applying zero trust principles, NMS deployments can significantly enhance their security posture, reducing the risk of data breaches and unauthorized access. Breaches often occur inside the network, making it crucial to have access policies that determine who gets access to what. Network segmentation is a key strategy in this context, limiting the potential for lateral movement within the network.
Image: A balance scale comparing “trust but verify” and “never trust, always verify” to visually represent the shift in trust philosophy.
Trust Principles: The Foundation of Zero Trust
While the core principles of zero trust provide a foundational understanding, diving deeper into its trust principles reveals the philosophy that drives this approach. Traditional security often operated on the “trust but verify” mantra. In contrast, zero trust firmly stands on the “never trust, always verify” principle. This shift is more than just a change in procedure; it’s a redefinition of how organizations perceive trust in the digital age. Trust, in the zero trust framework, is not a static concept granted once and forgotten. Instead, it’s dynamic, continuously evaluated, and never taken for granted. This continuous verification ensures that even if a breach occurs, the damage is contained and doesn’t spread across the network, showcasing the resilience of the zero trust philosophy.
Implementing Zero Trust in Network Management Systems
To effectively implement zero trust in NMS, organizations must adopt a two-pronged approach. First, they need to deploy Next-Generation Firewalls (NGFW) that offer visibility into applications traversing the network and enforce protocol compliance. Second, multi-factor authentication should be integrated, especially for privileged operations, to ensure user identity is verified before granting access. This combination not only fortifies the network architecture but also aligns with the zero trust security model’s principles. Zero trust strategies for NMS implementation are comprehensive, ensuring that every layer of the network is fortified against potential threats.
Benefits of Zero Trust in NMS Security
Adopting a zero trust approach in NMS security offers numerous advantages. It enhances network performance, reduces vulnerabilities, and improves breach detection times. Moreover, by eliminating implicit trust, organizations can better protect their network perimeters and reduce the risk of lateral movement by potential threats. Zero trust minimizes the attack surface, ensuring that threats are detected and mitigated promptly.
Core Principles of the Zero Trust Model
The zero trust model is built upon a set of core principles that guide its implementation. These principles emphasize the need for continuous authentication, least privilege access, and micro-segmentation. At its core, zero trust is designed to challenge the traditional belief that everything inside an organization’s network is safe. Instead, it operates on the assumption that threats can come from both inside and outside the network. By adhering to these core principles, organizations can ensure a more robust security posture, reducing the risk of breaches and unauthorized access.
Zero Trust Network Access (ZTNA) and Its Importance
ZTNA provides secure access to applications and services, differentiating itself from traditional VPNs. Instead of granting broad access, ZTNA operates on zero trust principles, denying access by default and only granting user access to applications and services when explicitly authorized. This approach ensures that users see only what they have permission to access, bolstering security and reducing the risk of breaches.
Here’s a comparison table that explains the differences between ZTNA (Zero Trust Network Access) and traditional access methods:
| Feature/Aspect | ZTNA (Zero Trust Network Access) | Traditional Access Methods |
| --- | --- | --- |
| Access Philosophy | Deny by default, grant access based on strict verification. | Trust by default, especially within the network perimeter. |
| User Verification | Continuous authentication and verification for every access request. | One-time authentication, typically at the start of a session. |
| Visibility | Full visibility into user activities and data flows. | Limited visibility, especially for activities inside the network. |
| Network Segmentation | Micro-segmentation, limiting users to specific resources. | Broader network access once authenticated. |
| Access Decision Factors | Considers user identity, device, location, behavior, and real-time context. | Primarily based on user identity and role. |
| Threat Response | Real-time response to anomalous behaviors, limiting potential breaches. | Reactive, often after a breach has occurred. |
| Integration with Other Tools | Easily integrates with other security tools for a holistic security approach. | Might operate in silos, requiring manual integrations. |
| User Experience | Seamless access to applications without the need for VPNs. | Often requires VPNs for remote access, which can be cumbersome. |
The Role of Visibility in the Zero Trust Enterprise
Visibility is a cornerstone in the implementation of a zero trust strategy. In a zero trust enterprise, it’s not enough to simply authenticate and verify; organizations must also have a clear view of all activities within their network. This means having insights into user behaviors, data flows, application interactions, and potential vulnerabilities.
Advanced tools, such as Endpoint Detection and Response (EDR) solutions and network traffic analyzers, play a pivotal role in enhancing visibility. They allow organizations to monitor system-level behaviors, detect anomalies, and respond to potential threats in real-time.
Furthermore, visibility ensures that organizations can audit and review access logs, ensuring compliance with zero trust policies and identifying areas for improvement. By maintaining a clear line of sight into all network activities, organizations can proactively detect and mitigate threats, ensuring a robust security posture in line with zero trust principles.
True Zero Trust Solutions for NMS Security
For an effective zero trust implementation, organizations must integrate various security controls. From Intrusion Prevention Systems (IPS) to Network Anti-virus solutions, a multi-faceted approach is essential. Additionally, integrating user identity solutions can offer varied access levels based on user roles, further enhancing security. Zero trust guidance from leading research firms like Forrester Research emphasizes the need for a holistic approach, ensuring that every layer of the network is fortified against potential threats.
The NIST Perspective on Zero Trust
The National Institute of Standards and Technology (NIST) has been instrumental in shaping the discourse on zero trust through its comprehensive guidelines. Here’s a brief overview of NIST’s perspective on Zero Trust Architecture:
- Definition of Zero Trust (ZT): NIST defines ZT as a cybersecurity paradigm focused on resource protection and the premise that trust is never granted implicitly but must be continually evaluated.
- ZT Architecture (ZTA): NIST’s ZTA is not about a specific technology but rather a set of guiding principles and ideal behaviors. It emphasizes that security solutions should be designed in a way that they make real-time trust decisions based on multiple factors.
- Access Control: NIST underscores the importance of dynamic access control, which should be determined based on real-time information. This includes the continuous authentication of both users and devices.
- Least Privilege: NIST’s guidelines stress the principle of least privilege, ensuring that users or processes can only access what they need and nothing more.
- Threat Awareness: NIST emphasizes that ZT assumes that bad actors are both outside and inside the network. Therefore, organizations must be continually aware of and prepared for potential threats.
- Network Locality: NIST points out that in a ZT model, access to resources is determined by dynamic policies and not necessarily by the network segment or location from which a user or device is connecting.
- Continuous Monitoring: NIST advocates for continuous monitoring and diagnostics to ensure that security postures are maintained and to detect any malicious activities promptly.
By aligning with NIST’s guidance on Zero Trust Architecture, organizations can ensure they’re adopting a well-researched, comprehensive approach to security, staying abreast of the latest advancements in zero trust security.
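NIST's emphasis on dynamic, continuously evaluated trust can be illustrated with a toy scoring sketch. The signal names, weights, and thresholds below are invented for illustration; a real policy engine would source these from identity providers, device management, and monitoring systems:

```python
# Hypothetical real-time signals, each scored 0.0 (unverified) to 1.0 (fully verified).
WEIGHTS = {"identity": 0.4, "device_posture": 0.3, "location": 0.15, "behavior": 0.15}

def trust_score(signals):
    """Combine the current signals into a single trust value in [0, 1];
    a missing signal counts as unverified."""
    return sum(weight * signals.get(name, 0.0) for name, weight in WEIGHTS.items())

def decide(signals, resource_sensitivity):
    """Grant access only if current trust clears the resource's bar.
    Re-evaluated on every request; trust is never granted permanently."""
    return trust_score(signals) >= resource_sensitivity

# A strong identity and a managed device can offset an unusual location.
print(decide({"identity": 1.0, "device_posture": 1.0,
              "location": 0.5, "behavior": 1.0}, 0.9))  # → True
```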
The Zero Trust Journey: From Concept to Implementation
Embarking on the zero trust journey requires an organization-wide commitment. From understanding the principles behind zero trust to implementing advanced security strategies, the journey is comprehensive. However, with the right approach and tools, organizations can fortify their security architecture, ensuring they are well-equipped to handle the challenges of today’s digital landscape.
Conclusion: The Imperative of Zero Trust in Modern Cybersecurity
In the intricate realm of cybersecurity, the emergence of Zero Trust Architectures (ZTA) stands as a testament to the industry’s evolution. As we’ve explored, the foundational principle of “Never Trust, Always Verify” is more than just a catchphrase; it’s a necessary shift in mindset. From understanding the core principles of zero trust to recognizing the significance of Zero Trust Network Access (ZTNA), it’s evident that traditional security models are no longer sufficient.
The zero trust model challenges the age-old notion of implicit trust within network perimeters, advocating instead for a security framework that assumes no inherent trust. With the rise of digital transformation and the increasing complexity of cyber threats, the need for robust security measures like ZTA has never been more paramount.
NIST’s guidelines further underscore the importance of this approach, offering a roadmap for organizations to navigate the complexities of zero trust. As we move forward in this digital age, it’s crucial for organizations to not only understand but also implement the principles of zero trust. By doing so, they position themselves to proactively combat cyber threats, safeguarding their assets, data, and reputation in an ever-connected world.
In essence, zero trust isn’t just a strategy; it’s the future of cybersecurity. As threats continue to evolve, so must our defenses, and zero trust provides the blueprint for that evolution.
Breakthrough from Nagoya City University Combining Classical & Quantum Computing Could Result in Massive Boost in Processing Power
(TechRadar) Researchers in Japan have made a discovery that could unlock opportunities to combine classical and quantum computing technologies in new ways, resulting in a massive boost in processing power.
Spearheaded by Professor Takahiro Matsumoto of Nagoya City University, the team of researchers was investigating a concept called quantum entanglement, whereby the interaction between multiple particles is such that they can only be described in relation to one another.
According to the research paper, the integration of proton qubits (a qubit is the basic unit of quantum information) with modern silicon technology “could result in an organic union of classical and quantum computing platforms”.
Reportedly, this could allow for systems with a far larger number of qubits (10^6) than is currently possible (10^2) and thereby boost processing speeds dramatically across supercomputing applications.
Historically, scientists have found it difficult to manufacture quantum entanglement (which is fundamental to quantum computing) due to various engineering and logistical challenges. However, Matsumoto and his team recently observed a phenomenon that could provide the key: an entangled pair of protons on the surface of a silicon nanocrystal.
In OSPF, Link-State information, in other words, Routing information is exchanged between routers. This exchange is done with LSAs (Link State Advertisements). There are eleven different OSPF LSA types and each of them has a special purpose. Some of the OSPF LSA types are the main LSA types that are used in OSPF operations often. Let’s see all the LSA types:
Now, let’s discuss these OSPF LSA types in detail. It is helpful to explain each LSA type alongside the different area types and their LSA advertisement requirements, and the diagram below illustrates this.
Router LSA (Type 1) is the LSA type used in standard areas. Its purpose is to give information about the router itself. This LSA includes information such as the Router ID, router interfaces, neighbors, IP addresses, and cost. A Router LSA cannot pass the ABR, so it does not reach other areas.
Network LSA (Type 2) is the other LSA type used in standard areas. This LSA is sent by the DR (Designated Router). Its main purpose is to list the routers connected to the segment and inform the other routers about them. This LSA includes information such as the DR and BDR IP addresses and subnet masks. A Network LSA cannot pass the ABR, so it does not reach other areas.
ABR Summary LSA (Type 3) is generated by the ABR (Area Border Router) to advertise one area’s networks to other areas. It includes all prefixes available in the area.
ASBR Summary LSA (Type 4) is generated by the ABR (Area Border Router) to inform its areas about how to reach the ASBR (Autonomous System Border Router). It includes the ASBR’s Router ID.
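As a quick reference, the four LSA types described above can be summarized in a small lookup table. This is an illustrative sketch in Python (the field names are my own), not router configuration or code.

```python
# Summary of OSPF LSA Types 1-4 as described above: who originates each
# LSA and how far it is flooded. Reference sketch only.

LSA_TYPES = {
    1: {"name": "Router LSA", "originator": "every router",
        "scope": "single area (does not cross the ABR)"},
    2: {"name": "Network LSA", "originator": "DR",
        "scope": "single area (does not cross the ABR)"},
    3: {"name": "ABR Summary LSA", "originator": "ABR",
        "scope": "inter-area (advertises one area's prefixes to other areas)"},
    4: {"name": "ASBR Summary LSA", "originator": "ABR",
        "scope": "inter-area (advertises the ASBR's Router ID)"},
}

def describe_lsa(lsa_type: int) -> str:
    """Return a one-line description of an LSA type."""
    info = LSA_TYPES[lsa_type]
    return (f"Type {lsa_type} ({info['name']}): originated by "
            f"{info['originator']}, scope: {info['scope']}")
```

For example, `describe_lsa(2)` reports that the Network LSA is originated by the DR and stays within its area.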
If artificial intelligence were just an empty phrase, it is unlikely that so much sustained discussion would have taken place over such a long period of time.
The reason why some of us may question its substance is because we don't really know what it is or what it means.
Even those who understand it explain that end users will only see the effects of AI when they use their devices, and mostly the AI providing the new, interesting, and powerful functions will be inconspicuous -- operating in the background.
Possibly the only time users will be aware of AI is when they communicate directly with Siri, or Alexa or the many chatbots they may come across when they browse websites.
The tech community is certainly excited by AI and clearly does not think of it as a fad or an empty phrase.
They are taking it very seriously. So much so that many of them are building custom chips to process AI instructions, where previously their algorithms alone would have set them apart.
Now, not only are the tech companies hoping to outdo each other with their AI algorithms, they've also decided to compete on the hardware aspect as well.
Semiconductor manufacturing is said to be the most advanced, most complex endeavour in the industrial world today.
By semiconductor, we mean microchips, microprocessors, microcontrollers, and electronics components and systems of many kinds, including integrated circuits.
These are the areas that the big tech companies are looking to go into because the thinking is that they may be able to make their AI software work better on their own, self-designed chips.
Google and Apple have previously made it clear that they plan to make their own chips and already design and build some.
IBM has pretty much always been tinkering with chips, and has a fairly impressive range of new, highly advanced chips.
New entrants into this race include Facebook and Alibaba. There are probably others we don't know about, but it's likely we will hear about them more and more.
It is probably too early to tell how this will affect specialist chipmakers -- such as Intel, Qualcomm, Nvidia, AMD and so on.
They may have to rethink their business model and find new markets, but in the short term, it may be difficult for them.
Qualcomm, for example, has been sent reeling by Apple's decision to go its own way and not rely on the chipmaker for its supplies.
Intel may also see a significant loss of business if Apple reduces its demand from the company.
What we may be seeing is tech companies like Google, Facebook and so on incorporating more hardware products into their offerings, much like Apple does.
It's not an easy thing to do, but it's probably the best option for these companies to continue on their growth trajectories, now that some of them have probably reached the limit in terms of subscribers to their social networks or whatever other software-oriented services they offer.
The industrial landscape is undergoing a profound transformation, driven by the relentless push toward manufacturing automation. This crucial shift, taking root at the heart of production processes, seeks to replace manual labor with sophisticated technology and equipment. Automation’s aim is crystal clear: to augment efficiency, productivity, and scalability across the board. The adoption of such innovative measures introduces a raft of advantages, from significant cost savings to enhanced data accuracy, from elevated safety standards to non-stop productivity. The transition toward a more automated, data-driven production environment is not merely an upgrade—it is an essential evolution. As the global market becomes more competitive, industries are recognizing that harnessing the power of automation is not an option but a necessity for survival and growth.
Defining Manufacturing Automation
Manufacturing automation is best understood as the strategic application of technology to streamline production processes. At its core, the objective is to reduce manual intervention as much as possible, thereby setting the stage for a host of improvements. Automation centers around the fusion of advanced software, robotics, and integrated systems—all purposed to recalibrate the manufacturing workflow into a leaner, more efficient beast. The essence of automation is transformation: from the way tasks are assigned and executed to the manner in which data is collected and analyzed. This shift towards an automated manufacturing floor is reshaping industries, pushing them towards immaculate precision, relentless efficiency, and a paradigm where setbacks like human error and labor shortages are minimized.
Categories of Automation in Manufacturing
Mass production’s foundation is built on the reliability of fixed automation, or “hard automation.” This type of system is synonymous with powerhouse machines specifically engineered for mass-producing particular goods at a high speed and with consistent output. The essence of these machines lies in their design and the structured flow of the production process, which sets the pace and order of operations.
Programmable automation is ideal for industries where production demands fluctuate and volumes are lower. This adaptable technology is engineered for batch production with the agility to reconfigure as per the requirements of different products.
The Perks of Automating Manufacturing
In the relentless pursuit of excellence, manufacturing automation emerges as a beacon of hope, promising a slew of unmistakable benefits. On the financial front, companies eyeing long-term sustainability find solace in automation’s promise of reduced operating costs, achieved by condensing and simplifying tasks through automated systems.
The Implementation Process of Automation
The journey toward manufacturing automation typically starts with the implementation of machine monitoring systems. These systems act as the eyes and ears of the factory floor, tracking the health and performance of machinery.
Advanced Tools in Automation
The vast arsenal of advanced tools dispels any lingering doubts about the efficacy of manufacturing automation. NC and CNC systems are the brain and brawn behind automated operations, offering unparalleled precision through exact numerical programming.
The Evolution and Future Prospects of Automation
The horizon for manufacturing automation is limitless, with echoes of AI starting to intertwine with the fabric of automation. As artificial intelligence matures and integrates further, the landscape is set for a metamorphosis—an advent of fully automated factories with AI at the helm.
Numerous sources have approached technology in education from the angle of its direct interaction with the learning process. We are all aware that a new generation is emerging, one that has grown up surrounded by technology. Educational strategists constantly explore, analyze, praise or dispute new ways of introducing modern tech into education.
Nevertheless the current and evolving technology also influences education in indirect ways that we may only be partially aware of.
One of the indirect results of technology having its say over education consists of the disruptions caused by the first on the labor market, in the field of professions, occupations or activities.
How does technology disrupt the workforce environment?
Regardless of whether the professionals are employees, business owners or freelancers, the disruptive effects of tech progress take their toll by causing the disappearance of jobs and by strongly re-modelling others. And by ‘disappearance” we don’t mean job cuts, but rather the fact that some functions are taken over by applications, software or automated hardware.
What happens when the educational process prepared the people who lose their jobs only for a particular, narrow field, in which they further specialized and gained experience and which is suddenly taken over by technology? The human element, as well as the social system meets a huge dilemma. Professional re-conversion is an option, but it takes time and money, and the people who go through this must be able to start all over again in a new situation. What if the next month or next year their new profession will be swallowed by an efficient robot or a progressive neural learning software – will they start all over again – one more time?
Even a few years ago, the inherent difficulties of this situation were being acknowledged by leaders, as well as by specialists and by those affected. Here is an example of how the constantly chaotic modern workforce environment looks through the VUCA concept, an acronym synthesizing:
- Volatility;
- Uncertainty;
- Complexity;
- Ambiguity.
If you thought such disruptive times might be over, think again. Although it is rather obvious that in a world talking about the impending Third Industrial Revolution change is yet to come, and it will probably manifest itself in sudden waves, the skeptical might take a case-study approach to the present situation of everyday people, confronted with job volatility and with new, upgraded requirements in the work field. Even looking for a job can be a challenge when it suddenly involves new tech skills that candidates have to learn by themselves or pick up while they juggle financial strain, feelings of obsolescence or desperation, and an active job-seeking process. It is also true that new functions and opportunities emerge, since many of today’s jobs (about half of them) did not exist 25 years ago, but nevertheless adaptability to the new demands is both the key and the burden for those who have to reconfigure their professional lives.
The agile model in education – a possible solution
In view of this volatile jobs situation, some have considered re-configuring education from its core, following an IT development model – the agile education model. Serving precisely to anticipate constant changes and readjust requirements, the agile model has also migrated into marketing and influenced other types of activities, since it tends to imprint a certain way of thinking on those who have worked with it.
Rather hard to describe, since it functions in sub-clusters and distributes information in a way that allows various re-grouping models later, according to necessity, the agile model entered the educational process a while ago, although it remains more of a niche model.
Do you remember the previously quoted VUCA? The talent management specialists that put together this model also came with an agile solution, since it would prepare everyone for a wider range of options right from the start. Some of the characteristics of agile learning would be these:
- Training to solve unanticipated problems;
- Rapid individual and group learning;
- The self-obsolescence of processes that allows quick replacements and upgrades;
- Focus on innovation.
One of the basic ideas that the agile concept integrates is that while you cannot fight chaos, people and teams can adapt to it and learn how to manage themselves and their skills even in a constantly changing environment.
Developing an agile education mindset presumes:
- Divergent, anticipatory and creative thinking;
- Empathy (as a guiding tool through uncertainty and ambiguity);
- Strong social and EQ (emotional intelligence) capabilities and skills;
- An entrepreneurial view of the ensemble that would put all activities into perspective and support the creating of real value;
- An above-average way of thinking and solving problems that would make replacing the human factor with automated machines unfeasible.
Another active author on the topic of agile education is Steve Peha, whose 2011 article on agile schools underlines how education should prioritize individuals and interactions over processes and tools, meaningful learning over learning statistics and measurements, as well as the very important agile mechanism of responding to change over following a plan (the embrace-the-change/chaos-and-work-with-it approach).
Another trait of the agile education concept consists in breaking massive, centralized responsibility into smaller, individual responsibility nodes, where each person makes decisions for, is involved in, and is accountable for the tasks they deliver. The group comes together through the unified upper goal of all the individual contributions, but these fragmented contributions each hold ultimate importance for the persons performing them. Re-configurations and changes just determine different arrangements within the structure, with each group member continuing to bring their contribution and perhaps reassembling the gained knowledge and working at a micro-level.
Applying this in the educational process would mean no effort during the learning process would be wasted, just integrated into the needed structure once the proper mindset is also taught.
Although agile education has not fully entered the mainstream educational databases, the Agile Learning Centers network is functional and present in North America – as you may see here. If interested in further details on this educational model, the presentation page of this network is more than explanatory and also invites people to visit the nearest ALC (agile learning center).
The three networking policies are as follows. First, users must change their passwords on a regular basis. Attackers can often work out a new password if they have the old one, and users forced to change yet another password will often choose a ‘weaker’ one that they will not forget; even so, a hacker may attempt to access an account more than once over a period of time, and changing the password often reduces the risk of repeated access. Second, users cannot connect unauthorized equipment to the network, such as USB memory sticks, smartphones, and tablets. The most well-known security risk associated with USB flash drives arises when a device is misplaced; since roughly 90 percent of employees use USB drives in the workplace, they are an attractive target for cybercriminals. Third, levels of access are assigned so that only authorized users can access sensitive data, and a regular backup procedure is in place. Access levels are also known as security labels, permissions, roles, rights, or privileges, and are often used to determine whether someone should be able to view certain documents or perform certain tasks.
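The password-age and access-level policies above can be sketched in a few lines of code. The 90-day limit and the role names here are illustrative assumptions, not values taken from the original policies.

```python
# Minimal sketch of two of the policies described above: a maximum
# password age, and access levels that gate sensitive data. All
# concrete values (90 days, role names) are hypothetical examples.

from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)  # assumed rotation policy

# Higher number = more privilege (an access "level" or role).
ACCESS_LEVELS = {"guest": 0, "employee": 1, "manager": 2, "admin": 3}

def password_expired(last_changed: date, today: date) -> bool:
    """True if the password is older than the maximum allowed age."""
    return today - last_changed > MAX_PASSWORD_AGE

def can_read_sensitive_data(role: str) -> bool:
    """Only managers and above may access sensitive data."""
    return ACCESS_LEVELS.get(role, 0) >= ACCESS_LEVELS["manager"]
```

In practice these checks would sit inside a directory service or identity provider rather than application code, but the logic is the same: compare the requester's level against the level the resource requires.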
Hysteria Over Misinformation Fuels California Bill to Dox Influencers: Proposed Law Sparks Privacy Debate
Understanding Social Media Anonymity Laws
Social media has become a battleground for free speech and privacy. A new bill in California aims to change how users interact online. This law would make platforms ask for personal info from popular users.
The bill targets “influential” and “highly influential” accounts. It wants their names, phone numbers, and emails. For some, it even asks for government IDs. This idea isn’t new. A presidential candidate previously proposed a similar plan.
Here’s what the bill wants:
- Platforms to verify user information
- Government IDs for top users
- Penalties for companies that don’t follow the rules
Why is this happening? Some worry about false information online and think removing anonymity will help. But this view has problems.
Anonymity matters in the U.S., and it has a long history. Think of the Federalist Papers, which were written under fake names. Today, people still need to speak freely without fear.
Some reasons for online anonymity:
- Protecting jobs
- Sharing sensitive info
- Speaking against oppressive governments
The bill could hurt these important uses of social media. It might make people afraid to speak up. This fear goes against free speech ideals.
The bill’s reach is wide. It doesn’t just target political accounts. Any popular user could be affected, including fun accounts that have nothing to do with news or politics.
You might wonder if this will stop false info online. Experts doubt it. People disagree about what’s true all the time. It’s part of democracy. Trying to control this could backfire.
Free speech needs protection. Laws like this one could do more harm than good. They might stop people from sharing ideas freely. That’s not good for anyone.
As you think about this issue, consider:
- How much privacy are you willing to give up?
- Should the government decide who needs to reveal their identity?
- What might happen if your info gets out?
These are big questions. They affect how we all use the Internet, so staying informed about laws that could change your online life is essential.
Remember, the internet thrives on open communication. Laws that limit this could change how we all connect and share ideas. Keep an eye on these developments. They might affect your digital world more than you think.
The Role Of Social Media Influencers
Social media influencers play a significant role in today’s digital landscape as essential intermediaries in the information network between brands and consumers. They have emerged as a powerful marketing channel, leveraging their large and dedicated follower bases to amplify brand visibility and drive engagement. Influencers inform their audiences about new products, trends, and developments, often creating content that resonates with their followers across various platforms, such as social media posts, YouTube videos, and podcasts. For businesses, incorporating influencers into digital marketing strategies has become an effective way to increase follower count and potentially boost sales. Beyond commercial impact, influencers also play a powerful role in shaping societal trends and discussions, making them significant figures in today’s interconnected world.
Successful project management is vital for bringing any project to fruition, whether in construction or IT. Employing appropriate methodologies, utilizing cutting-edge tools, and deploying effective strategies are all essential for project managers to steer complex initiatives to success. Their role involves fostering teamwork and hitting key targets throughout the life cycle of a project.
The journey of masterful project management begins at the drawing board where ideas take shape, followed by meticulous planning. It then moves into execution, where the plan is transformed into action. Monitoring the project’s progress is critical to staying on schedule, and making adjustments when necessary is part of the process to accommodate unforeseen challenges.
To lead projects to their intended outcomes, project managers must have a firm grasp of the pillars of project management. This includes understanding the scope, maintaining a realistic schedule, managing resources wisely, and handling risks. Communication plays a key role as well: keeping stakeholders informed and aligned with the project’s progress and having transparent discussions about any issues.
Adopting these practices leads to enhanced efficiency and a higher likelihood of project success. With a structured approach from inception to delivery, project management becomes an art that, when perfected, can lead to significant competitive advantages for organizations across industries.
1. Inauguration Phase
The journey of any project begins with its conception. During the initiation, or ‘Inauguration Phase,’ it’s essential to lay a solid foundation. This phase is about exploring the viability of the project, aligning it with business objectives, and engaging with stakeholders to shape a coherent vision. Project managers need to clarify the goals, understand the project boundaries, and determine the expected benefits. A clear project charter or initiation document is usually the outcome of this phase, serving as a formal record of the project’s commencement.
A detailed feasibility study is often part of the initiation to evaluate the practicality of the project. Alongside feasibility, risk identification is crucial; recognizing potential roadblocks early can save valuable time and resources. This phase sets the direction for the project and, as such, requires visionary planning and careful consideration to ensure the project’s alignment with the overarching strategic goals of the organization.
2. Strategy Development Phase
After the project gets the green light, it’s time to start planning in earnest—the ‘Strategy Development Phase.’ This stage is about translating the vision into an actionable blueprint. Comprehensive planning includes defining the work breakdown structure, developing schedules, estimating costs, and setting up communication plans. Effective project management methodologies, such as Agile or Waterfall, are chosen based on the project’s nature.
In this phase, project managers meticulously chart out every task and deadline to create a roadmap for the team to follow. Resources like manpower, materials, and money are allocated to ensure each team has what it needs when it’s needed, and risk management plans are established to hedge against potential issues. Proper planning acts as a guide for the entire project lifecycle, reducing uncertainties and providing a clear path to success.
3. Implementation Phase
The ‘Implementation Phase’ is where plans are put into action. It’s the most visible phase of the project where deliverables start to take shape. The project manager coordinates team members, assigning roles and responsibilities, and ensuring that everyone knows what’s expected of them. Regular status meetings facilitate transparency and keep everyone in sync.
Quality control is crucial; hence, deliverables are continuously assessed to ensure they adhere to predetermined standards. This stage is not without its challenges—unexpected complications may arise, and the project manager must be adept at problem-solving and decision-making to keep the project on track. The implementation phase is the testbed for the strategies outlined in the planning phase, requiring diligent oversight to ensure that execution aligns with the plan.
4. Surveillance Phase
As a project progresses, the ‘Surveillance Phase’ is crucial for consistent oversight. During this phase, diligent tracking against the established benchmarks for scope, time, and cost helps identify any discrepancies quickly, allowing for immediate corrective measures. This systematic review uses specific performance indicators to evaluate how well the project is meeting its goals and to pinpoint areas that may be straying from the plan.
Incorporating Agile methodologies can be particularly beneficial for this ongoing monitoring process. Agile’s iterative cycles and regular check-ins can enhance the flexibility and responsiveness of project oversight. However, the success of this phase does not exclusively depend on the project management approach but rather on the strength of the surveillance system in place. This system must be capable of effectively logging and addressing any alterations to the project’s scope or objectives.
Proactive surveillance is key, providing the opportunity to modify tactics as needed and maintain the project’s trajectory toward its targets. This approach isn’t just about fixing problems after they occur but anticipating challenges and adapting strategies in advance. Monitoring becomes an active and adaptive component of project management, ensuring that the project sails smoothly toward completion, even when encountering unexpected obstacles.
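The article does not prescribe a specific tracking method for the Surveillance Phase, but one standard way to quantify performance against time and cost baselines is earned value management (EVM). The sketch below uses its schedule-variance and cost-variance formulas with made-up sample figures.

```python
# Earned value management (EVM) sketch: comparing work completed (EV)
# against work planned (PV) and money spent (AC). Sample dollar figures
# are invented for illustration.

def schedule_variance(earned_value: float, planned_value: float) -> float:
    """SV = EV - PV; negative means the project is behind schedule."""
    return earned_value - planned_value

def cost_variance(earned_value: float, actual_cost: float) -> float:
    """CV = EV - AC; negative means the project is over budget."""
    return earned_value - actual_cost

# Example: $40k of work completed, $50k planned by now, $45k spent.
sv = schedule_variance(40_000, 50_000)  # -10000: behind schedule
cv = cost_variance(40_000, 45_000)      # -5000: over budget
```

Feeding such indicators into the regular review cycle gives the "immediate corrective measures" described above a quantitative trigger, rather than relying on subjective status reports alone.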
5. Termination Phase
The project’s conclusion, the ‘Termination Phase,’ is not merely about delivering the final product or service. It involves administrative closure, including contract finalization, financial reconciliation, and resource reassignment. This stage is also an opportunity for reflection. Teams should gather to review the project’s successes and challenges in a “lessons learned” session to refine future project management processes.
Recognizing accomplishments boosts team morale and acknowledges the collective effort involved in reaching the finish line. Whether the project has been a resounding success or faced hurdles, the closing phase is essential for both documentation and celebration. It’s the period when the project manager’s role transforms from leader to historian, ensuring the project’s narrative is accurately captured for posterity.
In essence, mastering project management is an intricate dance of strategic planning, dynamic implementation, vigilant monitoring, and thoughtful closure. Each phase builds upon the previous one, forming a comprehensive guide that, when followed with diligence and adaptability, leads to the successful completion of any project.
If recent global security breaches impacting over 200,000 computers in 150 countries and costing millions are anything to go by, it could not be clearer that cyber security impacts businesses as a whole, not just IT departments.
The types of threats businesses face are changing too. Hacking software is becoming more sophisticated, increasing the impact that hackers can have on a business. Cyber hackers are moving to more sophisticated agendas such as espionage, disinformation, market manipulation and disruption of infrastructure, on top of previous threats such as data theft, extortion and vandalism. Being able to mitigate these threats requires businesses to not only think of cyber security as a business risk, but to act on this too. Successful protection of a company requires the business to think about what these cyber risks mean for the business as a whole and for its customers.
Gone are the days when companies could pass the headaches of cybersecurity off to the IT department. Now, cybersecurity is also a business issue, especially since businesses are more digitized, meaning they are exposed to an increasing number of threats if they do not manage the risk of security properly.
Facing cyber security as a business risk, not merely a technology risk, is the theme for week 2 of National Cyber Security Awareness Month (NCSAM), appropriately named Cybersecurity in the Workplace is Everyone’s Business. Week 2 showcases how organizations can protect against the most common cyber threats by looking at the resources that help organizations strengthen their cyber resilience.
The US-CERT, United States Computer Emergency Readiness Team, encourages organizations and employees to review the following resources:
- NIST Cybersecurity Framework,
- DHS Stop.Think.Connect. Toolkit,
- National Cyber Security Alliance Workplace Tips, and
- US-CERT Home and Business Networks page.
NIST – Cybersecurity Framework
Creating a culture of cybersecurity is critical for all organizations—large and small businesses, academic institutions, non-profits, and government agencies—and is a responsibility shared among all employees. The National Institute of Standards and Technology (NIST) has published resources including standards, guidelines, and best practices to help organizations of all sizes to strengthen cyber resilience.
The Cybersecurity Framework, called for by an Executive Order issued on February 12, 2013, is a “set of industry standards and best practices to help organizations manage cybersecurity risks. The resulting Framework, created through collaboration between government and the private sector, uses a common language to address and manage cybersecurity risk in a cost-effective way based on business needs without placing additional regulatory requirements on businesses” (page 1 of Framework for Improving Critical Infrastructure Cybersecurity).
The Framework is a risk-based approach to managing cybersecurity risk, and is composed of three parts: the Framework Core, the Framework Implementation Tiers, and the Framework Profiles. Each Framework component reinforces the connection between business drivers and cybersecurity activities. These components are explained as follows:
- The Framework Core is a set of cybersecurity activities, desired outcomes, and applicable references that are common across critical infrastructure sectors. The Core presents industry standards, guidelines, and practices in a manner that allows for communication of cybersecurity activities and outcomes across the organization from the executive level to the implementation/operations level. The Framework Core consists of five concurrent and continuous Functions—Identify, Protect, Detect, Respond, Recover. When considered together, these Functions provide a high-level, strategic view of the lifecycle of an organization’s management of cybersecurity risk. (page 4 Framework for Improving Critical Infrastructure Cybersecurity).
- Framework Implementation Tiers (“Tiers”) provide context on how an organization views cybersecurity risk and the processes in place to manage that risk. Tiers describe the degree to which an organization’s cybersecurity risk management practices exhibit the characteristics defined in the Framework. These Tiers reflect a progression from informal, reactive responses to approaches that are agile and risk-informed. During the Tier selection process, an organization should consider its current risk management practices, threat environment, legal and regulatory requirements, business/mission objectives, and organizational constraints.
- A Framework Profile (“Profile”) represents the outcomes based on business needs that an organization has selected from the Framework Categories and Subcategories. The Profile can be characterized as the alignment of standards, guidelines, and practices to the Framework Core in a particular implementation scenario.
How to use the Framework
An organization can use the Framework as a key part of its systematic process for identifying, assessing, and managing cybersecurity risk. The Framework is not designed to replace existing processes; an organization can use its current process and overlay it onto the Framework to determine gaps in its current cybersecurity risk approach and develop a roadmap to improvement. Utilizing the Framework as a cybersecurity risk management tool, an organization can determine activities that are most important to critical service delivery and prioritize expenditures to maximize the impact of the investment.
The Framework provides a means of expressing cybersecurity requirements to business partners and customers and can help identify gaps in an organization’s cybersecurity practices. (page 13, Framework for Improving Critical Infrastructure Cybersecurity).
DHS’s Stop. Think. Connect. Toolkit
Cyber criminals do not discriminate; they target vulnerable computer systems whether they belong to a large corporation, a small business, or a home user. Cybersecurity is a shared responsibility in which all Americans have a role to play.
The Department of Homeland Security (DHS) Toolkit provides resources and materials for all segments of the community. Students of all ages, small businesses, government, law enforcement, older Americans, educators and parents, industries, and young professionals can all access a library of tools to stay connected, informed, and involved in protecting themselves against cyber threats. Each of the segments mentioned also has toolkit materials organized by cyber topic that directly relate to its specific audience, as well as a list of frequently requested publications supporting DHS’s cybersecurity priority and mission.
National Cyber Security Alliance – StaySafeOnline
The National Cyber Security Alliance’s StaySafeOnline resources offer simple ways to promote cybersecurity in the workplace:
- Post simple and actionable online safety tips around the office – for example, in the break room.
- Encourage your organization to get involved in the STOP. THINK. CONNECT.™ campaign by becoming a partner.
- Hold a brown bag lunch for employees to discuss your company’s IT security and acceptable use policies, using the campaign’s talking points for employees.
- Incorporate STOP. THINK. CONNECT.™ tips into employee handbooks and newsletters.
- Host an employee training on cybersecurity. Check out ESET’s free cybersecurity awareness training as a great resource.
- Lock down your login: Strengthen your company’s email and online accounts by adding extra layers of security beyond a username and password.
US-CERT Home and Business Networks
US-CERT’s home and business page offers articles, materials, and tools to secure home and small-business networks. Features ranging from 10 Ways to Improve the Security of a New Computer to Home Network Security and Virus Basics cover relatable topics for consumers and organizations, and provide important information about security risks and how to guard against them.
Stay tuned for next week’s blog featuring NCSAM’s Week 3 theme, Today’s Predictions for Tomorrow’s Internet.
The digital revolution, ubiquity of the internet, and rise of Big Data have given government an unprecedented capability to produce, collect, utilize, and disseminate a vast array of information and data.
These trends have ushered in a new era of data-powered government innovation and citizen services based on the undeniable value in making government data widely available – to citizens, activists, companies, academics, and entrepreneurs.
This is often referred to as the “open government” era, which thrives on government transparency, public accountability, and citizen-centered services.
Consequently, the last 20 years have seen a transformation of public policies – legislative, regulatory, and administrative – grounded in the philosophy that access to and dissemination of government data is a public right and that any constraints on access hinder transparency and accountability.
While there is broad recognition of the need to maximize access to government data, the types of government data are increasingly diverse and complex.
For instance, there are many cases where the government collects or licenses private sector data, often combining this data with other data produced by the government.
These data sets are often referred to as “hybrid data” or “privately curated data” – data licensed to or collected by the government that comprises both public and private sources.
Access to and use of hybrid data is increasingly critical for government to transform data into actionable information.
Examples of curated, or hybrid, data sets include the integration of traffic-app data with US Department of Transportation information, the incorporation of private geographic mapping software into local government flood tracking, the federal award infrastructure’s use of the Dun & Bradstreet D-U-N-S® Number to administer and oversee a $1.2 trillion federal grant market, and peer-reviewed scientific and technical literature that is based on government-funded academic research but published in the private sector.
Subjecting this full range of information to unfettered “openness” requirements risks the availability and quality of these valuable data-driven resources.
Such requirements will ultimately harm the public interest when the inevitable “tragedy of the commons” scenario compromises the quality of the data set, as private-sector actors begin avoiding these government partnerships for fear of losing control of their data.
Unfortunately, some current open data policies invite unintended consequences – specifically, well-intentioned but overly broad open data mandates that nullify intellectual property rights by extending to data produced in the private sector and collected by, or licensed to, the government.
In these cases, the pursuit of maximum data-driven transparency often conflicts with other important public-interest goals, such as rewarding data-driven innovation, safeguarding individual privacy, protecting intellectual property, encouraging private-sector innovation, and promoting the government’s access to data-driven tools that enable smarter decision-making.
Therefore, policies and requirements for openness of government data must contend with these unique challenges and take care to avoid unintended consequences.
To be sure, resolving these tensions is not easy, as it requires the nuanced balancing of competing public interests (e.g., effective and accessible government versus open government), but it is possible – and urgent. | <urn:uuid:9d323102-1b61-4758-97a2-6a54da4ac20a> | CC-MAIN-2024-38 | https://www.cyrrusanalytics.com/single-post/2018/09/13/a-new-era-of-data-powered-government-innovation | 2024-09-18T09:36:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651886.88/warc/CC-MAIN-20240918064858-20240918094858-00123.warc.gz | en | 0.924056 | 647 | 2.75 | 3 |
What Are Scam Websites and How to Avoid Them
What are scam websites?
Scam websites are any illegitimate internet websites used to deceive users into fraud or malicious attacks. Scammers abuse the anonymity of the internet to mask their true identity and intentions behind various disguises. These can include false security alerts, giveaways, and other deceptive formats to give the impression of legitimacy.
Although the internet has numerous useful purposes, not everything on the web is what it seems. Among the millions of legitimate websites vying for attention are websites set up for an array of nefarious purposes. These websites attempt anything from perpetrating identity theft to credit card fraud.
How does a scam website work?
Scam websites work in a wide variety of ways, from publishing misleading information to promising wild rewards in a financial exchange. The end goal is almost always the same: to get you to relinquish your personal or financial information.
A website of this nature may be a standalone website, popups, or unauthorized overlays on legitimate websites via clickjacking. Regardless of presentation, these sites work methodically to attract and misguide users.
Attackers using scam websites will typically use these steps to deceive users:
- Bait: Attackers draw internet users to the website through various distribution channels.
- Compromise: Users take an action that will expose their information or devices to the attacker.
- Execute: Attackers exploit the users to misuse their private information for personal gain or to infect their devices with malicious software for various purposes.
While a given scheme may be more complex, most can be distilled to these three basic stages.
A scam website may lure internet users through many communication channels, such as social media, email, and text messaging. Search results are sometimes manipulated through search engine optimization (SEO) methods, leading to malicious sites appearing in top positions.
Because they appear as attractive offers or frightening alert messages, these schemes find users more receptive. Most scam websites rely on psychological exploitation to work.
Understanding exactly how these scams trick you is an essential part of protecting yourself. Let's unpack exactly how they accomplish this exploitation.
How does a scam website exploit you?
At their core, scam websites make use of social engineering — exploits of human judgment rather than technical computer systems.
Scams using this manipulation rely on victims believing that a malicious website is legitimate and trustworthy. Some are deliberately designed to look like legitimate, trustworthy websites, such as those operated by official government organizations.
Websites designed for scamming are not always well-crafted, and a careful eye can reveal this. To avoid being scrutinized, a scam website will use an essential component of social engineering: emotion.
Emotional manipulation helps an attacker bypass your natural skeptical instincts. These scammers will often attempt to create these feelings in their victims:
- Urgency: Time-sensitive offers or account security alerts can push you to immediate action before thinking critically.
- Excitement: Attractive promises such as free gift cards or a rapid wealth-building scheme can trigger optimism that may lead you to overlook any potential downsides.
- Fear: False virus infections and account alerts lead to panicked action that often ties in with feelings of urgency.
Whether these emotions work in tandem or alone, they each serve to promote the attacker's goals. However, a scam can only exploit you if it feels relevant or relatable to you. Many variants of online scam sites exist specifically for this reason.
Types of scam websites
Scam websites, like many other scam types, operate under different premises despite sharing similar mechanics. As we detail exactly what types of premises a scam website might use, you'll be better equipped to spot future attempts. Here are some common formats of scam sites:
Phishing Scam Websites
Phishing websites are a popular tool that attempts to present false situations and get users to disclose their private information. These scams often pose as legitimate companies or institutions such as banks and email providers.
Attackers typically bait users to the website with emails or other messages claiming an error or another issue that requires your action to proceed. The scam presents a situation that asks you to provide an account login, credit card information, or other sensitive data. This culminates in the misuse of anything obtained from victims of these attacks.
Online Shopping Scam Websites
As one of the most prevalent schemes, online shopping scam websites use a fake or low-quality online store to collect victims' credit card information.
These scams are troublesome as they can sometimes deliver the products or services to create the illusion of trustworthiness. However, the quality is inevitably subpar. More importantly, it is an uncontrolled gateway to obtain your credit card details for excessive and unpermitted use.
Scareware Scam Websites
Scareware website scams use fake security alert popups to bait you into downloading malware disguised as an authentic antivirus program. By claiming your device has a virus or malware infection, these scams create fear and urgency that may drive you to download a supposed solution.
Owning a real internet security suite would help prevent malware downloads, but users who don't have it may fall prey to this.
Sweepstakes Scam Websites
Sweepstakes scams involve giveaways of large prizes that entice users to engage, ultimately providing financial information to pay a false fee.
This fee may be presented as taxes on the prize or a shipping charge. Users who provide their information become vulnerable to fraud and never receive the prize.
Examples of scam websites
Past internet scams have frequently involved the use of dedicated scam websites in their efforts. To help you spot future attempts, here are some notable examples:
COVID-19 Vaccine Trial Scam Websites
In mid-to-late 2020, reports of false COVID-19 treatments appeared. These COVID-19 scams involve gathering payment information or valuable details like your social security number (SSN) in exchange for an entry into the trial testing of a COVID-19 vaccine.
While authentic vaccination trials may offer payouts and ask for personal information, no compromising information is required to participate. Payouts for clinical trials are often done via gift card, whereas the scam may ask for your card details or even your bank account number. Basic personal information is also commonly provided in real trials but never includes your SSN or other intimate details.
DMV Phishing Scam Websites
In October 2020, phishing scams took advantage of a move to online services by posing as the Department of Motor Vehicles (DMV). Creating websites that mimic legitimate DMV sites has allowed scammers to take fraudulent vehicle registration payments and more.
How to identify fake websites
Fortunately, there are several simple ways to protect yourself from scam websites to ensure your family and your wallet stay safe as you navigate the World Wide Web.
By following the tips below, you can better protect against these threats:
- Emotional language: Does the website speak in a way that may heighten your emotions? Proceed with caution if you feel an elevated level of urgency, optimism, or fear.
- Poor design quality: It may sound a little obvious but look closely at how a site is designed. Does it have the type of design skill and visual quality you would expect from a legitimate website? Low-resolution images and odd layouts can be a warning sign of a scam.
- Odd grammar: Look for things like spelling mistakes, broken or stilted English, or really obvious grammar errors, such as the incorrect use of plural and singular words.
- Absence of identifying web pages: Additionally, a proper business website should have basic pages, such as a "Contact Us" page and an "About Us" page. If you're uncertain, give the business a call. If the number is a mobile phone or the call isn't answered, be on guard. If a business seems to want to avoid verbal contact, there's probably a reason.
How to avoid scam websites
Avoiding scam websites requires moving through the internet with caution and care. While you may not be able to completely avoid these sites, you may be able to behave more effectively to keep them from affecting you. Here are some ways you can stay away from these scams.
Check the domain name
Sites set up to spoof a legitimate site often use domain names that look or sound similar to legitimate site addresses. For example, instead of FBI.gov, a spoof site might use FBI.com or FBI.org. Pay special attention to addresses that end in .net or .org, as these types of domain names are far less common for online shopping sites.
If you want to dig a little deeper, you can check to see who registered the domain name or URL on sites like WHOIS. There's no charge for searches.
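The lookalike-domain trick described above can also be checked programmatically. The sketch below is illustrative only: the trusted-domain list and the 0.8 similarity threshold are arbitrary assumptions, not part of any real tool. It compares the second-level label of a candidate domain against a short list of known-good domains using Python's standard-library `difflib`, flagging near-matches that differ from the real address.

```python
from difflib import SequenceMatcher

# Hypothetical list of domains the user already trusts (not exhaustive)
KNOWN_GOOD = ["fbi.gov", "irs.gov", "amazon.com"]

def sld(domain: str) -> str:
    """Second-level label: 'fbi.gov' -> 'fbi'."""
    return domain.lower().rsplit(".", 1)[0]

def flag_suspicious(candidate: str, threshold: float = 0.8) -> list:
    """Trusted domains that 'candidate' closely imitates without
    matching exactly -- a typosquatting warning sign."""
    return [
        legit for legit in KNOWN_GOOD
        if candidate.lower() != legit
        and SequenceMatcher(None, sld(candidate), sld(legit)).ratio() >= threshold
    ]

print(flag_suspicious("fbi.org"))     # ['fbi.gov'] -- same name, different TLD
print(flag_suspicious("amaz0n.com"))  # ['amazon.com'] -- one-character swap
print(flag_suspicious("amazon.com"))  # [] -- exact match to a trusted domain
```

Real typosquatting detection also has to consider homoglyphs (for example, Cyrillic lookalike characters) and subdomain tricks, so treat this only as a starting point.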
Be careful how you pay
One good practice is to never pay for anything by direct bank transfer. If you transfer funds into a bank account and the transaction is a scam, you will never get a cent of your money back. Paying with a credit card offers you some degree of protection should things go wrong.
Too good to be true?
The promise of luxuries beyond your wildest dreams in exchange for a moment of your time or minimal effort is a successful fraudster practice. Always ask yourself if something sounds too good to be true.
Is the site selling tablets, PCs, or designer trainers for what is clearly a hugely discounted, unbelievable price? Is a health product's website promising larger muscles or extreme weight loss in just two weeks? What about a fool-proof way to make your fortune? You can't go wrong if you assume something that sounds too good to be true is not true.
Do an internet search
If you still can't make up your mind about a website, do some searching to see what other people on the internet are saying about it. A reputation — good or bad — spreads widely online. If others have had a bad experience with a website, they are probably talking about it online. Look for reviews on sites such as Trustpilot, Feefo, or Sitejabber to see if a site has scammed anyone in the past.
If you can't find a poor review, don't automatically assume the best, as a scam website could be new. Take all the other factors into consideration to make sure you aren't the first victim.
Always use a secure connection
When you visit a legitimate site that asks for financial or secure data, the company name should be visible next to the URL in the browser bar, along with a padlock symbol that signifies you're logged into a secure connection. If you don't see this symbol or your browser warns you the site doesn't have an up-to-date security certificate, that is a red flag. To increase your level of personal protection, always use first-rate security software to ensure you have an added layer of protection.
Also, take nothing for granted and don't just click links to open a web site. Instead, type in the web address manually or store it in your bookmarks. Malicious criminals will often buy domain names that sound and look similar at first glance. By typing them in yourself or storing the one you know is accurate, you give yourself added protection.
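The bookmark advice can be made concrete with a few lines of standard-library Python. The host names below are made-up placeholders; the point is that only an exact HTTPS match against an address you saved yourself gets opened, so lookalike hosts and plain-HTTP links are rejected.

```python
from urllib.parse import urlparse

# Hosts you have verified and saved yourself (illustrative placeholders)
BOOKMARKS = {"www.mybank.example", "accounts.example.com"}

def is_safe_to_open(url: str) -> bool:
    """Only open links that use HTTPS and exactly match a saved host."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in BOOKMARKS

print(is_safe_to_open("https://www.mybank.example/login"))      # True
print(is_safe_to_open("http://www.mybank.example/login"))       # False: not HTTPS
print(is_safe_to_open("https://www.mybank.example.evil.tld/"))  # False: lookalike host
```

Note the exact-match rule: `www.mybank.example.evil.tld` contains the trusted name as a prefix, which is exactly how lookalike domains fool a quick visual check.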
What to do if you become a victim of a scam website
If you fall victim to one of these malicious sites, you'll want to take immediate action. The chance to limit the attacker's ability to exploit you is still within your hands. These are a few ways you can reduce the damage of a successful scam:
- Stop communication with the scammer if you've been in touch.
- Find and halt any pending or ongoing payments to scammers.
- Cancel any compromised credit cards to prevent further unwanted charges.
- Update your most essential passwords and PINS, including banking and email accounts.
- Freeze your credit to keep scammers from misusing your identity for new account fraud.
- Report the scam to any service providers and institutions that may be able to help.
When attempting to stop future scams to yourself and others, notifying the appropriate authorities is crucial.
How to report scam websites
Knowing how to report a website is just as important as doing it, so be sure to inform yourself.
Above all else, be sure to report the scamming incident to any affected services like:
- Your banking institution and/or credit card company.
- The United States Internal Revenue Service (IRS).
- Online account providers, such as Google and Apple.
- E-commerce stores, like Amazon and eBay.
Google works to avoid promoting malicious results, but be sure to report the site to help their efforts as well.
Finally, be sure to reach out to your local police as they may be able to investigate locally sourced scams of this nature.
Kaspersky Internet Security received two AV-TEST awards for the best performance & protection for an internet security product in 2021. In all tests Kaspersky Internet Security showed outstanding performance and protection against cyberthreats. | <urn:uuid:fa94c610-6b1a-498e-ad69-ea54b2e475a8> | CC-MAIN-2024-38 | https://www.kaspersky.com/resource-center/preemptive-safety/scam-websites | 2024-09-18T07:39:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651886.88/warc/CC-MAIN-20240918064858-20240918094858-00123.warc.gz | en | 0.940434 | 2,675 | 2.65625 | 3 |
The need to secure your organization’s information has gone from an operational job to a strategic imperative. After all, the digital age has anchored data as the most important asset for any entity, no matter your industry.
Because of this, many bad actors want to get their hands on your data, through hacking, social engineering, and other techniques. And with Cybersecurity Ventures expecting that the cost of cybercrime will reach $10.5 trillion annually by 2025, there is little wonder that the World Economic Forum reported cybersecurity threats and IT infrastructure breakdown as some of the highest impact global risks in this decade.
On top of that, there are the regulations on personal data protection coming with hefty fines for violations, leaving very limited options for organizations who don’t see information security as a top priority.
(This article is part of our Security & Compliance Guide. Use the right-hand menu to navigate.)
What is Information Security?
The ISO/IEC 27000:2018 standard defines information security as the preservation of confidentiality, integrity, and availability of information. Often known as the CIA triad, these are the foundational elements of any information security effort.
It also considers other properties, such as authenticity, non-repudiation, and reliability.
The InfoSec CIA Triad
Let’s take a brief look at each property:
- Confidentiality. The property that information is not made available or disclosed to unauthorized individuals, entities, or processes. Here, we consider that data that should only be seen by those who have the right authorization; those who don’t are restricted. Think about your health records, HR payroll, or a company’s strategies falling in the wrong hands.
- Integrity. The property of accuracy and completeness of information. Here, we do not want to find our data is different from what we expect. For example, someone adding several zeros to their bank account balance, or changing the delivery address in your e-commerce account.
- Availability. The property of being accessible and usable on demand by an authorized entity. If information isn’t available, then the organization or its customers are hindered from achieving their objectives. For example, if you can’t access your emails, or a hospital cannot access patient health records.
- Authenticity. The property that an entity is what it claims to be. Here, we expect a person, application, or process is exactly who or what it identifies itself to be. For example, the user Michael has logged in with his username and password or token—and it is actually Michael.
- Non-repudiation. The ability to prove the occurrence of a claimed event or action and its originating entities. This involves providing undeniable proof that someone did something, or something happened. This property is particularly useful for assigning responsibility for actions, for instance, who created or approved a transaction.
- Reliability. The property of consistent intended behavior and results. Here, we expect our information systems to work as they should and be able to process the data in the right way over an expected period of time. Nobody likes a system that crashes when you need it most or is too slow to suit our needs.
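One common technical safeguard for the integrity property is a cryptographic checksum: store a hash of the data when it is written, then recompute it later to detect tampering. Here is a minimal sketch using Python's standard-library `hashlib`; the bank-balance record is invented purely to echo the integrity example above.

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest used as a tamper-evidence fingerprint."""
    return hashlib.sha256(data).hexdigest()

record = b"account=12345;balance=100.00"
stored = checksum(record)          # saved when the record was written

# Later: verify the record has not been altered
tampered = b"account=12345;balance=100000.00"
print(checksum(record) == stored)    # True  -> integrity intact
print(checksum(tampered) == stored)  # False -> data was modified
```

A plain hash only detects accidental or unauthenticated changes; protecting against an attacker who can also rewrite the stored hash requires a keyed construction such as an HMAC or a digital signature.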
Implementing Information Security
The following steps can help you effectively implement information security in your organization.
1. Identifying your assets
An asset is anything that is of value to an organization, such as people, systems, processes, office buildings etc.
In information security, asset identification is all about figuring out which assets handle the data and information that is critical to the success of the organization. Of course, in the digital age, the prime assets will be computing devices, whether on-premise or cloud based.
A formal asset management process will ensure that assets are identified, documented, and assigned owners for purposes of accountability. Asset identification is the first step in information security management. A business impact analysis exercise can be used to identify the criticality of assets and hence prioritize security efforts.
2. Assessing risk & vulnerability
Once you’ve identified the assets, you can then identify threats to the information contained in them. A risk and vulnerability assessment exercise will go a long way towards this effort:
- Risks are any effect of uncertainty on objectives and can be considered opportunities if positive or threats if negative.
- A vulnerability is defined as a weakness that can be exploited by one or more threats.
Risks would generally be documented in a risk register along with information such as:
- Likelihood and impact if the risk materializes
- Priority of the risk based on an agreed evaluation criteria
- Owner of the risk
- Proposed risk treatment plan
- Residual risk following the treatment
You would also document where risks are accepted by the organization in the risk register.
Vulnerabilities would be included as part of asset risk registers and would include information on how to address or contain any exploitative threats. To find public information on well-known vulnerabilities, refer to the Common Vulnerabilities and Exposures (CVE) references online.
(Learn more about risk and vulnerability assessments.)
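As a rough illustration of how the register fields above might be captured and prioritized, here is a sketch using a simple likelihood × impact score. The 1–5 scales, the example risks, and the owners are all invented for illustration; real organizations use their own agreed evaluation criteria.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain) -- invented scale
    impact: int       # 1 (negligible) to 5 (severe) -- invented scale
    owner: str
    treatment: str = "TBD"

    @property
    def priority(self) -> int:
        # Simple likelihood x impact scoring; real criteria vary by organization
        return self.likelihood * self.impact

register = [
    Risk("Ransomware via phishing email", 4, 5, "CISO", "Awareness training + EDR"),
    Risk("Data-center power outage", 2, 4, "Ops lead", "UPS + generator tests"),
    Risk("Lost unencrypted laptop", 3, 3, "IT manager", "Full-disk encryption"),
]

# Review risks highest-priority first
for r in sorted(register, key=lambda r: r.priority, reverse=True):
    print(f"{r.priority:>2}  {r.description} (owner: {r.owner})")
```

Sorting by the computed priority puts the ransomware scenario (4 × 5 = 20) at the top of the review list, which is exactly the prioritization role a risk register is meant to play.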
3. Implementing controls
Once you’ve assessed your risks and vulnerabilities, treating the risks is the logical next step. You want to ensure the properties listed earlier are maintained.
There are three categories of InfoSec controls:
- Physical controls. These address risks that impact physical locations such as offices and data centers. They include gates, locks, guards, mantraps, CCTV, and biometric access passes, among others.
- Technical controls. These are technology centric controls that address vulnerabilities or contain risks. They include Intrusion Detection and/or Prevention Systems (IDS/IPS), firewalls, encryption, anti-malware software, and Security Information and Event Management (SIEM) solutions.
- Administrative controls. These are the people-centric controls which include information security policies, access rights reviews, segregation of duties, and business continuity plans, among others.
A different approach to looking at InfoSec controls is based on their position in dealing with threats that materialize. For instance:
- Preventive controls try to stop the threat from materializing. Examples are firewalls, access control and acceptable use policies, fences, etc.
- Detective controls spot a threat as soon as it materializes, such as CCTV, IDS/IPS, and SIEM.
- Corrective controls reverse the impact of the materialized threats such as anti-malware software.
Organizations usually deploy multiple types of controls to cover all bases. This approach is called defense-in-depth, where layers upon layers of controls are used to limit the impact of materialized threats or exploited vulnerabilities.
For instance, firewalls block access by malicious actors, but if they are penetrated, you still have a segregated network and encrypted information that is backed up in an alternate location.
4. Testing & training
Once the controls are in place, there must be a mechanism to regularly test the controls to ensure they remain effective in the face of evolving threats. This can include:
- Audit tests
- Penetration tests
- Disaster recovery tests
Results of these tests should be documented. Following review, you’ll want to agree on remedial or improvement actions and then track these through to implementation.
The human firewall remains the best form of defense, and failure to keep one’s staff and customers aware of information security can render the best security controls useless. A regular program to educate users on threats to information security is critical. A variety of means can be employed, such as:
- Online training
- Security bulletins
- Phishing tests
- And more
InfoSec is a critical policy
For any business or organization, there are a few IT policies that are absolutely critical—and InfoSec is one. Don’t wait until it’s too late to protect your data and information assets. | <urn:uuid:06176576-db52-4af5-be17-b66e521ec458> | CC-MAIN-2024-38 | https://www.bmc.com/blogs/infosec-information-security/ | 2024-09-20T17:58:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00823.warc.gz | en | 0.937942 | 1,623 | 2.59375 | 3 |
Technology has transformed the world we live in. Every organization and company needs to protect its network so that proprietary information stays safe from any type of cyber-attack. Network security also protects your personal information, especially if you use your network for both personal and business purposes.
Therefore, there are several things you should know about your network’s security and how you can protect yourself from cyber-attacks and breaches.
What You Should Know About Network Security
Network security encompasses all activities that are intended to ensure the integrity and functionality of your network and data, including hardware and software. Effective network security will identify cyber threats and prevent them from having an impact on your network.
There are several types of security for your network, but you should utilize the type of security that is most beneficial to your business.
- Network access control is one type of network security that only allows access to certain users. This is beneficial because you can block access or certain users from various parts of your network or data.
- Several types of applications also exist that are designed to increase the security of your networks. Anti-virus and anti-malware software scan for malware upon entry, then continuously track files afterward to find anomalies, remove malware, and fix any damage.
- There are also firewalls, VPNs, and other forms of network security that can protect your network and sensitive data.
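To illustrate how a firewall applies its rule set, here is a toy first-match rule evaluator in Python. The rules shown (allow only web traffic, deny everything else) are a simplified illustration; real firewalls also match source and destination addresses, traffic direction, and connection state.

```python
# Each rule: (action, protocol, port). First match wins; default deny.
RULES = [
    ("allow", "tcp", 443),   # HTTPS to the web server
    ("allow", "tcp", 80),    # HTTP to the web server
    ("deny",  "any", None),  # everything else is blocked
]

def evaluate(protocol: str, port: int) -> str:
    """Return the action of the first rule matching this traffic."""
    for action, rule_proto, rule_port in RULES:
        proto_ok = rule_proto in ("any", protocol)
        port_ok = rule_port is None or rule_port == port
        if proto_ok and port_ok:
            return action
    return "deny"  # implicit default if no rule matched

print(evaluate("tcp", 443))  # allow
print(evaluate("tcp", 22))   # deny (SSH is not permitted)
```

The "first match wins, default deny" design sketched here is the same principle most firewall rule sets follow: anything you did not explicitly allow is blocked.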
Overall, security on your network helps protect your proprietary information. Because your business depends on this information, having appropriate security in place ultimately protects your business and reputation. | <urn:uuid:d88ee8a8-ee94-48af-aca5-5c9984f95690> | CC-MAIN-2024-38 | https://www.ccsipro.com/blog/know-about-your-network-security/ | 2024-09-20T17:46:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00823.warc.gz | en | 0.942764 | 325 | 2.84375 | 3 |
08 Aug Rebooting: What it is and How You Should Do It
Understanding the different rebooting options and their implications allows users and IT support professionals to maintain the stability and efficiency of devices while safeguarding critical data and files.
What is Rebooting?
Rebooting refers to the process of restarting a computer or electronic device to refresh its operating system, clear temporary data, and resolve various software or hardware issues.
What is the Benefit of Rebooting?
It is a common troubleshooting step to resolve problems such as system freezes, slow performance, or software glitches. Rebooting can also be performed as part of routine maintenance to keep the system running smoothly.
How Many Different Types of Reboots Are There?
There are two primary types of reboot protocols:
- Hard reboot
- Soft reboot
A hard reboot, also known as a hard reset or force restart, involves forcibly powering off the device and then turning it back on. Hard reboots do not shut down the operating system or close applications correctly, leading to potential data loss or file corruption. However, the hard reboot is effective in situations where the system is unresponsive or frozen.
The soft reboot, also known as simply “reboot,” involves restarting the device using software commands. The process allows the operating system to perform a proper shutdown by closing running applications and saving necessary data before restarting. Soft reboots are less likely to cause data loss or corruption compared to hard reboots.
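As a sketch of the distinction, a soft reboot is simply a polite request to the operating system. The snippet below is an illustration (not part of any product mentioned here): it builds the standard graceful-restart commands for Windows and Linux/macOS and defaults to a dry run, since actually running them requires administrator privileges and will restart the machine:

```python
import subprocess
import sys

def soft_reboot_command(platform: str) -> list[str]:
    """Return the OS command for a graceful (soft) reboot."""
    if platform.startswith("win"):
        # /r = restart, /t 0 = no delay; Windows still closes apps cleanly.
        return ["shutdown", "/r", "/t", "0"]
    # On Linux/macOS, shutdown -r asks the OS to stop services first.
    return ["shutdown", "-r", "now"]

def soft_reboot(dry_run: bool = True) -> list[str]:
    cmd = soft_reboot_command(sys.platform)
    if not dry_run:
        subprocess.run(cmd, check=True)  # needs admin/root privileges
    return cmd

print(soft_reboot())  # dry run: just shows the command that would run
```

A hard reboot, by contrast, bypasses any such command entirely (holding the power button), which is why the OS never gets a chance to save data first.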
What is Sleep Mode?
In sleep mode, the computer or device enters a low-power state. While in this mode, the screen and other components are turned off to save energy. The system remains in a partially active state, allowing quick wake-up times.
During sleep, the operating system and running applications are stored in memory, so the device can quickly resume its previous state when awakened. You’ll be able to pick right back up where you left off with minimal downtime since you don’t have to wait for a time-consuming reboot.
Sleep mode consumes minimal power and is most useful for short periods of inactivity. However, for standard machines, too much sleep might not be great for overall performance.
Turning your computer off completely at least once a week is a best practice to refresh your machine’s operations.
What Happens When My Computer Hibernates?
Hibernate is a deeper power-saving state than sleep. When a computer hibernates, the contents of its RAM (random-access memory) are saved to the hard drive or SSD, allowing the system to power down completely.
Powering down helps to conserve energy while preserving the open applications and data. When the computer is powered back on, it restores the saved state from the disk to RAM, resuming the system to where it was before entering hibernation.
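The save-to-disk-and-restore idea behind hibernation can be pictured with a toy analogy in Python, where a dictionary stands in for RAM contents:

```python
import json
import os
import tempfile

# Toy analogy for hibernation: "RAM" (a dict) is written to disk,
# the machine can power off completely, and the state is restored on wake.

def hibernate(ram_state: dict, path: str) -> None:
    with open(path, "w") as f:
        json.dump(ram_state, f)   # persist state before powering down

def wake(path: str) -> dict:
    with open(path) as f:
        return json.load(f)       # restore exactly what was saved

ram = {"open_docs": ["report.txt"], "cursor_pos": 42}
path = os.path.join(tempfile.gettempdir(), "hiberfile.json")
hibernate(ram, path)
restored = wake(path)
print(restored == ram)  # True: the session resumes where it left off
```

Real hibernation works at the operating-system level (writing all of RAM to a hibernation file), but the principle is the same: nothing is lost because everything is on disk before power is cut.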
What Happens During the Reboot Process?
During the reboot process, whether it’s a hard or soft reboot, the operating system goes through its boot sequence:
- initializing hardware
- loading essential system files
- launching necessary services
This process helps the system regain reliability by clearing temporary data and freeing up resources that might have been tied up by malfunctioning applications or processes. It also allows the operating system to start fresh, potentially resolving any software-related issues.
Are There Risks Associated with Rebooting Your System?
As mentioned earlier, a hard reboot doesn’t allow applications and the operating system to save data properly before shutting down. This sudden interruption can lead to data loss or corruption, especially if important files were not saved at the time of the hard reset.
Potential loss of critical data is why it’s generally recommended to use soft reboots whenever possible, because they allow the system to shut down properly, reducing overall risk.
How Can Virtualized Desktops Reduce Reboot Related Data Risk?
When end users are using virtual desktop infrastructure (VDI) solutions, the actual computing and processing occur on remote servers rather than their local machines. When using VDI, the end user interacts with a virtual desktop hosted on a server, which allows for better management, security, and data protection.
Since the virtual desktop environment is hosted remotely, there isn’t direct access to the underlying hardware or power controls.
This means users cannot perform hard resets or forcibly shut down their virtual machines like they might do with traditional physical PCs. Instead, they typically have controlled ways of shutting down or rebooting the virtual desktop through the VDI software, allowing for a more structured and less risky process.
Rebooting is a fundamental process used to refresh, troubleshoot, and maintain the reliability of personal computers and electronic devices.
A hard reboot, though effective in resolving unresponsive systems, carries the risk of data loss and file corruption due to its abrupt nature while a soft reboot offers a more graceful approach, reducing the likelihood of data loss by allowing applications to close properly before restarting.
Sleep and hibernate modes are power-saving alternatives, with hibernate offering greater energy conservation while preserving the system’s state.
The adoption of virtualized desktops means users are less likely to unintentionally cause downtime and data loss, as there are controlled methods for shutting down or rebooting virtual machines.
We Have Experts on Staff to Answer Your Questions About Optimizing System Performance
Interested in finding out how partnering with an award-winning cloud hosting provider can help you achieve your business goals while reducing your overall technology costs?
Our trained team of cloud computing experts can help by answering all your questions about how to align your technology with industry guidelines while maintaining strict compliance standards.
We’ll show you how to use our efficient cloud-based storage and processing solutions in your business to improve your security, productivity, and profitability.
Let us demonstrate exactly what CyberlinkASP can do for you – using your own data and workflows. | <urn:uuid:f98a3905-a087-4a46-a1fd-b46b6f9dbfe8> | CC-MAIN-2024-38 | https://www.cyberlinkasp.com/insights/rebooting-what-it-is-and-how-you-should-do-it/ | 2024-09-08T17:09:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00187.warc.gz | en | 0.920096 | 1,213 | 3.21875 | 3 |
Learning management systems (LMS) use cases
Learning management systems (LMS) are utilized across various industries to achieve educational and organizational objectives. Here are three examples:
Manufacturing industry
A manufacturing company implements an LMS to provide factory workers with mandatory safety training and compliance certification.
The LMS delivers interactive modules, simulations, and assessments to educate employees on safety protocols, hazard identification, and emergency procedures.
Improved workplace safety standards, reduced accident rates, and enhanced regulatory compliance demonstrate the LMS’s impact on workforce safety and operational efficiency.
Financial services industry
A financial services firm adopts an LMS to deliver regulatory compliance training, risk management courses, and professional development programs for employees.
The LMS offers self-paced modules, virtual classrooms, and certification exams to educate staff on industry regulations, ethical standards, and financial best practices.
Enhanced compliance adherence, reduced regulatory risks, and increased employee competence contribute to the firm’s reputation and operational resilience.
Higher education institutions
A university deploys an LMS to support distance learning, facilitate online courses, and enhance student engagement through interactive learning experiences.
The LMS integrates multimedia content, discussion boards, and assessment tools to deliver academic curricula, support collaborative projects, and assess student progress.
Expanded access to educational resources, improved student retention rates, and enhanced academic outcomes illustrate the LMS’s role in modernizing higher education delivery. | <urn:uuid:cca98031-abe1-46d8-8fdb-764b8a7a26c1> | CC-MAIN-2024-38 | https://www.digital-adoption.com/glossary/learning-management-system-lms/ | 2024-09-08T16:56:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00187.warc.gz | en | 0.910276 | 287 | 2.828125 | 3 |
What are the board of directors’ responsibilities to their shareholders?
The structure of publicly listed corporations and the marketplace is strategically designed to benefit all parties and society in general. Board directors, managers, shareholders and stakeholders all play a specific role in the marketplace. Boards of directors have specific responsibilities to their shareholders.
Each group has specific duties and responsibilities that correspond to their role. Occasionally, there is a slight overlap in roles. For example, shareholders are demanding more say in issues that have traditionally been board matters. As their title suggests, board directors have many duties related to directing the operation, so it seems fitting that they have many responsibilities to their shareholders. As a rule, when everyone stays in their own lane, everyone stands to profit in one or more ways.
Private and public corporations offer shares of their companies to investors, which provides their companies with operating revenue. The relationship between private companies and their shareholders is most notably outlined in the corporate charter, shareholder agreements and other shareholder provisions.
In public and privately owned corporations, most shareholders have common rights and an established relationship with the company and its board related to their shareholder ownership.
Clarifying Roles and Ownership Perspectives Between Board Directors and Shareholders
To clarify roles in corporate partnerships in the most simplistic of terms, the board of directors is responsible for overseeing the affairs of the company and protecting the interests of the shareholders. Senior managers of the company are responsible for managing the day-to-day operations of the corporation. Shareholders are most interested in making a financial return on their investment.
Shareholders are often described as being 'owners' of the corporations in which they invest. That is partially true, and perhaps wholly true, depending on how one defines 'owner.' In the strictest sense of the word, shareholders are only partial owners of a company because they don't solely retain full rights and responsibilities. Shareholder rights have been increasing as a way of ensuring good governance. For example, shareholder meetings provide a venue for shareholders to stay informed about pertinent issues and vote on them. Shareholders of private companies are even less entitled to information because private companies aren't bound by the same rigid federal regulations that apply to publicly held companies.
While boards of directors maintain the bulk of control over corporations, shareholders of private and public companies can often vote out directors if they garner a majority and make a strong enough push for it. While shareholders lack director control over the corporations they invest in, their degree of ownership gives them some degree of power over board director nominees and compensation issues.
Board of Directors' Responsibilities to Shareholders
The primary responsibilities of board directors to shareholders relate to their fiduciary duties, including the duty of care, duty of loyalty and duty of obedience. These duties require board directors to place the best interests of the company ahead of their own. They must make decisions for the company and act in a manner that an ordinary, prudent person would. The duty of obedience requires boards to ensure that the company remains in compliance with all laws and regulations.
Another responsibility that board directors have to shareholders is to compose and maintain a diverse, independent and highly competent board. Sound decision-making only comes from a wide variety of perspectives. Shareholders are entitled to know that the board overseeing the company's operations is well-qualified and up to the task.
Boards are required to take minutes of their meetings to detail the issues that they're working on. Shareholders may request copies of board meeting minutes if they're looking for assurance that the board is actively fulfilling their duties of oversight and strategic planning. This doesn't mean that shareholders have any say in directing the issues the board chooses to tackle or the way they prioritize issues.
Shareholders look for assurance that companies are financially strong currently and will continue to grow and prosper. The board of directors has an explicit responsibility to form a short-term plan of one to two years to ensure sustainability. In addition, shareholders are interested in long-term growth for continued security and prosperity.
Board directors also have a responsibility to oversee all departments and aspects of the corporation. The responsibility includes making sure operations are running efficiently, company operations are in alignment with the organization's purpose, there are no incidences of fraud, they communicate the corporate culture throughout the organization, and they conduct oversight over all departments and operations of the company.
The annual audit gives the shareholders a clear picture of the company's financial status and outlook. Boards of directors are accountable to shareholders to conduct an annual audit by independent directors that is accurate, complete and timely. In today's climate, shareholders also expect financial records that are concise, readable and easily understandable.
The board of directors is responsible for hiring, monitoring and firing the CEO and other senior management executives. Shareholders expect C-suite-level managers to be competent, knowledgeable and capable of carrying out the board's strategic plans. Boards owe it to their shareholders to provide the necessary oversight of senior management.
The company's reputation is an important concern for shareholders. They rely on the board of directors to protect the company from fraudulent practices, bad press and other issues that can harm a company's reputation. Boards must work to identify reputational risks that could result in lost revenue, increased operating expense, capital or regulatory costs, and destruction of shareholder value.
In addition to shareholders having more say in board decisions, another place where roles become slightly blurred is that major shareholders are often also part of upper management. Shared roles can become problematic in the boardroom when boards and shareholders don't share the same perspectives on short-term strategies, long-term strategies or both.
A board portal system by Diligent Corporation is the best way for boards to manage their many responsibilities to their shareholders. Governance Cloud is a suite of governance software tools that assist boards in governance activities and responsibilities with tools like board self-assessments, entity management tools, secure messaging, agenda and minutes software, D&O questionnaires and more. As governance best practices, laws and regulations continue to evolve, Diligent software designers are staying ahead of the curve with new features and products to fully support good governance at every stage. | <urn:uuid:a3c5b311-37c6-40ab-8bc7-5ee640157082> | CC-MAIN-2024-38 | https://www.diligent.com/resources/blog/what-are-the-board-of-directors-responsibilities-to-their-shareholders | 2024-09-08T15:25:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00187.warc.gz | en | 0.967206 | 1,260 | 3.015625 | 3 |
March 27, 2023
IT leaders offered tributes to Intel co-founder Gordon Moore, who died on Friday at his home in Hawaii. Moore, Intel’s longest-serving chairman and CEO, is credited with commercializing PCs and laptop computers.
Gordon Moore/Courtesy: Intel Corporation
Moore's Law set the pace of change in the industry for decades. Moore famously predicted in 1965 that the number of transistors on a circuit would double yearly. In 1975, Moore revised that prediction to every two years.
“For over 40 years, Intel engineers have continually innovated to squeeze more and more transistors onto ever-smaller chips and maintain the pace of Moore’s Law,” Intel executive VP and general manager of technology development Ann Kelleher wrote last year.
Moore co-founded Intel with Robert Noyce in 1968. According to his obituary posted by Intel, Moore began his career conducting research at Johns Hopkins Applied Physics Laboratory. In 1956, Moore joined Shockley Semiconductor, and a year later, Noyce and six other Shockley colleagues co-founded Fairchild Semiconductor. Eleven years later, Moore and Noyce co-founded Intel.
“He was instrumental in revealing the power of transistors and inspired technologists and entrepreneurs across the decades,” Intel CEO Pat Gelsinger said in a statement. “We at Intel remain inspired by Moore’s Law and intend to pursue it until the periodic table is exhausted.”
A ‘Giant’ in the Computer World
Tributes to Moore poured in on social media over the weekend. Industry analyst Tim Bajarin, founder and president of Creative Strategies, posted on Facebook: “He was a giant in the semiconductor and computer world and leaves behind an amazing legacy.”
On Sunday, IBM chairman and CEO Arvind Krishna recalled in a post on Twitter Moore's impact on computing. "Today, all of IBM is celebrating the remarkable life of Gordon Moore, the self-described 'accidental entrepreneur' whose $500 investment in the nascent microchip industry helped build both the digital world we know — and the one that's still on the horizon," Krishna wrote.
Many who knew Moore and those who followed his work also weighed in on Twitter throughout the weekend. Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania, shared in a post that he wrote his first academic paper on Moore's Law. "He was right," Mollick noted. "And led to our accelerating world."
Responding to Mollick was Ryan Shaw, a software developer, who shared that he was born in the early 1980s. “I’ve been using personal computers since I was a kid, and I work as a software dev,” he noted. “Moore’s Law is the story of my life.”
Sutter Hill Ventures managing director Sam Pullara recalled: “I once got the opportunity to ask Gordon Moore a question, ‘What is the equivalent of Moore’s Law for software?’ His response, ‘number of bugs doubles every year.’ Great man.”
Industry colleagues also recalled Moore’s legacy. Lisa Su, CEO of Intel’s largest rival, AMD, shared her respect for Moore. “Gordon Moore and ‘Moore’s Law’ inspired all of us as students, engineers, and leaders in the semiconductor industry,” Su noted. “He was a true visionary and deeply admired.”
Former AWS and IBM channel executive Sandy Carter agreed. “Gordon Moore’s predictions in 1965 about the exponential growth of computing power have stood the test of time,” she wrote. “And I honor his life as his insights continue to drive innovation and progress in the tech industry.”
Gordon Moore Life Timeline
1950: Moore earned his bachelor’s degree in chemistry from University of California at Berkeley. Later that year, Moore married Betty Irene Whitaker. In 1954 he received his Ph.D. in chemistry from the California Institute of Technology.
1956: Moore joined Shockley Semiconductor but left the next year with Robert Noyce to form Fairchild Semiconductor.
1965: Moore's Law, his prediction that the number of transistors on a circuit would double every year, was published. A decade later he would revise that to every two years.
1968: Moore and Noyce co-founded Intel, which launched its first product, a functional 64-bit random access memory, a year later. In 1971, Intel launched the Intel 4004, the first microprocessor. In 1975, Intel named Moore president; he added the CEO title in 1978 and became chairman in 1979, a role he held until 1987, when Andy Grove succeeded him as CEO.
2002: U.S. President George W. Bush awarded Moore the Presidential Medal of Freedom.
2022: Intel's Ronler Acres campus in Oregon was renamed Gordon Moore Park at Ronler Acres.
Surf Smarter To Stay Secure Online
Protecting your security and keeping your privacy online is possible. It takes more of a commitment than just keeping your antivirus software updated, however.
John Okoye, of Techopedia, suggests that your own browsing habits have as much to do with security as your security software. Here are some of the ways you can protect yourself.
- Understand Your Browser
Do a little research and discover how the internet browser you’re using stores your data. It may be tracking your history and selling it to advertisers without your knowledge. However, many browsers have options to surf privately without saving your history or data.
- Proper Spam Techniques
Even if you are extremely careful about who you give your email address to, you'll still receive your fair share of spam emails. When one appears in your inbox, don't respond. That includes following the 'unsubscribe' link. Once spammers learn that your email is active, you'll actually receive more spam than before. Also, be sure to mark the email as spam rather than just deleting it. If you find that more spam emails are making it through your spam filter, consider adding additional rules or changing email providers.
- Be Careful With Social Networks
Social media profiles are a resource for hackers. By learning your birthday, address, phone number and email address, they can intelligently hack into other accounts, or send you phishing scams. Be sure to take advantage of security options to keep your information private and don’t over share. There’s usually no reason to include a phone number on your Facebook page.
- Be Smart About Email
Do some research and find a secure email provider, one that protects you from spam and doesn't save your emails in a log. Your email should also be encrypted to ensure that no one but the intended recipient can read it. You may also consider having multiple email accounts. That way, when registering for accounts on ecommerce sites or anywhere you don't want to use your primary or business email, you can use a secondary account.
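The "rules" mentioned in the spam tip above can be pictured as simple pattern matches against a message's sender or subject. This toy Python filter is purely illustrative (real providers combine many richer signals, and the patterns and domain here are made up):

```python
import re

# Hypothetical rule-based spam filter: each rule is a regex tested
# against the sender address or the subject line.

SPAM_RULES = [
    re.compile(r"free money", re.IGNORECASE),
    re.compile(r"@suspicious-deals\.example$"),  # made-up spam domain
]

def is_spam(sender: str, subject: str) -> bool:
    """Flag a message if any rule matches its sender or subject."""
    return any(rule.search(sender) or rule.search(subject)
               for rule in SPAM_RULES)

print(is_spam("promo@suspicious-deals.example", "Hello"))  # True
print(is_spam("friend@mail.example", "Lunch tomorrow?"))   # False
```

Adding a rule is just appending another pattern, which is essentially what you do when you tighten the filter settings in your mail client.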
These are just some of the ways you can take action to stay safer and more secure online. To beef up the security for your home PC or your business network, call Geek Rescue at 918-369-4335.
August 30th, 2013 | <urn:uuid:168e320c-08cf-4bc2-9352-1783e9496fbd> | CC-MAIN-2024-38 | https://www.geekrescue.com/blog/2013/08/30/surf-smarter-to-stay-secure-online/ | 2024-09-12T06:37:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00787.warc.gz | en | 0.939757 | 494 | 2.53125 | 3 |
This lesson will help the viewer understand why personal data is valuable, what value it has and how it is being treated in general in our society nowadays.
Open access to personal data, doxing, cyberstalking and cyberbullying are tied together.
Let us explain how.
Doxing isn’t always done with negative intent – it often serves legal purposes. For instance, for advanced policing techniques.
However, regardless of the intent, the practice of abusing a person’s privacy is unethical and the harm suffered is very real.
Here are some recommendations to stay ethical:
The combination of harassment, cyberbullying and cyberstalking that often accompanies doxing takes a huge emotional toll on the person targeted.
Anyone considering the act of doxing should appreciate the potential harm it can cause the victim before acting.
Your data is valuable, and, in fact, there are some people looking to profit off yours. In our next lesson, we will tell you more about those who sell and buy information.
Though there are many benefits of the new phenomenon that is ChatGPT, it has also caused everyone’s exposure to cybercrime to significantly increase. Florian Malecki, Executive Vice President Marketing, Arcserve, explores the urgent security questions ChatGPT poses and why organisations should invest in solid cybersecurity measures.
ChatGPT, developed by the Artificial Intelligence lab OpenAI, is a human-like chatbot causing a global sensation. It is now the fastest-growing app in history, hitting 100 million active users in just two months — way faster than the nine months it took previous record-holder, TikTok, to reach that mark.
This powerful, open-source tool can do whatever you ask, including writing school essays, drafting legal agreements and contracts, solving complex maths problems and passing the medical licensing exam. It also has the potential to revolutionise the way businesses operate. With ChatGPT, you can generate reports quickly and handle customer service requests efficiently. The tool can even write code for your next product offering, conduct market analysis and help build your company website.
But while ChatGPT offers many benefits to businesses, it poses urgent security questions. One of the critical risks associated with this technology is the power it gives cybercriminals with no coding experience to create and deploy malicious software. With ChatGPT, anyone with bad intentions can quickly develop and unleash malware that wreaks havoc on companies.
Security firm Check Point Research reported that within weeks of ChatGPT's release, individuals in cybercrime forums, including those with limited coding skills, utilised it to create software and emails for espionage, ransomware attacks and malicious spamming. Check Point said it's still too early to tell if ChatGPT will become the go-to tool among Dark Web dwellers. Still, the cybercriminal community has demonstrated a strong interest in ChatGPT and is already using it to develop malicious code.
In one example reported by Check Point, a malware creator revealed in a cybercriminal forum that they were using ChatGPT to replicate well-known malware strains and techniques. As evidence, the individual shared the code for a Python-based information stealer that they developed using ChatGPT. The stealer searches, copies and transfers 12 common file types from a compromised system, including Office documents, PDFs and images.
ChatGPT increases everyone’s exposure to hacking
Bad actors can use ChatGPT and other AI-writing tools to make phishing scams more effective. Traditional phishing messages are often easily recognisable because they are written in clumsy English. But ChatGPT can fix this. Mashable tested ChatGPT’s ability by asking it to edit a phishing email. Not only did it quickly improve and refine the language, but it also went a step further and blackmailed the hypothetical recipient without being prompted to do so.
While OpenAI says it has strict policies and technical measures in place to protect user data and privacy, the truth is that these may not be enough. ChatGPT scrapes data from the web — potentially data from your own company — which brings security risks. For instance, data scraping can result in sensitive information, such as trade secrets and financial data, being exposed to competitors. There can also be reputational damage if the information obtained through data scraping is inaccurate. Moreover, when data is scraped, it can open systems to vulnerabilities that malicious actors can exploit.
Given that the attack surface has dramatically expanded due to the advent of ChatGPT, what impact does this have on your security posture? Previously, small and mid-sized businesses may have felt secure, thinking that they were not worth the trouble of hacking. However, with ChatGPT making it easier to create malicious code at scale, everyone’s exposure to cybercrime has significantly increased.
ChatGPT demonstrates that while the number of security tools available to protect you may be increasing, these tools may not be able to keep pace with emerging AI technologies that could increase your vulnerability to security threats. Given the spiralling threat of cybercrime, every business needs to be aware of the potential risks posed by ChatGPT and other advanced AI systems — and take steps to minimise those risks.
Measures you can take to protect yourself
Your first step is to understand just how vulnerable you are. Penetration testing, also known as pen-testing, can help protect your data by simulating a real-world attack on your company’s systems, networks, or applications. This exercise aims to identify security vulnerabilities that malicious actors could exploit so you can close them. By exposing your weaknesses in a controlled environment, pen-testing enables you to fix those weaknesses, improve your security posture and reduce the risk of a successful data breach or other cyberattacks. In the new world of ChatGPT, penetration testing can play a crucial role in helping you safeguard your data and ensure its confidentiality, integrity and availability.
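One small building block of a penetration test is checking whether a TCP port on a host is reachable. The sketch below shows a basic connect check in Python; it is illustrative only, and you should probe only systems you are authorized to test:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Basic TCP connect check, one building block of a port scan.
    Only probe systems you are authorized to test."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means a connection succeeded

# Example: check a few common service ports on a machine you own.
for p in (22, 80, 443):
    print(p, port_is_open("127.0.0.1", p))
```

An unexpectedly open port found this way is exactly the kind of finding a pen-test report flags so the weakness can be closed before an attacker exploits it.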
Companies must also double down on their data resilience strategy and have a solid data protection plan. A data resilience plan outlines the steps a business should take to protect its critical data and systems and how it will restore normal operations as quickly and efficiently as possible if a data breach occurs. It also provides a roadmap for responding to cyberthreats, including detailed instructions for securing systems, backing up data and communicating with stakeholders during and after an incident. By putting a data resilience plan in place, businesses can minimise the impact of cyberthreats and reduce their risk of data loss, helping to ensure their organisation’s ongoing success and survival.
Another way of stopping ChatGPT-enabled script kiddies and bad guys is through immutable data storage. Immutability means data is converted to a write-once, read many times format and can’t be deleted or altered. There isn’t any way to reverse the immutability, which ensures that all your backups are secure, accessible and recoverable. Even if attackers gain full access to your network, they will still not be able to delete the immutable copies of your data or alter the state of that data.
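The write-once, read-many idea can be illustrated with a toy Python class. This is a simplification for intuition only: real immutable backup products enforce WORM semantics at the storage layer, where even an attacker with full network access cannot bypass them:

```python
class ImmutableStore:
    """Toy write-once-read-many (WORM) store: once a backup is written
    under a key, it can never be modified or deleted."""

    def __init__(self):
        self._objects = {}

    def write(self, key: str, data: bytes) -> None:
        if key in self._objects:
            raise PermissionError(f"{key} is immutable and cannot be changed")
        self._objects[key] = data

    def read(self, key: str) -> bytes:
        return self._objects[key]

store = ImmutableStore()
store.write("backup-2023-03-01", b"snapshot bytes")
print(store.read("backup-2023-03-01"))

try:
    store.write("backup-2023-03-01", b"tampered")  # ransomware-style attempt
except PermissionError as e:
    print("blocked:", e)
```

Because overwrites and deletions are rejected by design, a clean copy of every backup is always available for recovery.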
While ChatGPT offers benefits to businesses, it also poses significant security risks. Companies must be aware of these risks and take steps to minimise them. They should invest in solid cybersecurity measures and stay informed about the latest security trends. By putting the proper protection in place, organisations can realise the many benefits of ChatGPT while defending themselves against those who use the tool for malicious purposes.
Click below to share this article | <urn:uuid:860da8eb-116a-451d-a168-504b5e21e765> | CC-MAIN-2024-38 | https://www.intelligentciso.com/2023/03/20/the-security-risks-of-chatgpt-how-businesses-can-safeguard-their-data/ | 2024-09-15T23:24:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00487.warc.gz | en | 0.942646 | 1,333 | 2.5625 | 3 |
All over the world, indigenous and non-indigenous languages face extinction. The Iroquoian language Oneida, which counts a number of its speakers in New York, the Jewish language Yiddish, as well as Sicilian and Belarusian, are all in danger.
The province of British Columbia in Canada is unique in that it is home to 60% of the country’s indigenous languages. Unfortunately, only about 4% of the indigenous people in British Columbia are fluent in their native languages, and most of those who are fluent are elderly. All of the languages face threats to their vitality.
In order to prevent the extinction of these languages, the First Peoples’ Cultural Council (FPCC), joined with the communities they advocate for, built a language revitalization platform called FirstVoices.
FirstVoices was developed to allow language revitalization champions to document their indigenous languages online and allow community members to access the information. It’s built on top of Nuxeo’s open source cloud-native content services platform and houses 38 dialects of 28 language groups on this single integrated platform.
All of the data is collected by community language champions, elders, and linguists, who are paid by FPCC-raised grants, and then it is housed in Canada on an Azure cloud.
“We’re spending our money in the communities, instead of giving thousands of dollars to developers,” says Tracey Herbert, CEO of the First Peoples’ Cultural Council. “I really think collaboration is key to language revitalization, sharing information, ideas.”
The FirstVoices’ web-based platform, which is responsive on mobile devices, stores around 400,000 JSON objects. Those objects include words, phrases, songs, stories, audio recordings, pictures, and videos.
“The audio recordings are all elders, people speaking their language the way it should be spoken,” says Daniel Yona, development manager for FPCC.
The suite of language revitalization services FPCC and First Nations people of B.C. developed extends beyond the language archive. There are dictionary and tutor apps (available offline) for iPhone and Android as well as a keyboard for First Nations languages.
As with many native languages around the world, some Indigenous languages in the B.C. province don't use the Latin alphabet, and their letter ordering differs from the Latin alphabet's. “We had to create a custom representation of their words in order to be able to index them in a way that would be intuitive for those searching for them,” says Lisa Marcus, VP of Government, Nuxeo. “All of the language-sensitive aspects of the Nuxeo Platform are customizable -- the labels in the user interface, the data model (including vocabularies), the search index tokenizer, email templates, etc.”
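The core of such a custom sort order can be sketched briefly (a simplified illustration with a made-up alphabet, not FirstVoices' actual indexing code): map each character, or multi-character letter, to its rank in the language's own alphabet and sort words by that key.

```python
# Hypothetical alphabet for illustration -- a real First Nations
# orthography defines its own character inventory and letter ordering.
CUSTOM_ALPHABET = ["a", "á", "b", "ch", "d", "e", "é", "g", "gw", "h"]

# Longer digraphs must be matched before their single-letter prefixes.
_UNITS = sorted(CUSTOM_ALPHABET, key=len, reverse=True)
_RANK = {ch: i for i, ch in enumerate(CUSTOM_ALPHABET)}

def sort_key(word):
    """Turn a word into a tuple of alphabet ranks for collation."""
    key, i = [], 0
    while i < len(word):
        for unit in _UNITS:
            if word.startswith(unit, i):
                key.append(_RANK[unit])
                i += len(unit)
                break
        else:
            raise ValueError(f"character at {i} in {word!r} not in alphabet")
    return tuple(key)

words = ["gwa", "ga", "chad", "ba"]
print(sorted(words, key=sort_key))  # ['ba', 'chad', 'ga', 'gwa']
```

Note that in this custom order "gw" sorts as its own letter after "g", which the default Unicode string comparison would not do.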
Yona says something that drew them to Nuxeo was that they wanted a company with a mature security model that would allow them to be able to implement security on a property or document level.
“Because of the cultures they serve they have many words that are sacred words -- I equate those to things like PII [personally identifiable information] for a business user. Those words are very cherished and need to be secure and only accessed by those who have the ability and right to see them,” says Marcus.
Something that makes this language research and archival project unique is that the community members own the archived data.
“Historically, linguists have come into communities and documented the languages, but also taken the knowledge from the indigenous peoples and copyrighted that knowledge in dictionaries,” says Herbert.
Because the FPCC and the First Nations community members built FirstVoices and do all of the data collection, they also own all of the data.
“[FirstVoices] is about acknowledging indigenous peoples as experts in their languages and also acknowledging that we also have the capacity to use and develop technologies as Indigenous Peoples and then to curate our own data and have that data sovereignty,” says Herbert.
This need for data sovereignty was a major deciding factor as to why FPCC went with Nuxeo, an open source solution. The open source ethos of transparency closely aligned with the values of the FPCC and First Nations people.
Yona says he also chose an open source solution as a way to ensure that the project will continue beyond the founders and the technologists that are currently working on FirstVoices.
“It’s important to [be able to] say that if I’m not in this position in a few years, someone can come in and do the work with [the FirstVoices platform]. It’s not a closed source that one person or one company can build,” says Yona.
While FPCC’s main focus is supporting indigenous peoples and ensuring all of the B.C. province's indigenous languages are captured, archived, and curated by indigenous experts, Herbert says they also have connections in Sweden, Mexico, and Cuba, and have been invited to Guatemala.
“So, the word’s sort of getting out there about the tool, and we’re prepping ourselves for making it more accessible outside of B.C., but it’ll probably be a year before something like that is launched,” says Herbert.
Image: First Peoples' Cultural Council
Structure II_DATA_VALUE
The fields in the II_DATA_VALUE structure describe the data values manipulated by the routine. The fields are as follows:
Type identifier of the data.
Length of the data.
Pointer to the actual data (if appropriate). This data may not be aligned as required by your machine, and you may need to align the data for correct operation.
Precision of the data value. For most data types, this is not needed, and should be ignored on input and set to 0 for output.
For DECIMAL, the high order byte will represent the value's precision (total number of significant digits), and the low order byte will hold the value's scale (the number of these digits that are to the right of an implied decimal point).
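The DECIMAL precision/scale packing can be illustrated with a short sketch (the function names here are illustrative, not the structure's actual member names): the two values share one 16-bit field, precision in the high byte and scale in the low byte.

```python
def pack_decimal_prec(precision, scale):
    """Pack DECIMAL precision (high byte) and scale (low byte) into one value."""
    if not (0 <= precision <= 255 and 0 <= scale <= 255):
        raise ValueError("precision and scale must each fit in one byte")
    return (precision << 8) | scale

def unpack_decimal_prec(packed):
    """Recover (precision, scale) from the packed precision field."""
    return packed >> 8, packed & 0xFF

# DECIMAL(12, 4): 12 significant digits, 4 to the right of the decimal point.
packed = pack_decimal_prec(12, 4)
print(hex(packed))                  # 0xc04
print(unpack_decimal_prec(packed))  # (12, 4)
```

For non-DECIMAL types, the field is simply left at 0 on output, as the text above describes.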
Welcome back to AI 101, a series dedicated to breaking down the realities of artificial intelligence (AI). Previously we’ve defined artificial intelligence, deep learning (DL), and machine learning (ML) and dove into the types of processors that make AI possible. Today we’ll talk about one of the biggest limitations of AI adoption—how much it costs. Experts have already flagged that the significant investment necessary for AI can cause antitrust concerns and that AI is driving up costs in data centers.
To that end, we’ll talk about:
- Factors that impact the cost of AI.
- Some real numbers about the cost of AI components.
- The AI tech stack and some of the industry solutions that have been built to serve it.
- And, uncertainty.
Defining AI: Complexity and Cost Implications
While ChatGPT, DALL-E, and the like may be the most buzz-worthy of recent advancements, AI has already been a part of our daily lives for several years now. In addition to generative AI models, examples include virtual assistants like Siri and Google Home, fraud detection algorithms in banks, facial recognition software, URL threat analysis services, and so on.
That brings us to the first challenge when it comes to understanding the cost of AI: The type of AI you’re training—and how complex a problem you want it to solve—has a huge impact on the computing resources needed and the cost, both in the training and in the implementation phases. AI tasks are hungry in all ways: they need a lot of processing power, storage capacity, and specialized hardware. As you scale up or down in the complexity of the task you’re doing, there’s a huge range in the types of tools you need and their costs.
To understand the cost of AI, several other factors come into play as well, including:
- Latency requirements: How fast does the AI need to make decisions? (e.g. that split second before a self-driving car slams on the brakes.)
- Scope: Is the AI solving broad-based or limited questions? (e.g. the best way to organize this library vs. how many times is the word “cat” in this article.)
- Actual human labor: How much oversight does it need? (e.g. does a human identify the cat in cat photos, or does the AI algorithm identify them?)
- Adding data: When, how, and in what quantity will new data need to be ingested to update the model over time?
This is by no means an exhaustive list, but it gives you an idea of the considerations that can affect the kind of AI you’re building and, thus, what it might cost.
The Big Three AI Cost Drivers: Hardware, Storage, and Processing Power
In simple terms, you can break down the cost of running an AI to a few main components: hardware, storage, and processing power. That’s a little bit simplistic, and you’ll see some of these lines blur and expand as we get into the details of each category. But, for our purposes today, this is a good place to start to understand how much it costs to ask a bot to create a squirrel holding a cool guitar.
First Things First: Hardware Costs
Running an AI takes specialized processors that can handle complex processing queries. We’re early in the game when it comes to picking a “winner” for specialized processors, but these days, the most common processor is a graphical processing unit (GPU), with Nvidia’s hardware and platform as an industry favorite and front-runner.
The most common “workhorse chip” of AI processing tasks, the Nvidia A100, starts at about $10,000 per chip, and a set of eight of the most advanced processing chips can cost about $300,000. When Elon Musk wanted to invest in his generative AI project, he reportedly bought 10,000 GPUs, which equates to an estimated value in the tens of millions of dollars. He’s gone on record as saying that AI chips can be harder to get than drugs.
Google offers folks the ability to rent their TPUs through the cloud starting at $1.20 per chip hour for on-demand service (less if you commit to a contract). Meanwhile, Intel released a sub-$100 USB stick with a full NPU that can plug into your personal laptop, and folks have created their own models at home with the help of open sourced developer toolkits. Here’s a guide to using them if you want to get in the game yourself.
Clearly, the spectrum for chips is vast—from under $100 to millions—and the landscape for chip producers is changing often, as is the strategy for monetizing those chips—which leads us to our next section.
Using Third Parties: Specialized Problems = Specialized Service Providers
Building AI is a challenge with so many moving parts that, in a business use case, you eventually confront the question of whether it’s more efficient to outsource it. It’s true of storage, and it’s definitely true of AI processing. You can already see one way Google answered that question above: create a network populated by their TPUs, then sell access.
Other companies specialize in broader or narrower parts of the AI creation and processing chain. Just to name a few, diverse companies: there’s Hugging Face, Inflection AI, and Vultr. Those companies have a wide array of product offerings and resources from open source communities like Hugging Face that provide a menu of models, datasets, no-code tools, and (frankly) rad developer experiments to bare metal servers like Vultr that enhance your compute resources. How resources are offered also exist on a spectrum, including proprietary company resources (i.e. Nvidia’s platform), open source communities (looking at you, Hugging Face), or a mix of the two.
This means that, whichever piece of the AI tech stack you’re considering, you have a high degree of flexibility when you’re deciding where and how much you want to customize and where and how to implement an out-of-the box solution.
Ballparking an estimate of what any of that costs would be so dependent on the particular model you want to build and the third-party solutions you choose that it doesn’t make sense to do so here. But, it suffices to say that there’s a pretty narrow field of folks who have the infrastructure capacity, the datasets, and the business need to create their own network. Usually it comes back to any combination of the following: whether you have existing infrastructure to leverage or are building from scratch, if you’re going to sell the solution to others, what control over research or dataset you have or want, how important privacy is and how you’re incorporating it into your products, how fast you need the model to make decisions, and so on.
Welcome to the Spotlight, Storage
And, hey, with all that, let’s not forget storage. At the most basic level of consideration, AI uses a ton of data. How much? Conventional wisdom says you need at least an order of magnitude more training examples than the model has parameters. That means you want 10 times more examples than parameters.
Parameters and Hyperparameters
The easiest way to think of parameters is to think of them as factors that control how an AI makes a decision. More parameters generally means more accuracy. And, just like our other AI terms, the term can be somewhat inconsistently applied.
That 10x number is just the amount of data you store for the initial training model—clearly the thing learns and grows, because we’re talking about AI.
Preserving both your initial training algorithm and your datasets can be incredibly useful, too. As we talked about before, the more complex an AI, the higher the likelihood that your model will surprise you. And, as many folks have pointed out, deciding whether to leverage an already-trained model or to build your own doesn’t have to be an either/or—oftentimes the best option is to fine-tune an existing model to your narrower purpose. In both cases, having your original training model stored can help you roll back and identify the changes over time.
The size of the dataset absolutely affects costs and processing times. The best example is that ChatGPT, everyone’s favorite model, has been rocking GPT-3 (or 3.5) instead of GPT-4 on the general public release because GPT-4, which works from a much larger, updated dataset than GPT-3, is too expensive to release to the wider public. It also returns results much more slowly than GPT-3.5, which means that our current love of instantaneous search results and image generation would need an adjustment.
And all of that is true because GPT-4 was updated with more information (by volume), more up-to-date information, and the model was given more parameters to take into account for responses. So, it has to both access more data per query and use more complex reasoning to make decisions. That said, it also reportedly has much better results.
Storage and Cost
What are the real numbers to store, say, a primary copy of an AI dataset? Well, it’s hard to estimate, but we can ballpark that, if you’re training a large AI model, you’re going to have at a minimum tens of gigabytes of data and, at a maximum, petabytes. OpenAI considers the size of its training database proprietary information, and we’ve found sources that cite that number as anywhere from 17GB to 570GB to 45TB of text data.
That’s not actually a ton of data, and, even taking the highest number, it would only cost $225 per month to store that data in Backblaze B2 (45TB * $5/TB/mo), for argument’s sake. But let’s say you’re training an AI on video to, say, make a robot vacuum that can navigate your room or recognize and identify human movement. Your training dataset could easily reach into petabyte scale (for reference, one petabyte would cost $5,000 per month in Backblaze B2). Some research shows that dataset size is trending up over time, though other folks point out that bigger is not always better.
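The back-of-the-envelope math scales linearly, so it is easy to sketch (using the $5/TB/month rate quoted above; any other provider's rate can be substituted):

```python
def monthly_storage_cost(dataset_tb, rate_per_tb=5.0):
    """Estimate monthly storage cost in dollars at a flat per-TB rate."""
    return dataset_tb * rate_per_tb

print(monthly_storage_cost(45))    # 225.0  -- the 45TB text-corpus estimate
print(monthly_storage_cost(1000))  # 5000.0 -- a petabyte-scale video dataset
```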
On the other hand, if you’re the guy with the Intel Neural Compute stick we mentioned above and a Raspberry Pi, you’re talking the cost of the ~$100 AI processor, ~$50 for the Raspberry Pi, and any incidentals. You can choose to add external hard drives, network attached storage (NAS) devices, or even servers as you scale up.
Storage and Speed
Keep in mind that, in the above example, we’re only considering the cost of storing the primary dataset, and that’s not very accurate when thinking about how you’d be using your dataset. You’d also have to consider temporary storage for when you’re actually training the AI as your primary dataset is transformed by your AI algorithm, and nearly always you’re splitting your primary dataset into discrete parts and feeding those to your AI algorithm in stages—so each of those subsets would also be stored separately. And, in addition to needing a lot of storage, where you physically locate that storage makes a huge difference to how quickly tasks can be accomplished. In many cases, the difference is a matter of seconds, but there are some tasks that just can’t handle that delay—think of tasks like self-driving cars.
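Splitting the primary dataset into stages can be as simple as a chunking generator (a generic sketch of the idea, not any framework's actual data loader):

```python
def shard_dataset(examples, shard_size):
    """Yield the dataset in fixed-size shards for staged training.

    Each shard could be written out and stored separately, which is why
    working storage during training exceeds the primary dataset's size.
    """
    for start in range(0, len(examples), shard_size):
        yield examples[start:start + shard_size]

dataset = list(range(10))  # stand-in for 10 training examples
shards = list(shard_dataset(dataset, 4))
print(shards)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```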
For huge data ingest periods such as training, you’re often talking about a compute process that’s assisted by powerful, and often specialized, supercomputers, with repeated passes over the same dataset. Having your data physically close to those supercomputers saves you huge amounts of time, which is pretty incredible when you consider that it breaks down to as little as milliseconds per task.
One way this problem is being solved is via caching, or creating temporary storage on the same chips (or motherboards) as the processor completing the task. Another solution is to keep the whole processing and storage cluster on-premises (at least while training), as you can see in the Microsoft-OpenAI setup or as you’ll often see in universities. And, unsurprisingly, you’ll also see edge computing solutions which endeavor to locate data physically close to the end user.
While there can be benefits to on-premises or co-located storage, having a way to quickly add more storage (and release it if no longer needed), means cloud storage is a powerful tool for a holistic AI storage architecture—and can help control costs.
And, as always, effective backup strategies require at least one off-site storage copy, and the easiest way to achieve that is via cloud storage. So, any way you slice it, you’re likely going to have cloud storage touch some part of your AI tech stack.
What Hardware, Processing, and Storage Have in Common: You Have to Power Them
Here’s the short version: any time you add complex compute + large amounts of data, you’re talking about a ton of money and a ton of power to keep everything running.
Fortunately for us, other folks have done the work of figuring out how much this all costs. This excellent article from SemiAnalysis goes deep on the total cost of powering searches and running generative AI models. The Washington Post cites Dylan Patel (also of SemiAnalysis) as estimating that a single chat with ChatGPT could cost up to 1,000 times as much as a simple Google search. Those costs include everything we’ve talked about above—the capital expenditures, data storage, and processing.
Consider this: Google spent several years putting off publicizing a frank accounting of their power usage. When they released numbers in 2011, they said that they use enough electricity to power 200,000 homes. And that was in 2011. There are widely varying claims for how much a single search costs, but even the most conservative say .03 Wh of energy. There are approximately 8.5 billion Google searches per day. (That’s just an incremental cost by the way—as in, how much does a single search cost in extra resources on top of how much the system that powers it costs.)
Power is a huge cost in operating data centers, even when you’re only talking about pure storage. One of the biggest single expenses that affects power usage is cooling systems. With high-compute workloads, and particularly with GPUs, the amount of work the processor is doing generates a ton more heat—which means more money in cooling costs, and more power consumed.
So, to Sum Up
When we’re talking about how much an AI costs, it’s not just about any single line item cost. If you decide to build and run your own models on-premises, you’re talking about huge capital expenditure and ongoing costs in data centers with high compute loads. If you want to build and train a model on your own USB stick and personal computer, that’s a different set of cost concerns.
And, if you’re talking about querying a generative AI from the comfort of your own computer, you’re still using a comparatively high amount of power somewhere down the line. We may spread that power cost across our national and international infrastructures, but it’s important to remember that it’s coming from somewhere—and that the bill comes due, somewhere along the way. | <urn:uuid:b4654d0e-4299-4290-9f56-fe70157d4526> | CC-MAIN-2024-38 | https://www.backblaze.com/blog/ai-101-do-the-dollars-make-sense/ | 2024-09-21T00:36:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00087.warc.gz | en | 0.943559 | 3,301 | 2.6875 | 3 |
8 Benefits of Using AI for Cybersecurity
Date: 4 May 2021
Artificial intelligence endeavours to simulate human intelligence. It has immense potential in cybersecurity. If harnessed correctly, Artificial Intelligence or AI systems can be trained to generate alerts for threats, identify new types of malware and protect sensitive data for organisations.
According to TechRepublic, a midsized company gets alerts for over 200,000 cyber events every day. A team of security experts in an average company cannot deal with this volume of threats. Some of these threats will, therefore, naturally go unnoticed and cause severe damage to networks.
AI is the ideal cybersecurity solution for businesses looking to thrive online today. Security professionals need strong support from intelligent machines and advanced technologies like AI to work successfully and protect their organisations from cyber attacks. This article looks into the benefits of integrating AI with cybersecurity.
Let’s check them out:
1. AI Learns More Over Time
As the name suggests, artificial intelligence is intelligent, and it uses that ability to improve network security over time. It uses machine learning and deep learning to learn a business network’s behavior, recognizing patterns on the network and clustering them. It then detects deviations from the norm—potential security incidents—and responds to them.
The patterns that artificial neural networks learn over time can help to improve security in the future. Potential threats with similar traits to those recorded get blocked early enough. The fact that AI keeps learning makes it difficult for hackers to beat its intelligence.
2. Artificial Intelligence Identifies Unknown Threats
A human being may not be able to identify all the threats a company faces. Every year, hackers launch hundreds of millions of attacks with different motives. Unknown threats can cause massive damage to a network. Worse still is the impact they can have before you detect, identify, and prevent them.
As attackers try new tactics from sophisticated social engineering to malware attacks, it is necessary to use modern solutions to prevent them. AI has proven to be one of the best security technologies in mapping and stopping unknown threats from ravaging a company.
3. AI Can Handle a Lot of Data
A lot of activity happens on a company’s network. An average mid-sized company itself has huge traffic. That means there’s a lot of data transferred between customers and the business daily. This data needs protection from malicious people and software. But then, cybersecurity personnel cannot check all the traffic for possible threats.
AI is the best solution that will help you detect any threats masked as normal activity. Its automated nature allows it to skim through massive chunks of data and traffic. Technology that uses AI, such as a residential proxy, can help you to transfer data. It can also detect and identify any threats hidden in the sea of chaotic traffic.
4. Better Vulnerability Management
Vulnerability management is key to securing a company’s network. As mentioned earlier, an average company deals with many threats daily. It needs to detect, identify and prevent them to be safe. Analyzing and assessing the existing security measures through AI research can help in vulnerability management.
AI helps you assess systems more quickly than cybersecurity personnel can, multiplying your problem-solving capacity. It identifies weak points in computer systems and business networks and helps businesses focus on the most important security tasks. That makes it possible to manage vulnerabilities and secure business systems in time.
5. Better Overall Security
The threats that business networks face change from time to time. Hackers change their tactics every day. That makes it difficult to prioritize security tasks at a company. You may have to deal with a phishing attack along with a denial-of-service attack or ransomware at a go.
These attacks have similar potential but you must know what to deal with first. Bigger threats that can make security a challenge are human error and negligence. The solution here is to deploy AI on your network to detect all types of attacks and help you prioritize and prevent them.
6. Duplicative Processes Reduce
As mentioned earlier, attackers change their tactics often. But, the basic security best practices remain the same every day. If you hire someone to handle these tasks, they may get bored along the way. Or they could feel tired and complacent and miss an important security task and expose your network.
AI, while mimicking the best of human qualities and leaving out the shortcomings, takes care of duplicative cybersecurity processes that could bore your cybersecurity personnel. It helps check for basic security threats and prevent them on a regular basis. It also analyzes your network in depth to see if there are security holes that could be damaging to your network.
7. Accelerates Detection and Response Times
Threat detection is the beginning of protecting your company’s network. Detecting things like untrusted data quickly will save your network from irreversible damage.
The best way to detect and respond to threats in time is by integrating AI with cybersecurity. AI scans your entire system and checks for any possible threats. Unlike humans, AI will identify threats extremely early and simplify your security tasks.
8. Securing Authentication
Most websites have a user account feature where one logs in to access services or buy products. Some have contact forms that visitors need to fill with sensitive information. As a company, you need an extra security layer to run such a site because it involves personal data and sensitive information. The additional security layer will ensure that your visitors are safe while browsing your network.
AI secures authentication anytime a user wants to log into their account. AI uses various tools such as facial recognition, CAPTCHA, and fingerprint scanners amongst others for identification. The information collected by these features can help to detect if a log-in attempt is genuine or not.
Hackers use credential stuffing and brute force attacks to access company networks. Once an attacker enters a user account, your whole network could be at risk.
Keeping your data and network secured isn’t easy in today’s business environment. You can take a decisive step towards being safer by adopting AI to strengthen your security infrastructure. There are several benefits of using AI for business security and we anticipate that very soon artificial intelligence will become an integral part of business cybersecurity.
Author: Daniel Martin
Daniel Martin has hands-on experience in digital marketing since 2007. He has been building teams and coaching others to foster innovation and solve real-time problems.
Daniel also enjoys photography and traveling.
Malware authors have always been trying to update their software and evolve their techniques to take advantage of technologies and bypass security measures.
Botnets are a perfect example of how cyber criminals have managed to accomplish that over the past decade. Their widespread and severe consequences have transformed botnets into one of the most significant and destructive threats in the cyber security landscape, as they are responsible for many large-scale and high-profile attacks.
Examples of attacks performed by botnets include distributed denial of service (DDoS), personal or classified data theft, spam campaigns, cryptocurrency mining and fake news spreading on social media platforms.
Moreover, there is an exponential increase in attacks that result from crime-as-a-service offerings, which usually include botnets that are rented or sold to people or groups lacking experience or technical skills who wish to perform nefarious activities. So, it is clear that taking security measures against botnets is crucial for an organisation’s well-being and the protection of private data.
The growing threat of botnets
One way to categorise botnets is by the technology they adopt for their command and control (C&C) mechanism. In terms of C&C, the architecture of a botnet can be either centralised (see Figure 1) or decentralised (see Figure 2).
In the first category, the bots communicate with one or more servers using the client-server model. The generation of centralised botnets used internet relay chat (IRC) channels to communicate with the C&C server. However, due to the single-point-of-failure nature of centralised architectures, the criminals started developing botnets that were based on peer-to-peer (P2P) communications, overcoming the problem of the previous generation of botnets.
Indeed, P2P botnets, with the advantages of resilience and robustness, formed an even greater threat to organisations, but they also have two major drawbacks. First, their maintenance is very difficult because of the complexity of their deployment and development; second, since there is no longer a central C&C server, the herder might not have full control of the botnet any more.
The solution adopted by malware authors was to return to the centralised architecture model. However, they did not use the IRC protocol for the communications between the herder and the bots; the HTTP protocol was used instead. The advantage and strength of this solution is that the HTTP protocol is commonly used by legitimate, non-malicious web applications and services.
So, the attackers are able to embed their traffic in non-malicious, legitimate HTTP traffic and hide C&C commands among normal network activities. This gives HTTP-based botnets their great advantage, which is their ability to stay “under the radar” and perform their nefarious operations undetected.
Botnet detection techniques
Many researchers have dedicated their efforts to the study and analysis of HTTP botnets and finding accurate ways to detect them. A large number of researchers approach the problem by employing behaviour-based detection techniques, since the traditional signature-based systems are often easily bypassed by new generations of malware.
More specifically, the analysis of network traffic and its characteristics (not necessarily the packets’ payload) can provide very insightful information as to whether a network flow or packet is benign or if it is part of a botnet’s C&C mechanism, even in cases where traffic is encrypted. Examples of traffic characteristics that could prove useful are the flow duration, the total number of packets exchanged in a flow, the length of the packet in a flow and the median of payload bytes per packet.
Machine learning plays a key role in this approach, as behaviour-based botnet detection systems are usually built using a classification model that is trained on a dataset with specified features (a set of network characteristics in our case). This classification model is able to identify efficiently and accurately malware-generated traffic when certain behaviour patterns are met.
Apart from classification, more machine learning tools (feature extraction, for example) could be used to make our system as accurate and fast as possible. In general, novel attacks deployed by newer or more advanced versions of existing malware can be prevented using this approach, as this detection system is not based on malware signatures.
Cyber attackers adapt
Unsurprisingly, attackers started looking for ways and techniques that would allow them to overcome detection systems’ progress and bypass behaviour-based detection. Adversarial machine learning is an emerging technique that, among others, could target and evade security systems that utilise machine learning for dealing with malicious activities.
Typically, its functionality is based on taking advantage of classifier’s weaknesses. For example, there might be a space of instances (flows/packets, for example) that the classifier might not be able to describe well, so instances that belong to that space will be misclassified.
Another kind of attack that can be performed against systems based on machine learning is when adversaries attempt to attack the training phase of classification; that is, they try to inject adversarial training data to the classification model. This eventually leads to a model that labels malicious instances incorrectly as non-malicious, thus increasing the number of false negatives and leaving the system vulnerable.
Obfuscation techniques used by attackers should also be taken into consideration when implementing detection systems based on behaviour. More specifically, attackers might attempt to convert the value of certain attributes and characteristics of network traffic flows that are indicative of malicious activity, into values that are typical and normal for non-malicious flows, thereby evading security measures. Therefore, if the obfuscated features are used by the classification system, the malicious flows will have a greater chance of bypassing the detection system.
Keep an eye on trending threats
To conclude, a best practice for organisations in terms of security is to always be up-to-date with the current trends in the cyber threat landscape as it is a field that changes constantly and radically.
Machine learning has proven to be an extremely powerful ally in the battle against certain kinds of malware, and it currently seems to be the ideal method for keeping up with the evolution of threats, both in terms of detection accuracy and efficiency.
Of course, behaviour-based systems have the drawback of false positives, but the benefits of this approach are more than enough to ignore that disadvantage.
However, when employing behaviour-based systems, organisations should not overlook the complexity and difficulty of building such systems and the caveats that come with this solution, some of them mentioned above – adversarial machine learning, obfuscation of features.
Technical expertise, along with patience and the ability to gain insight, are probably the most important values professionals and organisations should be equipped with to successfully deploy and manage such complex systems that will help them adjust to today’s threat landscape and continue operating in a secure environment. | <urn:uuid:ec5fc4ba-953e-4f04-b972-4553511cbdd0> | CC-MAIN-2024-38 | https://www.computerweekly.com/microscope/opinion/Botnets-and-machine-learning-A-story-of-hide-and-seek | 2024-09-20T23:40:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00087.warc.gz | en | 0.955383 | 1,404 | 3.515625 | 4 |
What is an Autonomous Network?
Automation itself, and the idea that technologies could be self-provisioning, self-diagnosing, and self-healing, has been around for some time. But with advances in Artificial Intelligence (AI) and cloud technologies, such fanciful notions are quickly becoming realities.
Nowadays, most of us use AI-enabled apps when we ask Apple’s Siri or Amazon’s Alexa for help with a task. And even streaming services like Netflix help us pick movies and TV shows using AI.
In the world of terrestrial and submarine networks, AI and the elements that make up the cloud are converging to enable the next level of networking.
Those elements include:
- Virtually boundless compute resources: Modern CPUs can be clustered to offer massive, almost unlimited parallel processing power.
- Essentially unlimited storage: Storing a single, massive set of data across multiple processing platforms—within the same data center or across numerous physically separated data centers—is not only feasible, it’s now common.
- Dynamic connections: Agile, on-demand bandwidth—provisioned with the precise connection needed depending on the type of data in transit—is a critical component of an autonomous network.
- Open source software: Open source solutions like Hadoop are at the center of AI transforming networks.
- Big data: For an autonomous network, the gathering and analysis of data from network sensors leads to better detection of patterns and anomalies.
- Sensors: Embedded sensors, and the data they produce, form the foundation of an autonomous network infrastructure
Ciena has been on the leading edge of the development for autonomous networking, releasing the first virtual control plane in 2008. But, even though it’s a significant advance, Ciena sees autonomous networking as too restrictive and too rigid. Our customers have been telling us for years that they need to maintain strict control of their network and that automation alone is not enough.
So Ciena has started to champion a broader, more holistic concept: the Adaptive Network, which is geared toward providing a network that can grow and adapt with a company as its business needs and markets change.
An autonomous network runs with minimal to no human intervention—able to configure, monitor, and maintain itself independently.
The Adaptive Network includes three important layers:
- Programmable infrastructure: This includes the network’s physical and virtual elements, as well as the telemetry gathered from them.
- Analytics and intelligence: The programmable infrastructure produces significant amounts of data. Some of it is big data that indicate trends the network learns and adjusts to over time. Big data can inform the network how to adjust in the long term, which traffic patterns to look out for, and which parts of the network could be vulnerable.
- Software control and automation: Effective automation of network tasks as defined by the network provider, such as loading access controllers and provisioning routers, can eliminate human error and keep the network running at peak performance.
Ciena’s unmatched product portfolio supports the Adaptive Network in a number of important ways. Critical components include:
- WaveLogic Ai: The next generation of Ciena’s industry-leading WaveLogic coherent technology fundamentally changes how optical networks are built and managed. WaveLogic Ai enables tunable capacity, from 100G to 400G, in 50G increments—giving you previously unattainable control over your network.
- WaveLogic Photonics: Ciena’s fully instrumented, intelligent photonic system includes WaveLogic coherent optics and flexible line elements that, combined with embedded and discrete software tools, offer superior automation, control, and visibility of optical networks.
- Ciena’s optical and packet platforms: Ciena has an array of converged optical and packet platforms built on flexible platform architectures that are indispensable in the autonomous network and provide efficient matching of client services to line capacity.
- Blue Planet Analytics (BPA): BPA is a core element of the Blue Planet intelligent automation platform. It normalizes real-time and historical data collected from multiple sources across the network and also seamlessly integrates with third-party big data cluster systems.
- Navigator Network Control Suite (NCS) domain controller: Ciena brings the power of software-defined programmability to your next-gen Ciena network and service operations. Navigator NCS eliminates the manual, time-consuming, error-prone steps between multiple, separate management tools and provides the Blue Planet foundation from which you can evolve your operations to drive closed-loop intelligent automation across multi-vendor multi-domains.
Ciena’s 25 years of experience connecting the world makes it the perfect partner to deliver the Adaptive Network.
- Unmatched network experience: With more than 1,300 customers worldwide, Ciena supports 80 percent of the world’s largest network providers. Ciena has deployed over 160 million kilometers of coherent optical networks.
- Network provider services: Ciena has designed services specifically to help providers evolve their networks. Ciena’s approach supports the complete network lifecycle, with consulting, solution practices, and services oriented around the needs of providers.
- Partners: Ciena augments its value with a vibrant partner program that evolves and develops offerings and expertise to enable all aspects of the Adaptive Network.
- Open APIs and modern data models: At both the hardware and software layers, Ciena enables better real-time network telemetry and measurement using APIs. Ciena’s APIs provide faster, easier, and multi-platform/multi-vendor application development, easy integration with IT tools, and more efficient IT resource utilization. | <urn:uuid:433f49db-b7df-465b-8681-b3b4e61302eb> | CC-MAIN-2024-38 | https://www.ciena.com/insights/what-is/What-Is-an-Autonomous-Network.html?utm_source=blog&utm_medium=Social | 2024-09-12T10:19:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00887.warc.gz | en | 0.90484 | 1,183 | 3.03125 | 3 |
Design work has been completed on the Square Kilometre Array's two supercomputers.
With the world's largest radio telescope set to be built across two nations, South Africa and Australia, the SKA will need local supercomputers in Cape Town and Perth, known collectively as the Science Data Processor (SDP).
Once built, the array will be the world’s largest public science data project. Construction on SKA1, the first phase, is expected to begin this year.
“We estimate SDP’s total compute power to be around 250 petaflops - that’s 25 percent faster than IBM’s Summit, the current fastest supercomputer in the world,” Maurizio Miccolis, SDP’s project manager for the SKA Organisation, said.
“In total, up to 600 petabytes of data will be distributed around the world every year from SDP - enough to fill more than a million average laptops.”
Some 5Tbps of data will flow through SDP, requiring the system to make near real-time decisions about what data to keep and what to discard as noise.
“By pushing what’s technologically feasible and developing new software and architecture for our HPC needs, we also create opportunities to develop applications in other fields,” Miccolis said.
But before data reaches the SDP, it first goes through the Central Signal Processor - hardware and software inside the SKA itself that converts digitized astronomical signals detected by SKA receivers (antenna & dipole -“rabbit-ear”- arrays) into the information that is sent to the SDP. The SKA Organisation notes: "It will be located in remote locations and must be designed to deliver the maximum science possible within a hard cost cap and an aggressive timeline."
“SDP is where data becomes information,” Rosie Bolton, data center scientist for the SKA Organisation, explained. “This is where we start making sense of the data and produce detailed astronomical images of the sky.”
After initial processing and filtering in the SDP, the SKA data will be sent to regional supercomputers around the world, including the Barcelona Supercomputing Center and the Jülich Supercomputing Centre.
Bolton added: "The [SDP] architecture has to be able to evolve over the 50-year lifetime of the project. So we have to design a system that's highly scalable and highly extensible so that we can add on modules and change the size of it as time goes on."
The SKA telescope is expected to be powerful enough to detect very faint radio signals emitted by cosmic sources billions of light years away from Earth, those signals emitted in the first billion years of the Universe when the first galaxies and stars started forming.
The SKA Organisation anticipates that the array will be used to answer fundamental questions of science and about the laws of nature, including: how did the Universe, and the stars and galaxies contained in it, form and evolve? Was Einstein’s theory of relativity correct? What is the nature of ‘dark matter’ and ‘dark energy’? What is the origin of cosmic magnetism? And, is there life somewhere else in the Universe? | <urn:uuid:7968efa2-a8bb-4e15-9319-a2a97c257a1d> | CC-MAIN-2024-38 | https://www.datacenterdynamics.com/en/news/design-completed-250-petaflops-square-kilometre-array-supercomputers/ | 2024-09-12T09:07:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00887.warc.gz | en | 0.918645 | 684 | 3.15625 | 3 |
Under Sheikh Hasina’s leadership during her 15-year tenure, Bangladesh has faced an unprecedented economic crisis characterized by widespread financial mismanagement and corruption. A staggering $150 billion, equivalent to about Tk 17.6 lakh crore, has been illicitly siphoned from the country. This figure, reported by the US-based think tank Global Financial Integrity (GFI), is over double the nation’s largest-ever budget of Tk 7.97 lakh crore. The economic fallout from this plutocratic behavior raises crucial questions about the integrity and efficiency of Bangladeshi governance during this period, marking a significant episode of financial debacle in the nation’s recent history.
The Mechanics of Financial Misappropriation
A significant portion of the illicit funds originated from large-scale bank loans that were diverted for purposes other than intended. These misappropriated resources were primarily funneled through politically connected individuals and influential businesses. The network of culprits involved politicians, influential businessmen, bureaucrats, and even senior officials from banks and insurance companies. Corruption was rife among mid-level functionaries as well, creating a sprawling web of financial deceit. These illicit activities were not only limited to embezzlement but also included methods such as inflating the costs of government projects and manipulating contract prices to extract additional funds illegally.
Money laundering and trade mis-invoicing were commonly employed techniques for expatriating these ill-gotten gains. Offshoring of embezzled money occurred through various channels, including deposits into foreign accounts in Switzerland, the UK, the US, Canada, and other Southeast Asian and East European countries. Investment in real estate properties was another preferred method to launder money. Notable personalities, like former Land Minister Saifuzzaman Chowdhury, used such funds to purchase 260 properties in the UK valued at Tk 1,888 crore, considerably contributing to the outflow of money from Bangladesh.
Institutional Failure and State Complicity
Bangladeshi institutions tasked with regulation and oversight failed dramatically under the strain of pervasive corruption. The country’s central financial body, Bangladesh Bank, saw its regulations flouted and oversight nullified because of immense political pressure. High-ranking officials have come forward admitting their incapacity to enforce financial norms due to political interference, culminating in an ineffectual governance structure. This culture of impunity emboldened figures like former police chief Benazir Ahmed to accumulate substantial illegal wealth without fear of repercussions, showcasing how deeply entrenched corruption had become.
Systemic corruption rendered state institutions ineffective, leaving them unable to safeguard national financial interests fully. The banking sector, in particular, became the hotbed for financial mismanagement. Weak oversight owing to compromised integrity among senior officials only exacerbated the looting of public resources. These instances exposed the structural weaknesses within state institutions, demonstrating an urgent need for a comprehensive overhaul to restore the integrity and functionality of these bodies.
The Path to Recovery and Necessary Reforms
While money laundering under the Awami League is not a unique occurrence in Bangladesh’s political history, it underscores the necessity for wide-ranging systemic reforms. Addressing the economic crisis involves overhauling the banking sector and tightening regulations to prevent misuse of funds. Strengthening accountability mechanisms is crucial to ensuring that financial misconduct does not go unpunished. Restoring public confidence in the government and its institutions requires this substantial shift in governance practices.
A recent mass movement led by students offers a window of opportunity for significant reforms. It signals a pressing demand from the grassroots for political accountability and transparency. The upcoming interim government has a critical role to play in enacting these changes. Prioritizing the prosecution of individuals involved in money laundering and focusing on the recovery of embezzled funds would significantly impact rejuvenating Bangladesh’s economy. Such actions would not only stabilize the financial system but also set a precedent for the integrity of future governance.
Conclusion: The Cost of Corruption and A Call for Change
Under the 15-year leadership of Sheikh Hasina, Bangladesh has encountered a profound economic crisis marked by rampant financial mismanagement and corruption. An astonishing $150 billion, equivalent to approximately Tk 17.6 lakh crore, has been illicitly siphoned out of the country. This alarming statistic, which comes from the US-based think tank Global Financial Integrity (GFI), surpasses more than twice the nation’s largest recorded budget of Tk 7.97 lakh crore. This large-scale embezzlement has had significant repercussions, raising serious concerns regarding the integrity and effectiveness of Bangladesh’s governance during this period. The economic turmoil brought on by this plutocratic behavior underscores a troubling chapter in the nation’s recent history, characterized by financial misdeeds that have undermined public confidence and strained the nation’s economy. This episode is pivotal, bringing to light critical issues that demand urgent attention to restore financial stability and public trust in Bangladesh’s government and economic systems. | <urn:uuid:e4eaefb4-fd68-4b85-be58-42cf78e4cd4d> | CC-MAIN-2024-38 | https://bankingcurated.com/regulatory-and-compliance/how-did-sheikh-hasinas-rule-lead-to-an-economic-crisis-in-bangladesh/ | 2024-09-13T14:18:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00787.warc.gz | en | 0.946857 | 1,004 | 2.5625 | 3 |
Emerging from artificial intelligence (AI), machine learning (ML) manifests a machine’s capacity to simulate intelligent human behavior. Yet, what tangible applications does it bring to the table? This article delves into the core of machine learning, offering an intricate exploration of the dynamic workflows that form the backbone of ML projects. What exactly constitutes a machine learning workflow, and why are these workflows of paramount importance?
Understanding Machine Learning
Machine Learning (ML) is a pivotal branch of AI and computer science. ML mimics the iterative human learning process through the synergy of data and algorithms, perpetually refining its precision. Positioned prominently within the expansive field of data science, ML deploys statistical methods to educate algorithms in making predictions and drawing insights within the data mining landscape. These insights, in turn, influence decision-making processes in applications and businesses, ideally fostering organic business growth. With the surge in big data, the demand for skilled data scientists has soared. ML emerges as a critical tool for pinpointing pivotal business questions and sourcing the pertinent data for resolution.
In the present era, data holds unprecedented value, functioning as a currency of immense significance. It plays a crucial role in shaping critical business decisions and providing essential intelligence for strategic moves.
Machine Learning's holistic approach
Frameworks guiding development
Unveiling the machine learning workflow
Machine learning workflows map out the stages of executing a machine learning project, outlining the journey from data collection to deployment in a production environment. These workflows typically encompass phases such as data collection, pre-processing, dataset construction, model training and evaluation, and ultimately, deployment to production.
The aims of a machine learning workflow
At its core, the primary aim of machine learning is to instruct computers on behavior using input data. Rather than explicit coding of instructions, ML involves presenting an adaptive algorithm that mirrors correct behavior based on examples. The initial steps involve project definition and method selection in framing a generalized machine learning workflow. A departure from rigid workflows is recommended, favoring flexibility and allowing for a gradual evolution from a modest-scale approach to a robust “production-grade” solution capable of enduring frequent use in diverse commercial or industrial contexts. While specifics of ML workflows differ across projects, the outlined phases – data collection, pre-processing, dataset construction, model training and evaluation, and deployment – constitute integral components of the typical machine learning odyssey.
Stages in the machine learning workflow
The journey commences by defining the problem at hand laying the foundation for data collection. A nuanced understanding of the issue proves pivotal in identifying prerequisites and optimal solutions. For instance, integrating an IoT system equipped with diverse data sensors becomes imperative in a real-time data-centric machine learning endeavor. Initial datasets are drawn from many sources, such as databases, files, or sensors.
The second phase involves the meticulous refinement and formatting of raw data. Since raw data is unsuitable for training machine learning models, a transformation process is initiated, converting ordinal and categorical data into numeric features – the lifeblood of these models.
The selection of an apt machine learning model is a strategic decision, factoring in performance (the model’s output quality), explainability (the ease of interpreting results), dataset size (affecting data processing and synthesis), and the temporal and financial costs of model training.
Model training odyssey
Model metric evaluation
Hyperparameters wield the scepter in shaping the model’s architecture. Navigating the intricate path to discover the optimal model architecture is the art of hyperparameter tuning.
Model unveiling for predictive prowess
The grand finale involves deploying a prediction model. This entails creating a model resource in AI Platform Prediction, the cloud-based execution ground for models. Develop a version of the model and seamlessly link it to the model file nestled in the cloud, unlocking its predictive prowess.
Streamlining the machine learning journey through automation
Unlocking the full potential of a machine learning workflow involves strategically automating its intricacies. Identifying the ripe opportunities for automation is the key to unleashing efficiency within the workflow.
Innovative model discovery
Embarking on the journey of model selection becomes an expedition of possibilities with automated experimentation. Exploring myriad combinations of numeric and textual data and diverse text processing methods unfolds effortlessly. This automation accelerates the discovery of potential models, offering a quantum leap in time and resource savings.
Automated data assimilation
Effortlessly managing data assimilation empowers professionals with more time for nuanced tasks and paves the way for heightened productivity. This automation catalyzes refining processes and orchestrating resource allocation with finesse.
Savvy feature unveiling
The art of feature selection takes a transformative turn with automation, revealing the most invaluable facets of a dataset for the prediction variable or desired output. This dynamic process ensures that the machine learning model is armed with the most pertinent information, elevating its efficacy to unprecedented levels.
The pursuit of optimal hyperparameters undergoes a paradigm shift with automation. Identifying hyperparameters that yield the lowest errors on the validation set becomes a seamless quest, ensuring the harmonious generalization of results to the testing set. This automated exploration of hyperparameter space becomes the cornerstone for elevating the model’s overall performance to new heights.
✔️ Machine Learning Operations (MLOps) refers to the practice of implementing the development, deployment, monitoring, and management of ML models in production environments. The ultimate goal of MLOps implementation is to make ML models reliable, scalable, secure, and cost-efficient. But what possible challenges are related to MLOps, and how do we tackle them? → https://hystax.com/what-are-the-main-challenges-of-the-mlops-process/ | <urn:uuid:b71de0c0-374c-4832-8de7-fe1249392d51> | CC-MAIN-2024-38 | https://hystax.com/relevance-and-impact-of-machine-learning-workflow/ | 2024-09-16T02:31:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.29/warc/CC-MAIN-20240916012328-20240916042328-00587.warc.gz | en | 0.877606 | 1,175 | 3.015625 | 3 |
Today we invite you to read about one of the most important aspects of product development, one that may not seem as valuable to some as actual coding, the implementation of cutting-edge technologies such as Artificial Intelligence and Machine Learning, or the latest advancements in the Internet of Things. Yet Quality Assurance testing can play a deciding role in the success of your product, your potential revenue, the growth of your business, and your brand image.
You may have the most talented dedicated development team in the world, working on a product that is set to potentially disrupt an entire industry and change people’s lives forever. However, without proper Quality Assurance (QA) software testing, does the product really have a chance to take off? There are plenty of examples in which a great idea failed due to poor realization, while the same idea done right became a success later.
According to Market Watch, the global Software Quality Assurance market size was valued at USD 1772.81 million in 2022 and is expected to reach USD 3989.57 million by 2028. So, there will be no shortage of talent—all you need is to know how they can help. This article will help you to gain a comprehensive understanding of the QA testing process in software product development to boost your business outcomes by improving your existing operations.
The QA testing process involves the systematic evaluation of various aspects of software before the product is released to users. It is conducted during and after development by qualified professionals to ensure the reliability and quality of the product.
A quality assurance testing company can help you cover the areas discussed below.
With the QA process explained, it's time to clear up some common misconceptions and explain two related areas: Quality Assurance (QA) and Quality Control (QC).
Quality Assurance is all about improving and optimizing the development process and finding weak points and opportunities for improvement. It runs with the actual development process, emphasizing the establishment of processes, standards, tools, and methodologies that contribute to the overall quality. An example of QA activities can be developing a comprehensive test plan, implementing coding standards, and conducting regular code reviews.
Quality Control is a process set up by the organization to meet quality standards and deliver the product as intended—usually, this is a reactive process focused on identifying and correcting defects in the final product. An example of Quality Control may include tracking defects and reporting them back to developers. Lacking a particular feature after the release can be considered a defect.
Quality Testing can be considered one of the components of Quality Control. It is a process aimed at determining whether the final product behaves as required and how it reacts to various conditions. The tester verifies and validates the product, deciding whether a particular test case (a specific set of conditions, inputs, actions, and expected outcomes designed to validate a particular aspect of an application or system) has passed or failed. An example of testing is the execution of test cases. In one of our previous articles, we explained the difference between a test plan and a test strategy, so if you want to discover more about testing, you are welcome to read it!
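The structure of a test case described above (conditions, inputs, actions, expected outcome) maps naturally onto an automated unit test. Here is a minimal, self-contained sketch; the `apply_discount` function and its rules are hypothetical, purely for illustration:

```python
# A hypothetical function under test: discount a price by a percentage.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject invalid inputs."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount percent")
    return round(price * (1 - percent / 100), 2)

def test_valid_discount():
    # Inputs: a normal price and a 20% discount.
    # Expected outcome: the correctly discounted price.
    assert apply_discount(100.0, 20) == 80.0

def test_invalid_discount_rejected():
    # A test case can also assert failure behavior for bad input.
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass  # passed: invalid input was rejected

test_valid_discount()
test_invalid_discount_rejected()
print("all test cases passed")
```

Each test function is one test case: it sets up the inputs, performs the action, and compares the actual outcome against the expected one.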
QA software testing can be manual (where the tester executes test cases without leveraging automated testing tools) and automated (in which reviewing and validating a software product is executed with the help of automated tools and scripts).
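Automated testing tools essentially run a whole table of test cases and report which passed and which failed. The idea can be sketched in plain Python without any framework; the `is_valid_email` function under test is a hypothetical, deliberately simple validator:

```python
import re

# Hypothetical function under test: a simple e-mail format validator.
def is_valid_email(address: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", address) is not None

# A table of test cases: (input, expected outcome) pairs.
CASES = [
    ("user@example.com", True),
    ("no-at-sign.example.com", False),
    ("two@@example.com", False),
    ("user@example", False),  # missing top-level domain
]

def run_suite():
    """Execute every case; return the list of failures."""
    failures = []
    for given, expected in CASES:
        actual = is_valid_email(given)
        if actual != expected:
            failures.append((given, expected, actual))
    return failures

failed = run_suite()
print(f"{len(CASES) - len(failed)}/{len(CASES)} cases passed")
```

In practice a framework such as pytest plays the role of `run_suite`, but the principle is the same: once the cases are written down, a machine can re-run them after every change at no extra cost.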
Each of these three elements is equally important, essential to overall software quality, and contributes to the end result greatly.
A fine-tuned QA process in an organization will greatly decrease the workload on Quality Control and Quality Testing activities. Additionally, preventing bugs and issues during the development phase is far less expensive, compared to making changes after the release.
Up to this point, we have talked a lot about quality. But how do we measure it? In the IT industry, there are specific characteristics that help define quality and make the QA process in software testing transparent and understandable.
Obviously, high-quality software must function just as intended and provide capabilities and features to the fullest, just as specified in the requirements.
All functions must perform without any critical errors or defects. Reliable software should perform consistently and have predictable behavior under any conditions, with as low a failure rate as possible.
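One way to make "as low a failure rate as possible" concrete is to measure it: run an operation repeatedly and assert that the observed failure rate stays under an agreed threshold. A self-contained sketch, in which the flaky `fetch_data` operation and the 5% threshold are illustrative assumptions:

```python
import random

# Hypothetical flaky operation: succeeds most of the time.
def fetch_data(rng: random.Random) -> bool:
    return rng.random() > 0.01  # ~1% simulated failure rate

def measure_failure_rate(runs: int = 10_000, seed: int = 42) -> float:
    """Run the operation many times and return the observed failure rate."""
    rng = random.Random(seed)  # fixed seed keeps the check deterministic
    failures = sum(1 for _ in range(runs) if not fetch_data(rng))
    return failures / runs

rate = measure_failure_rate()
# Reliability requirement (assumed for illustration): under 5% failures.
assert rate < 0.05, f"failure rate too high: {rate:.2%}"
```

Real reliability testing replaces the simulated operation with calls against the actual system, often under varied load and conditions, but the pass/fail criterion takes the same shape.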
This refers to the ease of user interaction with the product. High-quality software is designed to be user-friendly and intuitive, allowing users to complete actions with minimal guidance or training.
This parameter is all about speed, responsiveness, and efficient resource utilization of software. A well-built product always performs great, has a smooth flow to it, responds quickly to user inputs, and can handle reasonable workloads without significant slowdowns.
The security aspect of software is critical in any industry. No one wants security breaches and sensitive data loss, with eventual damage to the brand’s image, company’s reputation, and revenue. Quality Assurance in software testing here is critical and must provide guarantees that everything in the project is up to the latest security standards and compliance.
Another important aspect is the ease of updating, modifying, and extending the software without unintended side effects or disruptions to business processes. The product should be well structured, built according to modern coding standards, and easy to modify.
Quality software should be built with testability in mind, so that defects and errors can be identified smoothly using different methodologies at any stage of development.
It’s also important to make sure that the product is compatible with different operating systems, devices, and environments when it’s necessary.
It is also a good idea to design software with scalability in mind, making it easier to grow and modify as the project's requirements change.
Finally, having comprehensive and well-organized documentation is very important. You should have manuals, tech guides, API documentation, and other materials that will help users and developers understand how to use it and how it works.
Let’s talk more closely about the flow of QA processes. Whether it’s mobile app testing or testing of an entire financial platform, for example, the core elements will remain the same.
The process starts with the QA team getting familiar with the software requirements, drilling into their specifics, and analyzing them further. These include functional and non-functional requirements, as well as those contained in related documentation. The QA team should clarify the details with business representatives, managers, and developers in case there are any ambiguities.
The next step in the Quality Assurance testing flow is to create a comprehensive QA plan, which includes:
The QA plan can be considered a roadmap for the testing process, helping to ensure the systematization of the entire process.
Based on the requirements and test scenarios identified during the planning stage, the test cases are created. Test cases can be considered specific steps and data inputs that QA specialists follow to determine how the software behaves and whether it meets requirements.
After the creation of test cases, the actual execution in the designated test environments comes next. At this point, the QA processes may include manual or automated testing, depending on the complexity and the nature of the project.
During the test execution stage, defects and issues will most likely be identified and reported. Developers then work on fixing the issues, and QA experts verify the corrections, making sure the problematic areas function as intended.
When all required activities are complete, the QA team analyzes the test results. They document all insights gained during the testing process to improve their future efforts, as well as provide feedback to management.
The final stage of the flow is release tests. These tests are often part of acceptance testing and are conducted to make sure that the project is ready for deployment. The final round of tests is crucial to make sure that there are no critical issues that can damage the release.
There are multiple types of tests that ensure the best results in QA processes, the most important include:
This is conducted to verify that everything is functioning according to specified functional requirements. The testers launch test cases based on requirements to check the behavior of software in different scenarios.
The purpose of this is to evaluate how intuitive and easy the product is for end-users to operate. The testers collect feedback from real users and identify areas for improvement.
Here, testers assess how well the software runs under conditions such as high load or scaling demands. The main purpose is to identify bottlenecks related to speed, stability, and responsiveness.
The aim here is to protect the product from unauthorized access, data breaches, and potential threats by identifying vulnerabilities and weaknesses.
The goal here is to determine that the latest code changes and system updates don’t negatively affect the existing functionality.
This focuses on the interaction between different units and modules of the software, with the goal to make sure that integrated components function as expected.
This phase comes before the release and involves validating whether the software meets the acceptance criteria of stakeholders.
Documentation plays a significant role in quality assurance in software testing, ensuring effective communication between experts, maintaining consistent methodologies, offering traceability of project changes, as well as serving as a knowledge retention tool.
Key documents include:
A comprehensive document that outlines the general testing strategy and approach for the project, defining such elements as the scope of testing, testing objectives, timelines, deliverables, roles, responsibilities, required resources, and test environments.
These are the detailed instructions that specify how to perform a particular test, including such elements as test inputs, expected outputs, preconditions, and post-conditions.
A specific type of documentation for automated testing with the goal of performing repetitive tests efficiently and consistently.
Finally, this type of documentation summarizes the results of all QA activities, providing valuable insight into the software’s quality and progress.
The advantages and value of utilizing professional QA services for business are significant. With an expert QA team, you can be sure that the most appropriate and up-to-date QA techniques, tools, and methodologies will be used in the development of your project, leading to improved functionality, usability, performance, and security of your product.
Early detection and prevention of issues dramatically reduces the time spent fixing them after the product's release.
The fine-tuned flow of QA processes will reduce or eliminate significant post-release defects, which will result in fewer customer-reported issues and higher customer satisfaction for your brand.
Professional QA services incorporated into the development process maximize your efforts, ensuring efficient workflows. Catching bugs and issues at early development stages results in a smoother development cycle, without costly and time-consuming rework.
The accelerated development process helps improve time-to-market as well, because the project is more likely to meet its deadlines.
Quality Assurance is a vital part of any software development process, securing high-quality, stable, and user-friendly products that meet customer expectations and industry standards.
If you want to learn more information about the specifics of the QA process or need help from an expert QA team, feel free to contact NIX for a consultation. Beyond QA services, we can cover the entire development process, creating complex, secure, and modern software solutions that will improve your business outcomes.
What type of nutrient is starch?
c- simple carbohydrate
d- compound carbohydrate
e- complex carbohydrate
Starch is a complex carbohydrate.
Starch is a type of complex carbohydrate, also known as a polysaccharide, that serves as an important source of energy for the body. This nutrient is made up of multiple simple sugar molecules, specifically glucose, bonded together in a chain-like structure.
When we consume foods containing starch, our bodies break down the starch into individual glucose units. These glucose molecules are then used as a primary source of energy for various bodily functions, including cell metabolism and physical activities.
Therefore, it is crucial to include sources of starch in your diet to ensure you have sufficient energy levels for your daily activities. Complex carbohydrates like starch provide a steady release of energy compared to simple sugars, making them an essential nutrient for overall health and well-being. Include foods such as whole grains, legumes, potatoes, and corn in your diet to ensure an adequate intake of starch and other complex carbohydrates.
In today’s digital age, where cyber threats loom large and data breaches are all too common, the need for enhanced security measures has never been more pressing. Enter Two-Factor Authentication (2FA), a security process that goes beyond the traditional password to provide an extra layer of protection for users.
What is Two-Factor Authentication?
Two-Factor Authentication, often abbreviated as 2FA, is an authentication mechanism designed to ensure that people trying to gain access to an online account are who they claim to be. Instead of merely asking for a username and password, 2FA requires a second piece of evidence, or “factor,” to verify identity. This second factor could be something you know (like a password), something you have (like a physical device), or something you are (like a fingerprint or other biometric trait).
| Factor Type | Examples |
| --- | --- |
| Something you know | Password, PIN, security question |
| Something you have | Smartphone, smart card, security token |
| Something you are | Fingerprint, facial recognition, voice pattern |
Why is 2FA Important?
The digital landscape is fraught with hackers and cybercriminals who have become adept at stealing personal information. Traditional passwords, no matter how complex, are no longer enough to keep these adversaries at bay. With 2FA, even if a malicious actor obtains your password, they would still need the second factor to access your account, making unauthorized access significantly more challenging.
Furthermore, the increasing number of high-profile data breaches has made consumers more conscious of their digital security. Businesses, recognizing this concern, are implementing 2FA to not only protect their users but also to build trust and enhance their reputation.
The Evolution of Authentication
Historically, the humble password was the sole guardian of our digital identities. However, as cyber threats evolved, so did our defenses. From PINs at ATMs to biometric scans on smartphones, the journey of authentication has been one of continuous innovation. Two-Factor Authentication is the next logical step in this evolution, bridging the gap between convenience and security.
The Mechanics of 2FA
Understanding the intricacies of Two-Factor Authentication requires a deep dive into its core mechanics. At its heart, 2FA is about layering security measures to create a more robust defense against unauthorized access.
The Dual Layers of Security
The primary principle behind 2FA is the combination of two distinct authentication methods. This dual-layer approach ensures that even if one factor is compromised, the second layer remains intact, acting as a barrier against potential breaches.
- First Factor – Knowledge-Based Authentication: This is the traditional method most people are familiar with. It involves something the user knows, such as a password, PIN, or the answer to a security question. While this method has been the standard for years, it’s also the most vulnerable. Hackers often exploit weak or reused passwords, making this layer susceptible to breaches.
- Second Factor – Possession or Inherence-Based Authentication: This layer is where 2FA truly shines. It requires the user to present evidence of something they have (like a smartphone or a hardware token) or something inherent to their identity (like a fingerprint or voice pattern). This factor is much harder for cybercriminals to replicate or steal, adding a significant layer of security.
Common 2FA Methods
Several methods have emerged as popular choices for the second factor in 2FA:
- SMS Verification: After entering a password, the user receives a one-time code via SMS, which they must enter to gain access. While convenient, this method has vulnerabilities, especially if the user’s phone is compromised or if the SMS is intercepted.
- Authenticator Apps: Applications like Google Authenticator or Authy generate time-sensitive codes that users must input after providing their password. These apps are generally more secure than SMS verification.
- Hardware Tokens: These are physical devices that generate codes or use a button press to authenticate. They are not connected to the internet, making them highly secure against online hacks.
- Biometrics: Fingerprints, facial recognition, and voice patterns are unique to each individual, making them an excellent choice for authentication. Modern smartphones often come equipped with the necessary sensors for biometric verification.
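Authenticator apps and hardware tokens in the list above typically derive their codes with the time-based one-time password (TOTP) algorithm. Below is a minimal sketch in Python following the RFC 6238 reference parameters (HMAC-SHA1, 30-second steps, 6 digits); the `totp` helper name is our own, and real deployments may vary these parameters:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, interval=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = int(time.time()) if for_time is None else int(for_time)
    counter = t // interval                       # which 30-second step we are in
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and app share the secret at enrollment, then independently
# derive matching codes from the current time:
shared_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(shared_secret))
```

Because the code depends only on the shared secret and the clock, the app can work fully offline, which is why these apps resist the interception attacks that plague SMS delivery.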
The User Experience
While 2FA adds an extra step to the login process, it’s a small price to pay for enhanced security. Most users find the process straightforward, especially with intuitive methods like biometric verification. Businesses implementing 2FA often prioritize user experience, ensuring that the added security doesn’t come at the cost of convenience.
Real-World Applications of 2FA
As the digital realm expands, the application of Two-Factor Authentication has permeated various sectors, proving its versatility and importance. From banking to social media, 2FA has become a standard security measure, ensuring that users’ data remains protected.
Banking and Financial Transactions
One of the earliest adopters of 2FA was the banking sector. Given the sensitive nature of financial data, banks have always been at the forefront of security innovations:
- ATM Transactions: The simple act of withdrawing money from an ATM involves 2FA. The ATM card (something you have) combined with a PIN (something you know) ensures that even if a card is lost or stolen, unauthorized access is prevented without the PIN.
- Online Banking: Many banks now send a one-time code to the user’s registered mobile number or email when they attempt to log in or make a significant transaction. This code acts as the second factor, ensuring an added layer of security.
Social Media and Online Platforms
With the rise of social media platforms and the vast amount of personal data they hold, implementing 2FA has become crucial:
- Email Services: Providers like Gmail and Outlook offer 2FA options, usually through SMS or authenticator apps. Given that email accounts can be the gateway to resetting passwords for various services, securing them with 2FA is paramount.
- Social Networks: Platforms like Facebook, Twitter, and Instagram have introduced 2FA to protect users’ accounts from unauthorized access, especially given the rise in account hacking incidents.
E-commerce and Online Shopping
The e-commerce sector, which deals with both personal and financial data of users, has also embraced 2FA:
- Payment Gateways: When making online purchases, users often receive a one-time password (OTP) on their registered mobile number, which they must enter to complete the transaction.
- Account Logins: E-commerce platforms like Amazon and eBay offer 2FA options for logging in, ensuring that users’ accounts and purchase histories remain secure.
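One-time passwords in flows like these are only useful if they are unpredictable. A sketch of generating one from a cryptographically secure source in Python (the `generate_otp` helper is hypothetical):

```python
import secrets

def generate_otp(digits=6):
    """Draw each digit from the OS entropy pool rather than a seeded PRNG."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))

# A predictable generator (e.g. `random` seeded by the clock) would let
# an attacker forecast upcoming codes; `secrets` avoids that class of bug.
print(generate_otp())
```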
Healthcare and Sensitive Data
In sectors where data sensitivity is of utmost importance, 2FA plays a pivotal role:
- Medical Records: Access to electronic medical records often requires multiple authentication steps, especially when accessed remotely, to ensure patient confidentiality.
- Research and Development: Companies involved in R&D, especially in fields like pharmaceuticals or technology, use 2FA to protect their proprietary data from industrial espionage.
Advantages of Using 2FA
In an era where cyber-attacks are becoming increasingly sophisticated, the importance of robust security measures cannot be overstated. Two-Factor Authentication, with its dual-layered approach, offers several advantages that make it a preferred choice for many organizations and individuals.
Enhanced Security Against Unauthorized Access
The primary benefit of 2FA is the added layer of security it provides:
- Barrier to Breaches: Even if a hacker manages to obtain a user’s password, they would still need the second factor to gain access. This makes breaching an account significantly more challenging.
- Protection from Common Attacks: Techniques like brute force attacks, where hackers try multiple password combinations, become less effective when 2FA is in place.
Protection Against Identity Theft and Online Fraud
Identity theft can have severe consequences, both financially and reputationally:
- Financial Safety: With 2FA, even if someone has stolen your credit card details, they would need the second authentication factor (like an OTP sent to your phone) to complete a transaction.
- Personal Data: For platforms that store personal data, 2FA ensures that even if a password is compromised, the data remains secure.
Increased Trust and Credibility
For businesses, implementing 2FA can enhance their reputation:
- Consumer Confidence: Customers are more likely to trust and engage with platforms they believe are secure. Knowing that a service uses 2FA can boost this confidence.
- Regulatory Compliance: Many industries have regulations that mandate certain security measures. Implementing 2FA can help businesses meet these requirements.
Flexibility and Customization
2FA solutions are versatile:
- Multiple Methods: Organizations can choose from various 2FA methods, such as SMS, authenticator apps, or biometrics, depending on their needs and the level of security they desire.
- Adaptive Authentication: Some advanced 2FA systems can adapt based on user behavior. For instance, if a user logs in from a new location or device, the system might prompt for 2FA, even if it doesn’t always do so.
Cost-Effective Security Solution
Compared to other security measures, 2FA offers a cost-effective solution:
- Reduced Breach Costs: The expenses associated with a data breach can be astronomical. By preventing breaches, 2FA can save organizations significant amounts.
- Low Implementation Costs: Many 2FA solutions, especially those based on software or SMS, are relatively inexpensive to implement and maintain.
Challenges and Vulnerabilities
While Two-Factor Authentication offers a robust layer of security, it is not without its challenges and vulnerabilities. Understanding these potential pitfalls is crucial for both users and organizations to maximize the benefits of 2FA while minimizing risks.
Phishing Attacks and Man-in-the-Middle Threats
Cybercriminals have evolved their tactics to bypass 2FA:
- Phishing Schemes: Attackers can create fake login pages to trick users into providing both their password and the second authentication factor, such as an OTP.
- Man-in-the-Middle (MitM) Attacks: In this scenario, a hacker intercepts communication between the user and the service. They can capture both the password and the second factor, granting them unauthorized access.
Issues with SMS-Based Verification
While convenient, SMS-based 2FA has vulnerabilities:
- SIM Swapping: Attackers can trick mobile carriers into transferring a victim’s phone number to a new SIM card. Once done, they can receive all SMS messages, including OTPs, meant for the victim.
- Network Interception: In some cases, SMS messages can be intercepted during transmission, especially if they’re not encrypted.
User Inconvenience and Resistance
Balancing security with user experience is a challenge:
- Extra Step: Some users find the additional authentication step cumbersome, especially if they need to access a service frequently.
- Locked Out: If a user loses their second factor, like a phone or hardware token, they might be locked out of their account until they can recover or reset the authentication method.
Technical Glitches and Failures
No system is infallible, and 2FA is no exception:
- System Downtime: If the system responsible for generating or sending the second factor experiences downtime, users might be unable to access their accounts.
- Sync Issues: Authenticator apps rely on time-based codes. If there’s a time sync issue between the server and the user’s device, the generated codes might not work.
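One common mitigation for such drift is for the server to accept codes from a small window of adjacent time steps. A sketch in Python, using a stand-in `code_for_step` derivation for illustration (a real server would use the RFC 4226 HMAC-based construction):

```python
import hashlib

def code_for_step(secret, step, digits=6):
    # Illustrative stand-in derivation, NOT the real HOTP/TOTP construction.
    h = hashlib.sha256(secret + step.to_bytes(8, "big")).digest()
    return str(int.from_bytes(h[:4], "big") % 10 ** digits).zfill(digits)

def verify(secret, submitted, current_step, window=1):
    """Accept the code if it matches any step within +/- `window` steps."""
    return any(code_for_step(secret, s) == submitted
               for s in range(current_step - window, current_step + window + 1))

secret = b"shared-secret"
late_code = code_for_step(secret, 999)     # client clock one step behind
print(verify(secret, late_code, 1000))     # accepted: within the drift window
```

Widening the window improves tolerance to drift but also enlarges the attacker's guessing target, so deployments usually keep it to one step on either side.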
Over-reliance on 2FA
A potential pitfall for organizations:
- Complacency: Believing that having 2FA in place makes a system impervious to attacks can lead to neglecting other essential security measures.
- Single Point of Failure: If the 2FA system itself has vulnerabilities and is compromised, it can become a single point of failure, jeopardizing the security of all accounts.
Modern Implementations and Innovations
The realm of Two-Factor Authentication is not static. As technology evolves and cyber threats become more sophisticated, so do the methods and tools associated with 2FA. Let’s delve into some of the modern implementations and innovations that are shaping the future of this security measure.
Third-Party Authenticator Apps
Beyond the traditional SMS-based 2FA, third-party apps have gained popularity:
- Google Authenticator and Authy: These apps generate time-sensitive codes for users to input during the authentication process. They operate offline, reducing the risk of interception.
- Push-Based Authentication: Some apps, like Duo Security, offer push notifications as an authentication method. Instead of entering a code, users simply approve or deny a login request sent to their device.
Biometric-based 2FA methods are becoming more advanced:
- Facial Recognition: Beyond smartphones, facial recognition is being integrated into various systems, from laptops to secure building access.
- Voice Biometrics: Unique voice patterns can be used as an authentication factor, especially useful for phone-based services.
- Behavioral Biometrics: This involves analyzing patterns in user behavior, such as keystroke dynamics or mouse movements, to authenticate a user.
Universal 2nd Factor (U2F) and Security Keys
U2F is an open authentication standard that strengthens and simplifies 2FA:
- Physical Security Keys: Devices like the YubiKey can be plugged into a computer or tapped on a device to authenticate. They’re resistant to phishing and man-in-the-middle attacks.
- Wireless Options: Some U2F keys use NFC or Bluetooth, allowing for wireless authentication.
Blockchain technology is being explored for 2FA:
- Decentralized IDs: Instead of relying on a centralized server, authentication data is stored on a decentralized ledger, enhancing security.
- Smart Contracts: These can be used to set up and verify authentication protocols, ensuring transparency and security.
Adaptive and Risk-Based Authentication
Moving beyond static methods, adaptive 2FA adjusts based on the situation:
- Risk Analysis: The system assesses the risk associated with a login attempt. For instance, logging in from a new country might trigger 2FA, while routine logins might not.
- Machine Learning: Algorithms analyze login patterns and behaviors, adjusting authentication requirements in real-time.
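A toy illustration of this kind of risk-based gating in Python; the signals, weights, and threshold here are invented for illustration, where a production system would use trained models over far richer telemetry:

```python
def risk_score(attempt, profile):
    """Score a login attempt: higher means riskier (toy heuristic)."""
    score = 0
    if attempt["country"] not in profile["known_countries"]:
        score += 50   # new geography: strongest signal in this toy model
    if attempt["device_id"] not in profile["known_devices"]:
        score += 30   # unrecognized device
    if not 6 <= attempt["hour"] <= 23:
        score += 10   # unusual hour of day
    return score

def requires_second_factor(attempt, profile, threshold=40):
    return risk_score(attempt, profile) >= threshold

profile = {"known_countries": {"US"}, "known_devices": {"laptop-1"}}
routine = {"country": "US", "device_id": "laptop-1", "hour": 10}
unusual = {"country": "BR", "device_id": "phone-9", "hour": 3}
print(requires_second_factor(routine, profile))   # False: skip the 2FA prompt
print(requires_second_factor(unusual, profile))   # True: demand a second factor
```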
In an era where our digital footprints expand with each passing day, the importance of safeguarding our online identities and assets cannot be overstated. Two-Factor Authentication, with its multi-layered approach to security, stands as a beacon of hope against the ever-growing tide of cyber threats. From its foundational principles to the latest innovations, 2FA represents a dynamic and evolving response to the challenges of digital security.
While no system is entirely infallible, the adoption and continuous refinement of 2FA underscore a collective commitment to a safer digital future. For businesses, it’s not just about protecting data; it’s about fostering trust and ensuring continuity. For individuals, it’s about peace of mind in an interconnected world.
As we navigate the complexities of the digital realm, tools like 2FA will be our compass, guiding us towards safer shores. Embracing these tools, staying informed about potential vulnerabilities, and always being proactive in our approach to digital security are the keys to thriving in this digital age. In the end, Two-Factor Authentication is not just a technical measure; it’s a testament to our enduring adaptability and resilience in the face of evolving challenges. | <urn:uuid:70e81dcf-7e9d-4dc9-b96d-3bd2016b2c8b> | CC-MAIN-2024-38 | https://hextechsecurity.com/two-factor-authentication-digital-security/ | 2024-09-17T07:56:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00487.warc.gz | en | 0.91834 | 3,219 | 3.5625 | 4 |
Data leakage is a subtype of data loss. Data leakage occurs when sensitive data is exposed to the public, typically due to errors in configuration. Negligence, malicious intent, and human error can all play a role in data leaking, but a close focus on secure configuration can help prevent it.
Data leakage also occurs when features or data points that should not be available are inadvertently included in a machine learning training dataset, causing the wrong data to be incorporated into model training. Later, the model performs poorly on new, unseen data.
Preventing data leakage is crucial for preventing data loss, providing reliable security, and ensuring that machine learning algorithms are accurate.
Data Leakage FAQs
What is Data Leakage?
Data leakage in cyber security refers to a specific kind of sensitive data loss—one that happens due to misconfigured security settings, weak access controls, and software vulnerabilities. Publicly accessible databases that contain sensitive information, unsecured cloud storage, or improperly protected network shares are all examples of sources of this kind of vulnerability.
This kind of data loss can expose customer details, intellectual property, trade secrets, or other valuable information and hurt the integrity and availability of the system.
Implement and maintain the system according to secure configuration guidelines, and use a few key tools to mitigate the risks that lead to data leakage. Also involve security teams in reviews of designs and new feature roll-outs, especially for external portal use cases:
- Access controls
- Data loss prevention (DLP) solutions
- Data exposure prevention (DEP) solutions
- Regular security assessments
- Up-to-date software patches
What is Data Leakage in Machine Learning?
Data leakage in machine learning occurs when training data is inadvertently exposed to information it should not contain, skewing the model. This is a common problem, particularly with complex datasets, and it undermines a prediction model's ability to perform and generalize. For example:
- Time series datasets, which make creating training and test sets especially challenging;
- Graph problems, for which random sampling methods can be difficult to construct;
- Analog observations such as images and sounds with samples stored in separately sized files with timestamps.
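For the time-series case in particular, the fix is to split chronologically rather than randomly. A minimal Python sketch, with readings invented for illustration:

```python
# Hourly readings, ordered oldest to newest (values invented for illustration).
series = [101, 103, 102, 108, 110, 115, 117, 121, 125, 130]

# Chronological split: every training point precedes every test point,
# so no "future" information leaks into training. A random shuffle
# before splitting would silently break this guarantee.
cut = int(len(series) * 0.8)
train, test = series[:cut], series[cut:]

print(train)   # the 8 oldest readings
print(test)    # the 2 newest readings, strictly in the model's future
```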
Data leakage in machine learning is different from data breach, data loss, data exfiltration, and data exposure in a few key ways:
- Data source. External data sources with information about or highly correlated with the target variable can mistrain the model if they are exposed to it. To prevent data leakage, carefully separate training, validation, and test datasets.
- Future data. Information that would not be available at the time of prediction is mistakenly included in training data.
- Data preprocessing. Preprocessing steps such as scaling or imputation may inadvertently introduce information about the entire dataset, including the test set, leading to leakage. For example, using statistics calculated on the entire dataset for normalization rather than just the training set may cause leakage.
- Target variable. Information derived from the target variable might be included in the feature set if the target variable was used to generate features or if data from the future was used to construct the variable.
- Feature engineering. Creating features that directly or indirectly encode information about the target variable results in the model learning from the variable itself, not relevant patterns in the data.
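The preprocessing pitfall above is worth seeing concretely: fitting normalization statistics on the full dataset quietly feeds test-set information into training. A stdlib-only Python sketch with made-up numbers:

```python
import statistics

# Toy dataset: the extreme value happens to fall in the test split.
full = [2.0, 4.0, 6.0, 8.0, 100.0]
train, test = full[:4], full[4:]

leaky_mean = statistics.mean(full)    # 24.0: already "knows" about 100.0
clean_mean = statistics.mean(train)   # 5.0: computed from training data only

def center(xs, mean):
    return [x - mean for x in xs]

print(center(train, leaky_mean))   # training features shifted by test-set info
print(center(train, clean_mean))   # [-3.0, -1.0, 1.0, 3.0]
```

The same principle is why pipeline tools fit scalers and imputers on the training split only, then apply the fitted transform to the test split.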
Data Breach vs Data Leakage
In the context of machine learning, data leakage vs data breach refer to distinct data access issues:
Leakage is an internal modeling problem: improperly separating train and test datasets, for example, can cause data leakage.
A breach is the unauthorized access, disclosure, or compromise of any sensitive data used in the model. This kind of data exposure might result from a range of vulnerabilities, or unauthorized / incorrect use of 3rd party models by developers.
Data Leakage vs Data Access Exposure
Data access exposure is a holistic approach to security, taking on both data leakage and data loss in machine learning and broader cybersecurity contexts. It identifies vulnerabilities or risks associated with unauthorized access to sensitive or confidential data.
For example, data exposure occurs when weaknesses or loopholes in the security measures protecting data leave it reachable from the public internet, allowing unauthorized individuals to access it. Any discussion of data access exposure is really about proactively preventing the exposure of sensitive data by limiting access to that information and correctly configuring the associated application functionality.
It is critical to address data access exposure because it directly impacts the confidentiality, integrity, and availability (CIA) of data. Ensuring proper data exposure and access modeling minimizes risk, maximizes trust, and safeguards assets to keep operations secure and reliable.
Various factors can cause data access exposure, such as inadequate access controls, weak authentication mechanisms, unencrypted data transmission, insecure storage practices, or malicious activities like hacking or insider threats.
Addressing data exposure requires implementing robust access controls, encryption, strong authentication methods, effective logging and threat detection capabilities, regular security audits, and employee training on best practices for data security.
Data Leakage vs Data Loss
Similarly, data leakage and data loss represent distinct data handling issues, but data loss is often related to data breaches and technical failures.
We have discussed why data leakage occurs.
Loss can result in the inability to access or retrieve data. As mentioned, breaches might cause data loss, as can software and hardware issues, human error, natural disasters, and accidental or intentional deletion. Most data loss prevention tools focus on identifying unsafe or non-compliant forms of data transfer.
Data Exfiltration vs Data Leakage
Corruption caused by this kind of data leakage is limited to the internal data model. There is no damage outside the system—unless the flawed model is deployed.
However, data exfiltration poses a significant security risk for companies, especially when it involves sensitive intellectual property (IP) being used to train third-party large language models (LLMs). This can lead to the unauthorized use of the company’s intellectual property, a resulting loss of competitive advantage, and legal/reputational risk.
Similar to a breach, during data exfiltration, an attacker extracts sensitive data from a computer or network without authorization. Their goal is usually financial gain, sabotage, or some other malicious intent.
For SaaS platforms and other third-party systems, as well as subcontractors with access to company data living in cloud systems, the risk of data leakage is exacerbated for several reasons.
Third-party systems and subcontractors expand the attack surface for potential breaches. Each additional entity with access to the data increases the risk of a security incident leading to data leakage.
When data lives across various systems that many parties can access, it presents more monitoring challenges. This complexity makes it easier for malicious actors to exploit vulnerabilities and exfiltrate sensitive information.
Companies may have limited control over how third parties handle their data. Even if they have robust security measures in place within their own infrastructure, weaknesses in third-party systems or subcontractors’ practices can cause data breaches.
Data Leakage Examples
There are several common examples of data leakage:
- Misconfigured SaaS web portals. These make it easier for attackers to access data and harder for users to monitor how data is being stored and used.
- Unauthorized use of 3rd party LLMs. This type of data leakage is really a form of IP theft. More than a security risk, it also presents a potential source of liability for brands.
- Compromised subcontractors. Mishandling of data by 3rd and 4th parties is possible as chains of subcontractors are added to the mix. Complex webs of data leakage make for difficult forensic processes for IT teams.
- Misconfigured storage. Misconfigured cloud storage services may allow public access to sensitive data.
- Unsecured databases. Misconfigured database servers may be left exposed without proper authentication or access controls.
- Exposed Application Programming Interfaces (APIs). Improperly configured without access controls or authentication mechanisms, these may allow unrestricted access and expose sensitive data.
- Open File Transfer Protocol (FTP) servers. Open or configured with weak security settings, FTP servers allow unauthorized users to access and download sensitive files.
- Improper email configuration. Misconfigured email servers or forwarding rules may unintentionally expose sensitive communications or attachments to unintended recipients.
- Unencrypted data transmission. Failure to encrypt data can lead to interception and exposure.
- Time-based data leakage. Failure to properly exclude future data from time-series datasets allows it to leak into the training set. This in turn enables the model to learn from information that would not be available at prediction time.
- Information from test set. Accidentally including test set data during model training, such as unique test set labels or features, produces overly optimistic performance estimates during training.
- Proxy variables. Some proxy variables indirectly leak target variable information, producing models that look strong in training but perform poorly in practice. For example, account creation date as a proxy variable for fraud prediction is often of limited value.
- Cross-validation leakage. Incorrect cross-validation can cause data leakage from the test set into the training process.
These data leakage use cases can occur when the model learns irrelevant patterns that are present merely due to chance. Understanding the various causes and types of data leakage can help focus efforts on detection and prevention.
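The train/test separation failures above can be made concrete. The following is a minimal pure-Python sketch (with invented numbers) showing how computing preprocessing statistics over the full dataset lets test-set information contaminate the training features:

```python
# Scaling with statistics computed over the FULL dataset (a common mistake)
# lets an extreme test-set value shift the training features.
from statistics import mean, stdev

data = [2.0, 4.0, 6.0, 8.0, 100.0]   # the last value belongs to the test split
train, test = data[:4], data[4:]

leaky_mu, leaky_sd = mean(data), stdev(data)    # leaks test information
clean_mu, clean_sd = mean(train), stdev(train)  # training split only

scale = lambda xs, mu, sd: [(x - mu) / sd for x in xs]
print(scale(train, leaky_mu, leaky_sd))  # distorted by the unseen test value
print(scale(train, clean_mu, clean_sd))
```

The fix is the same in any framework: fit preprocessing steps on the training split only, then apply them unchanged to validation and test data.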
How to Detect Data Leakage
Detecting data leakage requires careful analysis of the model and its performance:
- Prevent data exposure. Examine users who have access to identify any that might increase data leakage risk. Limit access to prevent exposure.
- Engage in cross-validation. Use appropriate cross-validation techniques, such as time-series or group-wise cross-validation, to prevent leakage across folds. Monitor performance across folds to detect significant variations that could suggest leakage.
- Target leakage indicators. Look for signs of target data leakage problems, such as unusually high model performance metrics during training, especially if they cannot be replicated on new, unseen data.
- Holdout validation. Set aside a portion of the data as a validation set and periodically use it to evaluate model performance and to detect any unexpected drops or increases in performance that could indicate data leakage threats.
- Manually inspect data, features, and preprocessing steps. Identify any potential data leakage causes, such as data transformations, feature engineering techniques, or data sources that may inadvertently include information about the target variable.
- Consult experts. Seek input from domain experts or data scientists familiar with data leakage detection and prevention to identify potential sources of leakage and validate the appropriateness of features and preprocessing steps.
- Conduct feature analysis. Examine the features used in the model to identify any that might increase data leakage risk. Identify features that might be too predictive or contain information from the target variable or future data.
These are the basic principles of data leakage detection. There are multiple tools and approaches for implementing them.
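One of these detection aids, time-series cross-validation, can be sketched with an expanding window so that no fold ever trains on the future. This is a minimal illustration, not a production splitter:

```python
# Expanding-window splits: each validation fold strictly follows its
# training data in time, so future information cannot leak backwards.
def time_series_splits(n_samples, n_folds):
    fold = n_samples // (n_folds + 1)
    for k in range(1, n_folds + 1):
        train_idx = list(range(0, k * fold))
        val_idx = list(range(k * fold, (k + 1) * fold))
        yield train_idx, val_idx

for train_idx, val_idx in time_series_splits(10, 4):
    assert max(train_idx) < min(val_idx)   # training never sees the future
    print(train_idx, val_idx)
```

If a model scores much better under ordinary shuffled k-fold than under splits like these, that gap is itself a leakage signal.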
How to Prevent Data Leakage
A well-crafted data leakage prevention policy is crucial. Here are some data leakage prevention best practices for the cybersecurity context:
- Data leakage monitoring. Continuously monitor with a data leakage solution to detect unexpected behaviors and deviations from expected outcomes.
- Conduct regular data audits and maintain documentation. Audits of data sources can identify potential leakage before it happens and ensure data integrity. Documentation facilitates transparency and reproducibility and assists with data leakage risk assessment.
- Security and access controls. Implement strong measures to protect sensitive data from unauthorized disclosure. Limit access to datasets, model outputs, and other sensitive information based on role or need as well as data leakage prevention policies.
- Employee training and awareness. Comprehensive data leakage protection policy education for employees is critical. Encourage employees to report suspicious activities, security incidents, and potential data leakage incidents promptly to IT or security teams for investigation. Ensure team members understand how any data leakage protection solutions in use work.
- Perform cross-validation. Perform cross-validation correctly, with data preparation and separated data folds to prevent leakage between them. Use appropriate data leakage prevention tools and cross-validation techniques, such as k-fold, stratified, or time-series cross-validation, depending on the nature of the data and problem.
- Add noise. Add random noise to input data to smooth the effects of leaking variables.
- Ensure strict data separation. Data leakage prevention tools separate training, validation, and test datasets to prevent information leakage between them.
- Feature engineering. Avoid using features that directly or indirectly encode information about the target variable. Be cautious when creating features from time-related data. Data leakage prevention controls apply feature scaling, imputation, and other preprocessing techniques separately to the training and testing datasets to prevent leakage of information from the test set.
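Strict data separation can also be enforced mechanically. One simple approach (an illustration, not the only way) is to assign each record to a split by hashing a stable identifier, so that split membership never drifts between reruns or pipeline stages:

```python
import hashlib
from collections import Counter

def assign_split(record_id):
    # Hash a stable identifier into [0, 100) and carve fixed bands from it:
    # the same record always lands in the same split.
    bucket = int(hashlib.sha256(record_id.encode()).hexdigest(), 16) % 100
    return "test" if bucket < 10 else "val" if bucket < 20 else "train"

counts = Counter(assign_split(f"user-{i}") for i in range(1000))
print(counts)   # roughly 80/10/10 train/val/test
```

Because the assignment is a pure function of the ID, newly arriving data can be routed without ever touching the existing test set.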
What is Data Leakage Protection?
Data leakage detection techniques and solutions offer a range of capabilities that address various aspects of data leakage protection. In general, they offer several features:
- Anomaly detection. Data leakage detection systems monitor data access patterns, user behavior analytics, and behaviors to identify deviations from normal usage which could indicate potential data leakage or unauthorized access.
- Data loss prevention (DLP). DLP technologies monitor and filter outbound traffic to identify and block any attempt to transfer sensitive data outside the organizational perimeter.
- Regular audits and reviews of system configurations and access controls. This identifies vulnerabilities and ensures compliance with regulatory requirements and industry standards.
- Configuration monitoring and management. Data leakage controls continuously monitor and manage system configurations to ensure they adhere to security best practices and alert administrators of deviations and recommended remediation steps.
How Does AppOmni Detect and Prevent Data Leakage?
As the leading SSPM solution, AppOmni detects unintended data exposures and other vulnerabilities or risks caused by customer-side misconfigurations in SaaS platforms. This holistic data access exposure approach to security prevents both data leakage and sensitive data loss in the machine learning niche and in broader cybersecurity contexts.
AppOmni Insights analyzes the various factors that can cause data access exposure, including exposed APIs, improper permissions, incorrect ACLs and IP restrictions, or toxic combinations of misconfigurations, to help users identify issues.
The platform continuously detects these misconfigurations and many similar SaaS access risks, enabling timely, proactive SaaS threat detection, prevention, and guided steps for remediation. AppOmni can alleviate the burden of permission management and offer users full visibility into organizational access.
Learn more about how AppOmni offers continuous data access exposure protection that can help prevent data leakage. | <urn:uuid:c69305cd-f79e-4b04-8b1a-5231bda4b6bb> | CC-MAIN-2024-38 | https://appomni.com/glossary/data-leakage/ | 2024-09-21T03:34:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00187.warc.gz | en | 0.900796 | 2,984 | 3.484375 | 3 |
Updated: Jul 21, 2020
Seemingly one of the most overlooked security vulnerabilities in the web applications that we test is the deserialization of untrusted data. I say overlooked because awareness of this issue seems to be comparatively low among web developers. Contrast that with the severity of this class of vulnerability (an attacker running arbitrary code on your web server), and the fact that it is present in the more common modern web application frameworks (.NET and Java), and you have a dangerous situation. OWASP recently recognised this, moving it into their Top 10 Most Critical Web Application Security Risks.
If you are deserialising any data, whether through JSON, XML, or binary formats, there is a chance that your code is vulnerable to this type of attack.
The dangers of deserialisation in the .NET framework in particular have only recently been demonstrated, with practical attacks having been publicised just last year. Examples in this article will largely refer to .NET code, but the principles apply to many languages; especially Java. This is not intended to be a comprehensive review of all possible attacks, as the subject is quite deep; but it will help to catch the vast majority of the issues we see.
So what is this vulnerability?
Deserialisation is the process of turning a stream of bytes into a fully-formed object in a computer’s memory. It makes the job of a programmer much simpler: instead of parsing complicated data structures by hand, you just let the framework handle it.
One aspect of deserialisation that is important to understand, both for functionality and for security, is that simply filling in an object’s internal properties may not be sufficient to reconstruct it. Some code may have to run to complete the process: think of hooking up internal references to sub-objects or global objects, or ensuring that data is normalised. In .NET, for example, this is done using the OnDeserialized and OnDeserializing attributes, as well as OnDeserialization callbacks.
The big problem with deserialisation arises when an attacker can coerce another system to deserialise an object of an unexpected type. If the code doing the deserialising trusts whatever type it receives, that’s where things start to go south. Take the following code for example:
Vulnerable deserialisation code:
byte[] value = /* user-controlled data */;
MemoryStream memoryStream = new MemoryStream(value);
var formatter = new BinaryFormatter();
MyType deserialized = (MyType)formatter.Deserialize(memoryStream); // Cast the object to the expected type
deserialized.SomeMethod();
…
At first glance, it appears as though we’re checking that the serialised object that we receive is of the correct type. We certainly verify it before using the object.
From an attacker’s point of view, however, the code does the following:
Receive data from an attacker
Determine the type of object that the attacker wants to use
Run all the deserialisation callbacks for that type of object, using the data sent by the attacker as parameters
Then, a type check is performed, which will fail… but the damage has already been done: we have run the OnDeserialized method of an unexpected type!
Then, eventually, run the object’s finaliser (upon being cleaned up by the garbage collector)
But what’s the big problem with that? An attacker can’t just create their own type, and send it over the wire to be deserialized. However, what if there were a built-in type which could be leveraged to perform some malicious action?
Well, security researchers have done just that: various types, which are built-in to their respective frameworks (.NET, Java, etc.) have been discovered, which, upon being deserialised, can be leveraged to perform malicious actions, including executing arbitrary code on the server. Even if the code immediately checks “Was this the expected type?” (as in the above example), the damage has already been done by the very act of deserialising.
Take, for example, the .NET type SortedSet.
Upon deserialisation, a SortedSet needs to make sure that it is indeed sorted; otherwise its internal state would essentially be corrupted. So, in its deserialisation callback, the SortedSet class calls its sorting function to make sure everything’s as it should be.
To allow programmers the flexibility to sort items according to their own business rules, the SortedSet class allows the programmer to set an alternative sorting function, as long as it is a method that receives two parameters of the expected type. So an attacker can send a malicious payload that does the following:
Create a SortedSet<string> object that contains two values, say for example “cmd” and “/c ping 127.0.0.1”
Configure the SortedSet to use the method Process.Start(string process, string arguments) to “sort” its entries. This method, while not being used for its intended purpose, technically meets the criteria above: it’s a method that takes two strings as parameters.
Serialise the SortedSet, and send it to the vulnerable system.
Upon being deserialised, in its callback, the vulnerable system will attempt to “sort” the set by calling Process.Start(“cmd”,”/c ping 127.0.0.1″) which will in turn run an arbitrary command (in this case, a ping command). The thread will immediately throw an exception because Process.Start returns an unexpected type… but again, the damage has already been done, as the command is already running in a separate process.
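This pattern is not unique to .NET. As a hedged illustration in Python, the standard pickle module has the same property: the serialised payload chooses which callable runs during deserialisation, just as the SortedSet payload above chooses the "comparer". A benign callable is shown here; a real attacker would substitute something like os.system.

```python
import pickle

class Gadget:
    # __reduce__ tells pickle how to rebuild the object: "call this
    # function with these arguments". An attacker controls both.
    def __reduce__(self):
        return (print, ("code ran during deserialisation!",))

payload = pickle.dumps(Gadget())   # what the attacker would send
result = pickle.loads(payload)     # prints before any type check can run
```

Here result is None (print's return value); by the time the caller can inspect the type, the attacker's code has already executed, which is exactly the "damage has already been done" problem described above.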
The GitHub project ysoserial.net contains a list of other “gadgets” that can be used for code execution in .NET, and will create payloads to exploit vulnerable code. Some of these gadgets have been patched by Microsoft, but the majority are difficult to fix, as they would require breaking changes to the Framework. They are thus still currently available to attackers, despite having been public for over a year.
The Jackson deserialisation library for Java has taken a more aggressive approach in accepting breaking changes; in that, as researchers have found gadgets that can be used for code execution, they are added to a “blacklist” of types which will be prevented from being deserialised. This is somewhat of a “whack-a-mole” approach: it will complicate exploitation for code that is already vulnerable; but should by no means be relied upon to prevent newly-discovered attacks. Known Java deserialisation attacks can be found in the ysoserial GitHub project.
It’s alright, I’m encrypting the serialised data
Cryptography is hard. Even modern ciphers such as AES can be used in an insecure way. We often find incorrectly-implemented cryptography in the websites we test, allowing us to read, and sometimes inject our own data. Or perhaps the encryption key is sitting in a file that an attacker might be able to read through a file disclosure attack, or a temporary server misconfiguration. If you are relying on cryptography as your only protection mechanism against this attack, you’re living dangerously close to the edge.
What can I do about it?
Quite often, deserialisation is just the wrong design pattern, especially for web applications. This is especially true when serialised objects are passed between the client and server to maintain state. Removing the attack surface completely by never accepting serialised objects from the user is often the safest and most correct solution.
If you insist upon using deserialisation, though, how can you avoid this vulnerability? The core problem for deserialisation attacks is that, by controlling the type of an object, an attacker is able to run code that was not intended to be run as part of the vulnerable program. As a result, a developer must configure the deserialiser to check the type of the serialised object before deserialising.
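In Python, for comparison, the equivalent defence is an allow-list of types checked before any object is reconstructed, by overriding pickle.Unpickler.find_class. This is a minimal sketch modelled on the restricted-globals approach in the Python documentation:

```python
import io
import pickle
from collections import OrderedDict

class SafeUnpickler(pickle.Unpickler):
    ALLOWED = {("collections", "OrderedDict")}   # expected types only

    def find_class(self, module, name):
        # Called before the object is reconstructed: the right place
        # to reject unexpected types.
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"forbidden type: {module}.{name}")
        return super().find_class(module, name)

good = pickle.dumps(OrderedDict(status="ok"))
print(SafeUnpickler(io.BytesIO(good)).load())    # the expected type loads fine

evil = pickle.dumps(print)                       # any global outside the list
try:
    SafeUnpickler(io.BytesIO(evil)).load()
except pickle.UnpicklingError as exc:
    print("blocked:", exc)
```

The key point carries across languages: the type check must happen before reconstruction, not after, because deserialisation callbacks run during reconstruction.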
Different deserialisers behave differently in this respect, so it bears going through the main ones that are used:
A review of .NET deserialization mechanisms was performed by Alvaro Muñoz and Olexsandr Mirosh for the 2017 Black Hat conference, and their whitepaper contains detailed information about all major deserialisation libraries in .NET. By way of summary:
If you are using the BinaryFormatter type to deserialise, you are almost certainly vulnerable to this attack, as your code will by default just deserialise whatever type an attacker wants it to. Our advice is to never use this class with untrusted data at all; however you can configure BinaryFormatter to be more secure by restricting the valid deserialised types using the SerializationBinder class.
If you are using Newtonsoft’s Json.Net deserialiser, this is secure-by-default; but it can be accidentally configured to be vulnerable. Specifically, before deserialising an object, the serialised type will be checked to ensure it is the same as the expected type, thus removing an attacker’s ability to control the type of an object. However, a developer may wish to allow subclasses of an expected type, and may configure deserialisation to be polymorphism-friendly by setting the TypeNameHandling property to have a value other than None. As soon as this is done, a door is opened for an attacker: if any deserialised field, either in the parent object, or in some sub-object, is of type System.Object, an attacker will now be allowed to place whatever type they wish into that field, including one of the unsafe code-execution “gadgets”.
XmlSerializer is even harder to make vulnerable, however it has been known to happen. By using .NET generics with a static type, XmlSerializer will prevent arbitrary types from being used, at the expense of some flexibility for the developer. However, if a program’s code sets the expected deserialisation type dynamically (say for example by inspecting the Xml and setting the expected type using reflection), an attacker once again has control over the type that will be deserialised. Using XmlSerializer with a static expected type is secure.
Most other built-in .NET deserialisation libraries use BinaryFormatter or XmlSerializer internally, and would thus inherit their security properties. Other 3rd-party deserialisation libraries also exist, and Muñoz and Mirosh’s paper examines some of them.
The ObjectInputStream.readObject method will deserialise whatever serialised Java object it is given, making it comparable to .NET’s BinaryFormatter class in both its flexibility and its attack surface.
The Jackson JSON deserializer has a similar attack surface to Json.Net, in that it is secure-by-default; but by enabling polymorphic behaviour through the ObjectMapper.enableDefaultTyping() method, arbitrary subclasses may be created.
The bottom line
Deserialisation is a highly flexible and convenient tool for developers… which unfortunately means it’s also highly flexible and convenient for hackers. If you’re deserialising data anywhere in your code, make sure you consider the security implications… or better yet, don’t use deserialisation at all, if you can avoid it. And if you’re a developer, there’s nothing quite like performing this attack on your own code, to give yourself a better understanding of the risks. | <urn:uuid:ead7b940-a48a-49d4-b80c-e6b471aea47f> | CC-MAIN-2024-38 | https://www.ionize.com.au/deserialisation-vulnerabilities/ | 2024-09-21T03:06:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00187.warc.gz | en | 0.923681 | 2,465 | 3.015625 | 3 |
While making data-driven decisions, businesses are often bothered by questions like: Where is the data coming from, and where is it going? This is exactly where data lineage comes into play. It helps businesses decipher all the relevant information around their data and offers clarity on how data flows across their organization.
In this blog, we shall cover all the crucial aspects related to data lineage:
- Meaning of data lineage
- Benefits of data lineage
- Data lineage techniques
- Data lineage best practices
What is data lineage?
Data lineage is the process of comprehending, recording, and visualizing data as it flows from data sources to end-users. Simply put, it helps you gain a granular view of how data flows from source to destination.
With data lineage, businesses can receive continuous updates on how data is moving through the organization, how it is being transformed, where it is stored, and various other metadata.
Organizations can leverage the power of data lineage to conduct system migrations, track errors in data processes, bring data discovery and metadata closer, and reduce risks.
In a nutshell, data lineage is all you need to facilitate data accuracy and consistency. One thing that you must note is that metadata management is crucial for presenting data lineage across on-premises and the cloud. When you manage metadata effectively, it enables you to view data flow via different systems. This makes it easier to identify data concerning a particular report or ETL process.
Benefits of data lineage
Data lineage is like a boon for IT teams. It helps them gain a 360-degree view of the journey of data. Data lineage has powerful impacts on several business areas. Let’s have a look at some of them:
1. Enables effective data governance
With data lineage, businesses can easily map and verify how data is accessed and transformed. This further helps in maintaining regulatory compliance, data quality, data privacy, and other important aspects related to data governance.
2. Facilitates successful data migration
Data lineage uses data graphs to present the relation between various data objects and data flows. It makes it easier for data architects to move data to new systems as data lineage helps them gain a fair idea of the location and lifecycle of data sources.
3. Boosts business productivity
Data lineage tracks how all the sensitive data flows throughout the organization and helps businesses quickly identify and fix data gaps. This further helps in enhancing business products and services.
4. Generates accurate analytics
When it comes to IT operations, data lineage presents the impact of data changes on downstream analytics. It helps you identify the business risks associated with the changes and adopt a proactive approach to change management.
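That impact analysis amounts to walking a lineage graph downstream. Here is a small illustration (with made-up table names) using a plain adjacency list:

```python
from collections import deque

# Edges point downstream: "this dataset feeds these consumers".
downstream = {
    "raw.orders": ["stg.orders"],
    "stg.orders": ["mart.sales", "mart.returns"],
    "mart.sales": ["dash.revenue"],
}

def impacted(node):
    """Everything that could break if `node` changes (breadth-first walk)."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in downstream.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(impacted("raw.orders")))
```

Commercial lineage tools build and maintain this graph automatically from metadata; the traversal itself is this simple.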
Data lineage techniques
Here’s a list of a few standard techniques that businesses can utilize to conduct data lineage:
1. Pattern-based lineage
This technique leverages patterns to perform data lineage. Pattern-based lineage does not deal with the code that is used to transform data. It uses metadata for tables, columns, and business reports. This technique only monitors data and not the data processing algorithms.
2. Lineage by parsing
It is one of the most advanced data lineage techniques. It tends to automatically read the logic that is used to process data. Lineage by parsing has a complex deployment. You need to comprehend different tools and programming languages like ETL logic, SQL, Java solutions, and more.
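At its simplest, parsing-based lineage reads transformation logic and records which inputs feed which outputs. The following is a deliberately naive sketch over a single SQL statement; real tools use full SQL grammars rather than regular expressions:

```python
import re

sql = """INSERT INTO mart.daily_sales
         SELECT o.day, SUM(o.amount)
         FROM raw.orders o JOIN raw.customers c ON o.cust_id = c.id
         GROUP BY o.day"""

# Naively recover the lineage edge: source tables -> target table.
target = re.search(r"INSERT\s+INTO\s+([\w.]+)", sql, re.I).group(1)
sources = re.findall(r"(?:FROM|JOIN)\s+([\w.]+)", sql, re.I)
print(sources, "->", target)
```

Applied across every job in a warehouse, edges like this one assemble into the organization-wide lineage graph.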
3. Lineage by data tagging
In this technique, every data that moves or transforms gets tagged by a transformation engine. Lineage by data tagging requires a consistent transformation tool to control all the data movement. To produce a lineage presentation, all tags are read from start to finish.
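The tagging idea can be sketched in a few lines: each transformation stamps its name onto the record, so the full path can be read back at the destination. This is purely illustrative; real transformation engines tag at the platform level rather than per function:

```python
def tagged(step_name):
    """Decorator: run the transformation, then append its name to the
    record's lineage trail."""
    def wrap(fn):
        def inner(record):
            out = fn(record)
            out["lineage"] = record.get("lineage", []) + [step_name]
            return out
        return inner
    return wrap

@tagged("extract_orders")
def extract(record):
    return dict(record)

@tagged("normalize_amount")
def normalize(record):
    return {**record, "amount": round(record["amount"], 2)}

result = normalize(extract({"amount": 19.994}))
print(result["lineage"])   # ['extract_orders', 'normalize_amount']
```

Reading the tags from start to finish reproduces the lineage presentation the technique describes.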
4. Self-contained lineage
This technique can inherently present lineage and does not require any external tools. There are certain data environments that comprise data lakes. The data lake can store all kinds of data in all stages of its lifecycle.
Data lineage best practices
Let’s have a quick look at some of the best practices that you should follow while implementing data lineage for your business:
1. Leverage automation for data lineage extraction
Avoid capturing lineage manually in static tools. Use intelligent data lineage tools to automate processes. We recommend using Informatica’s Enterprise Data Catalog to gain insights into single-step transformations. Thanks to its AI and ML capabilities, Informatica EDC can automatically stitch lineage together from various enterprise sources.
Our tool is built using Apache Spline, Python, and Informatica EDC; it extracts data lineage from Spark job executions and visualizes it in Informatica EDC.
Here’s a graphical representation to show how LumenData’s tool helps you extract data lineage:
2. Add the metadata source to the data lineage
Relational database management systems, ETL software, custom applications, and BI tools tend to create their data. This metadata should be included in your lineage as it helps in comprehending the data flow.
3. Choose progressive extraction
Make sure to extract data lineage and metadata in the same way as data flows through your system. Progressive extraction makes it easier to map out the connections and dependencies among systems.
4. Ensure to validate end-to-end data lineage
Make sure to validate data lineage progressively. Start from high-level connections between systems. Next, move to connected data sets followed by data elements. Lastly, validate the transformation documentation.
Enable data processing optimizations with data lineage
We hope our blog helped you understand the critical concepts surrounding data lineage. With AI-powered data lineage capabilities, it becomes easier to comprehend beyond data flow relationships. Data lineage helps you avoid errors in business intelligence reports and data sources.
Get in touch with our team to learn more about using data lineage tools effectively.
Senior Data Scientist
Four common API vulnerabilities and how to prevent them
Proper security measures are one of the most important aspects of building an application programming interface, or API. It’s great for an API to connect systems and give developers access to the data and functions they need to create new apps and digital experiences, but only if those connections and that access are protected.
For the API provider, this requires a balance. One of the main purposes of an API is to help developers get things done—and no one wants to work with a locked-down tool whose security mechanisms get in the way of productivity. An API is worthless if developers aren’t consuming it, so ease-of-use is important.
This means API providers should generally avoid the kind of complex systems dependencies and heavy-handed governance models that typified previous generations of IT strategy—but they also need to understand today’s threats and provide strong protections that don’t get in the user’s way. Here, based on our observations working with Fortune 500 companies, are four security cautions that may help API teams strike this balance.
APIs without authentication
APIs are the keys to an organization’s databases, so it’s essential to control who has access to them. Industry-standard authentication and authorization mechanisms such as OAuth/OpenID Connect, in conjunction with Transport Layer Security (TLS), are crucial.
Injections have emerged as one of bad actors’ favorite attack vectors. This threat comes in many forms, but the most typical are SQL, RegEx, and XML injections. APIs should be designed with an awareness of these threats and efforts made to avoid them—and active monitoring should be employed after APIs are deployed to confirm no vulnerabilities have made it to production code.
As concerns around security escalate, encryption of data needs to be a top priority for enterprises. Ideally, sensitive information is encrypted from the point where data is captured through transit, all the way to where data is consumed. As the layer that passes valuable data between backend systems of record and frontend systems of engagement, APIs play an important role in this process. To go beyond the basic encryption and protections provided by TLS and authentication mechanisms, API providers should employ trace tools for debugging issues, implement data masking for trace/logging, and leverage tokenization for PCI & PII data.
When APIs are open to the public, they face the challenge of determining if incoming requests should be trusted. Is the request a customer? Or is it an attacker?
In some cases, even if the API detects and successfully denies an untrusted request, the API may nevertheless allow the potentially malicious user to try again—and again and again and again. This kind of security oversight may allow attackers to attempt to play back or replay a legitimate user request until they are successful. Countermeasures against these brute force attacks include rate-limiting policies to throttle requests, HMAC authentication, two-factor authentication, or a short-lived access token facilitated by OAuth.
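The throttling idea can be shown with a fixed-window, in-memory rate limiter. This is illustrative only; production systems typically keep counters in a shared store such as Redis and pair throttling with the authentication measures above:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60
LIMIT = 5                      # at most 5 attempts per window per client

_hits = defaultdict(list)

def allow(client_id, now=None):
    now = time.monotonic() if now is None else now
    # Keep only timestamps still inside the window, then count them.
    recent = [t for t in _hits[client_id] if now - t < WINDOW_SECONDS]
    _hits[client_id] = recent
    if len(recent) >= LIMIT:
        return False           # throttle: brute-force retries get denied
    recent.append(now)
    return True

print([allow("203.0.113.9", now=i) for i in range(7)])
# [True, True, True, True, True, False, False]
```

Even this crude version turns an unlimited replay attempt into at most a handful of tries per minute per client.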
Data in URI
Implementing API keys for authentication and authorization is often sufficient. However, keys may be compromised if they are transmitted as part of the Uniform Resource Identifier (URI). Sensitive data, including API keys and passwords, may become accessible to attackers when URI details appear in browser or system logs. A best practice is to send API keys as a message authorization header, since doing so avoids logging by network elements. Use the HTTP POST method with payloads carrying sensitive information.
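With Python's stdlib urllib, for instance, the key rides in an Authorization header and the sensitive payload in a POST body, keeping both out of the request line that servers and proxies log. The endpoint and key below are placeholders:

```python
import urllib.request

req = urllib.request.Request(
    "https://api.example.com/v1/orders",           # no key in the URI
    data=b'{"query": "sensitive terms"}',          # POST body, not query string
    headers={
        "Authorization": "Bearer API_KEY_PLACEHOLDER",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)   # not executed in this sketch
print(req.get_method(), req.full_url)
```

The same shape applies in any HTTP client: secrets belong in headers, sensitive parameters in the body.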
Accelerate business without compromising security
Potential threats can often be avoided by thinking critically about API design and establishing policies that can be applied across the business. Even though an agile, API-first approach often involves decentralizing IT management and giving individual teams greater autonomy to leverage valuable resources, security can never move to the background. Though some of the issues in this article may seem straightforward, we’ve encountered them more often than you might anticipate. Following these best practices can help you avoid breaches and help your business maximize the leverage provided by its APIs.
Contributing author: Vidheer Gadikota, Technical Solutions Consultant at Google. | <urn:uuid:897c7c08-c7ab-4530-ac9e-5fb7c4440cb0> | CC-MAIN-2024-38 | https://www.helpnetsecurity.com/2018/07/03/common-api-vulnerabilities/ | 2024-09-08T21:50:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00387.warc.gz | en | 0.930406 | 848 | 2.6875 | 3 |
The interpreted language Python is a lot of fun. It’s great for quick and dirty lash-ups, and has list comprehensions whilst being easier to use than Haskell. There are many great reasons why you would never deploy it in a production environment, but that’s not what this article is about.
In the UK, the government decided that schoolchildren needed to learn to code; and Python was picked as the language of choice.
Superficially it looks okay: a block-structured BASIC, relatively easy to learn. However, the closer I look, the worse it gets. We would be far better off with Dartmouth BASIC.
To fundamentally understand programming, you need to fundamentally understand how computers work: the von Neumann architecture at the very least. Sure, you can teach CPU operation separately, but if it’s detached from your understanding of software it won’t make much sense.
I could argue that students should learn machine code (or assembler), but these days it’s only necessary to understand the principle, and a high level language like BASIC isn’t that dissimilar.
If you’re unfamiliar with BASIC, programs are made up of numbered lines, executed in order unless a GOTO is encountered. It also incorporates GOSUB/RETURN (equivalent to JSR/RTS), numeric and string variables, arrays, I/O and very little else. Just the basic building blocks (no pun intended).
Because of this it’s very quick to learn: about a dozen keywords, familiar infix expression evaluation, and straightforward IF..THEN comparisons. There are also a few mathematical and string functions, but everything else must be implemented by hand.
And these limitations are important. How is a student going to learn how to sort an array if a language has a built-in list processing library that does it all for you?
But that’s the case for using BASIC. Python appears at first glance to be a modernised BASIC, although it’s block structured instead of having numbered lines. That’s a disadvantage for understanding how a program is stored in sequential memory locations, but then structured languages are easier to read.
But from there on, it gets worse.
Data types are fundamental to computing. Everything is digitised and represented as an appropriate series of bits. You really need to understand this. However, for simplicity, everything in python is treated as an object, and as a result the underlying representation is completely hidden. Even the concept of a type is lost, variables are self-declaring and morph to whatever type is needed to store what’s assigned to them.
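A few lines of Python show the morphing described above, and how far the object layer sits from the underlying bits:

```python
x = 65
print(type(x).__name__)  # int
x = "sixty-five"
print(type(x).__name__)  # str: same name, new object, no declared type
x = [65, "A", 65.0]
print(type(x).__name__)  # list
# The bit-level story stays hidden: 65 is 0b1000001, the same byte (0x41)
# that encodes 'A' in ASCII, but nothing in the syntax surfaces that.
print(bin(65), chr(65))  # 0b1000001 A
```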
Okay, you can do some cool stuff with objects. But you won’t learn about data representation if that’s all you’ve got, and this is about teaching, right? And worse, when you move on to a language for grown-ups, you’ll be in for a culture shock.
A teaching language must have data types, preferably hard.
The next fundamental concept is data arrays; adding an index to a base to select an element. Python doesn’t have arrays. It does have some great built in container classes (aka Collections): Lists, Tuples, Sets and Dictionaries. They’re very flexible, with a rich syntax, and can be used to solve most problems. Python even implements list comprehensions. But there’s no simple array.
Having no arrays means you have to learn about the specific characteristics of all the collections, rather than simple indexing. It also means you won’t really learn simple indexing. Are we learning Python, or fundamental programming principles?
Unlike BASIC, Python is block structured. Highly structured. This isn’t a bad thing; structuring makes programs a lot easier to read even if it’s less representative of the underlying architecture. That said, I’ve found that teaching an unstructured language is the best way to get students to appreciate structuring when it’s added later.
Unfortunately, Python’s structuring syntax is horrible. It dispenses with BEGIN and END, relying on the level of indent. Python aficionados will tell you this forces programmers to indent blocks. As a teacher, I can force pupils to indent blocks many other ways. The down-side is that a space becomes significant, which is ridiculous when you can’t see whether it’s there or not. If you insert a blank line for readability, you’d better make sure it actually contains the right number of spaces to keep it in the right block.
WHILE loops are supported, as is FOR iteration, with BREAK and CONTINUE. But that’s about it. There’s no DO…WHILE, SWITCH or GOTO.
You can always work around these omissions:
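For example, the missing DO…WHILE is usually faked with an infinite loop and a post-condition break:

```python
# Summing values until a zero sentinel appears, DO...WHILE style: the body
# must run at least once before the condition is tested.
values = iter([3, 1, 4, 0, 9])
total = 0
while True:
    n = next(values)
    total += n
    if n == 0:   # post-condition test, where DO...WHILE would put it
        break
print(total)  # → 8
```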
You can also fake up a switch statement using IF…ELSEIF…ELSEIF…ELSE. Really? Apart from this being ugly and hard to read, students are going to find a full range of control statements in any other structured language they move on to.
In case you’re still simmering about citing GOTO; yes it is important. That’s what CPUs do. Occasionally you’ll need it, or at least see it. And therefore a teaching language must support it if you’re going to teach it.
And finally, we come on to the big one: Object Orientation. Students will need to learn about this, eventually. And Python supports it, so you can follow on without changing language, right? Wrong!
Initially I assumed Python supported classes similar to C++, but obviously didn’t go the whole way. Having very little need to teach advanced Python, I only recently discovered what a mistake this was. Yes, there is a Python “class”, with inheritance. Multiple inheritance, in fact. Unfortunately Python’s idea of a class is very superficial.
The first complete confusion you’ll encounter involves class attributes. As variables are auto-creating, there is no way of listing attributes at the start of the class. You can list them in the constructor, but it’s messy. If you do declare any variables outside a method, it silently turns them into global variables in the class’s namespace. If you want a data structure, using a class without methods can be done, but is messy.
Secondly, it turns out that every member of a class is public. You can’t teach the very important concept of data hiding: how you can change the way a class works but keep the interface the same by using accessors. There’s a convention, enforced in later versions, whereby prefixing a class member with one or more underscores makes it protected or private, but it’s confusing. And sooner or later you discover it’s not even true, as many language limitations are overcome by using type introspection, and this drives a coach and horses through any idea of private data.
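A short demonstration of the point: the underscore conventions are advisory, and introspection exposes everything anyway.

```python
class Account:
    def __init__(self):
        self._hint = "protected by convention only"
        self.__pin = "1234"  # name-mangled to _Account__pin, not truly private

acct = Account()
print(acct._hint)          # nothing stops this access
print(acct._Account__pin)  # the mangling is trivially bypassed
print(sorted(vars(acct)))  # introspection lists every "private" member
```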
And talking of interfaces, what about pure virtual functions? Nope. Well, there is a way of doing it using an external module. Several, in fact. They’re messy, involving an abstract base class. And, in my opinion, they’re pointless; which leads us to the root cause of why Python is a bad teaching language.
All Round Disaster
Object oriented languages really need to be compiled, or at least parsed and checked. Python is interpreted, and in such a way as it can’t possibly be compiled or sanity checked before running. Take a look at the eval() function and you’ll see why.
Everything is resolved at run-time, and if there’s a problem the program crashes out at that point. Run-time resolution is a lot of fun, but it contradicts object orientation principles. Things like pure virtual functions need to be checked at compile time, and generate an error if they’re not implemented in a derived class. That’s their whole point.
Underneath, Python is using objects that are designed for dynamic use and abuse. Anything goes. Self-modifying code. Anything. Order and discipline are not required.
So we’re teaching the next generation to program using a language with a wide and redundant syntax and grammar; incomplete in terms of structure; inadequate in terms of object orientation; opaque in its data representation and typing; and ultimately designed for anarchic development.
Unfortunately most Computer Science teachers are not software engineers, and Python is relatively simple for them to get started with when teaching. The problem is that they never graduate, and when you end up dealing with important concepts such as object orientation, the kludges needed to approximate such constructs in Python are a major distraction from the underlying concepts you’re trying to teach. | <urn:uuid:47f62b25-d6b6-4282-802b-bb1777c198da> | CC-MAIN-2024-38 | https://blog.frankleonhardt.com/tag/basic/ | 2024-09-13T18:15:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00887.warc.gz | en | 0.934879 | 1,850 | 2.9375 | 3 |
Sign up for the next DPS Factory Training!
Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring.
Reserve Your Seat Today

"SCADA Guide" is another way of saying "SCADA Tutorial".
Have you been tasked with purchasing, upgrading, or maintaining a SCADA system? Are you expected to make decisions about Intelligent Electronic Devices (IEDs) or Remote Telemetry Units (RTUs)? Do people come to you with questions about SCADA System Masters or mediating SCADA Telemetry to SNMP Masters? Do you fully understand the role of a SCADA system? These are important questions, and even if you're slightly familiar with the operation of a SCADA system, you could be overlooking some key aspects. To fully understand the benefits of a quality SCADA system, you need a high-quality SCADA guide, written by experts, to help you.
For instance, do you know how SCADA Telemetry fits into a Remote Monitoring and Control System? Do you know how to setup network alarm management so that all of your alarms can be viewed in a single place from a single screen? This kind of instantly useful information forms the core of a valuable SCADA guide.
SCADA stands for Supervisory Control and Data Acquisition. It's not a specific type of technology, but instead describes an application. Any system that collects data about the system to control that system is considered a SCADA application. This type of application has two elements; the process or device you want to control and the devices that control the system. A variety of devices and protocols can be used to form a SCADA system, so that you can get the exact functionality that you need.
SCADA is used in a wide variety of industries to control all manner of processes. For instance, SCADA systems are used to automate water and sewage processes, electric power generation, manufacturing processes, traffic signals, mass transit and more. No matter what your industry is, there is almost always a way to implement a SCADA system in order to automate a process.
Not part of one of the above industries? Fear not! SCADA can be used in many ways. As long as you have a process that needs to be controlled and can be automated, you have a potential use for SCADA. If you're unsure of how SCADA can fit into your business, consider these benefits of a SCADA system:
Still think SCADA isn't for you or your business? Use it to automate tedious manufacturing processes. Use it to switch from one solar panel cell to another when you need power. SCADA has more uses than you can imagine.
To learn more about SCADA, you need a level of information that can only come from engineers with years of experience, not some anonymous tech forum member. If you're trusting the success of your SCADA implementation to the SCADA guide you're reading, it had better contain the best available information.
But, be forewarned! There are a great deal of SCADA guides online that have less than adequate information, written as fact, by someone unknown. Do not rely on anonymous forum posts or questionable websites for vital information. You need correct and relevant SCADA info in a quick format. Only industry experts can provide you with the tools you need to do your job better. When looking for a guide, check to make sure it is written by a reputable person or company with years of experience in SCADA. It is essential that you find a reliable guide with information written by someone with years of experience. An accurate, detailed manual could make all the difference to your company's future (and even your personal career future).
Every page of a SCADA tutorial should have info collected from years of experience. If you're relying on a few sentences from a tech forum, how successful can you really expect to be? You need a SCADA tutorial that will introduce you the basic concepts of SCADA systems, important definitions, and detailed application drawings. Also, it should tell you how a SCADA system could work to increase your company's profit, and troubleshooting techniques that can be applied to common SCADA problems. The processes in a detailed SCADA tutorial will limit human error, lost revenue, and increase the uptime of your operations. Your company will benefit from the info you gain after downloading and reading an intro guide. Knowledge is the key to successful SCADA implementation and management. DO NOT doom yourself to failure by relying on "tips" from those who have never been successful with SCADA.
DPS Telecom engineers have been developing perfect-fit solutions for their clients for over 20 years. In that time, they've accumulated a world-class monitoring knowledge base. Now, part of that knowledge is available to you as an introductory SCADA guide. You'll learn the basics of SCADA, keys to a successful implementation, and costly pitfalls to watch out for.
To see additional information related to a "SCADA Guide", please visit the SCADA Tutorials page. | <urn:uuid:d803eb1a-f0e7-4baf-b4f6-667b15e45aa3> | CC-MAIN-2024-38 | https://www.dpstele.com/scada/guide.php | 2024-09-17T12:19:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00587.warc.gz | en | 0.943413 | 1,028 | 2.59375 | 3 |
Cybersecurity threats grow relentlessly and more sophisticated each day. As such, creating strong security mechanisms is essential for any organization. Deploying an active defense strategy empowers businesses to detect and actively engage with these threats. Implementing its practices daily will enhance your overall security and help you build a more proactive cybersecurity posture.
Active defense is a set of cybersecurity measures and strategies brands use to detect, counter and mitigate threats. Unlike passive defense — which focuses on fortifying systems and waiting for attacks to occur — active defense involves continuous monitoring. It’s an approach that engages with the threat environment to anticipate attacks and respond as they happen.
It essentially allows an enterprise to “fight back” against attacks in a controlled and measured way. The strategies include various methods, from setting up decoys to conducting offensive countermeasures. Actively seeking out and neutralizing threats reduces the risk of damage. It also ensures you maintain control over the security landscape.
An active defense strategy can be the key to keeping threats at bay, but it’s essential to deploy certain techniques to make it effective.
Businesses often find themselves surrounded by thousands of critical notifications every day. Research shows companies receive an average of 16,937 weekly alerts, yet they only investigate 4% of them. This overwhelming volume can lead to alert fatigue and may cause them to overlook critical warnings. In fact, a 2021 study found 51% of internet security leaders felt their teams were overwhelmed by the alert volumes received.
In an active defense strategy, it helps to set up high-priority alerts by determining where to focus monitoring efforts. Define what constitutes a critical situation based on impact and likelihood. For example, you could establish alerts based on certain triggers like access attempts to sensitive data or anomalies in inbound traffic. Use security tools incorporating AI and machine learning to detect and prioritize notifications better.
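As a toy sketch of trigger-based prioritization: the rule weights and threshold below are invented for illustration, and a real deployment would derive them from impact and likelihood assessments.

```python
# Hypothetical triggers and weights — tune these to your own environment.
RULES = [
    ("sensitive_data_access", 90),
    ("inbound_traffic_anomaly", 70),
    ("failed_login_burst", 60),
]

def prioritise(alerts, threshold=65):
    """Return only the alerts worth investigating now, highest score first."""
    weights = dict(RULES)
    scored = [(weights.get(a["trigger"], 10), a) for a in alerts]
    return [a for score, a in sorted(scored, key=lambda s: -s[0])
            if score >= threshold]

alerts = [
    {"id": 1, "trigger": "failed_login_burst"},
    {"id": 2, "trigger": "sensitive_data_access"},
    {"id": 3, "trigger": "printer_offline"},
]
print([a["id"] for a in prioritise(alerts)])  # → [2]
```

Filtering 16,937 weekly alerts down to the handful above a threshold is exactly the fatigue-reduction the statistics above argue for.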
Intrusion detection systems (IDS) and security information and event management (SIEM) are critical to an active defense strategy. Each plays a unique role in keeping IT environments safe.
Companies use IDS to monitor networks and systems for malicious activities. An IDS functions by analyzing traffic and identifying suspicious patterns. On the other hand, SIEM provides a more holistic view of information security. These systems work by aggregating and analyzing log data to identify anomalies, track security events and issue alerts.
Using IDS and SIEM together is beneficial because they complement one another. While an IDS is excellent at detecting anomalies, it operates in isolation and generates alerts that lack context. SIEM helps you understand these alerts by correlating them with other logged events. As such, you gain a clearer picture of what’s happening, reducing the likelihood of false positives.
Threat hunting is a proactive security practice where teams actively search for signs of malicious activity. Traditional measures rely on automated alerts, but threat hunting involves human-driven initiatives aimed at discovering stealthy threats.
Threat hunting typically begins with a hypothesis based on observed anomalies and recent cyber threat reports. This way, the threat hunters know what behaviors and patterns to look for. Equip your team with analytics tools, endpoint detection and response systems for detailed data collection.
During the hunt, gather data from the network and systems to look for discrepancies that match the threat hypothesis. This involves analyzing activities such as irregular network traffic or anomalies in account activities. Threat hunting is an iterative process, but each deep dive can provide new insights.
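A hunt of this kind can start very simply. The sketch below tests an invented hypothesis (a service account should only authenticate during a batch window) against a handful of mock events:

```python
from datetime import datetime

# Hypothesis: this (invented) service account only authenticates during the
# 08:00-18:00 batch window, so any login outside it deserves a closer look.
events = [
    {"user": "svc_backup", "ts": "2024-05-01T09:14:00"},
    {"user": "svc_backup", "ts": "2024-05-01T03:27:00"},  # anomaly
    {"user": "svc_backup", "ts": "2024-05-02T17:55:00"},
]

def outside_window(event, start=8, end=18):
    hour = datetime.fromisoformat(event["ts"]).hour
    return not (start <= hour < end)

suspicious = [e for e in events if outside_window(e)]
print([e["ts"] for e in suspicious])  # → ['2024-05-01T03:27:00']
```

Real hunts run the same loop (hypothesis, query, review) over EDR and SIEM data rather than a hard-coded list.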
Deception technology leverages decoys and false information to mislead attackers and protect sensitive data. This strategy involves setting up systems that appear vulnerable and attractive to attackers, including honeypots, honeynets and decoy databases. When attackers interact with these, they unknowingly reveal their methods, providing intelligence and notifications to security teams.
Deception technology requires careful planning and precise implementation to ensure the decoys are indistinguishable from real systems. That way, it diverts attackers from your real assets and teaches you ways to improve security measures.
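To make the idea concrete, here is a deliberately tiny decoy listener in Python. It serves a fake banner and records whoever connects; commercial deception platforms are far more elaborate, and the hostname in the banner is invented bait.

```python
import socket
import threading

def decoy_listener(host="127.0.0.1", port=0):
    """Toy decoy: accepts one connection, records the source address, and
    serves a bait banner. `port=0` asks the OS for any free port."""
    log = []
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)

    def _serve():
        conn, addr = srv.accept()
        log.append(addr[0])  # any touch at all is worth an alert
        conn.sendall(b"220 backup-db-03 FTP ready\r\n")  # invented bait banner
        conn.close()
        srv.close()

    threading.Thread(target=_serve, daemon=True).start()
    return srv.getsockname()[1], log

port, log = decoy_listener()
with socket.create_connection(("127.0.0.1", port)) as c:
    banner = c.recv(64)
print(banner, log)  # the "attacker's" address is now on record
```

Because no legitimate traffic should ever reach the decoy, every entry in its log is a high-signal alert.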
Each method is a proactive way to detect attacks early. They also assist in preparing much stronger defenses against future threats. Consider it as a feedback tool as you continue engaging with it to help you refine your defenses as a result.
Penetration testing and red teaming involve simulating cyber-attacks to test security systems and protocols. The former focuses on identifying vulnerabilities, often using automated tools to scan for weaknesses. Red teaming takes this further by simulating a full-scale attack using the same tactics as real-world attackers.
Both methods help you understand how your defenses would stand up under pressure. When conducting penetration testing, you can implement it frequently on more specific systems. Meanwhile, red team exercises should be less frequent and more comprehensive.
Utilizing both activities is critical to testing the technical barriers and the human elements of security. Routine challenges test your security measures to discover gaps in defenses and improve them. Regular reviews and updates following these trials guarantee positive changes in defense strategies, helping you maintain strong protective measures against attackers.
Deploying active defense is crucial for staying ahead in today’s ever-evolving threat landscape. Its strategies help you identify and mitigate threats before they become a danger to an organization. As such, regularly incorporating them into your cybersecurity tactics will help you remain robust and adaptive. | <urn:uuid:0c737f34-70c8-4bbc-a991-ee7a28e5a13a> | CC-MAIN-2024-38 | https://cyberexperts.com/how-to-deploy-an-active-defense-strategy/ | 2024-09-07T21:44:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00587.warc.gz | en | 0.941119 | 1,071 | 2.546875 | 3 |
Personal data is not only a personal asset that needs to be kept private for our protection from abuse but also a valuable commodity and potential source of income.
In the pre-digital economy, we used our physical or intellectual abilities to generate income, which we would in turn spend on life’s necessities, goods and services. In today’s digital economy, the data we produce has monetary value, very much like our physical or intellectual labor. It contains information that helps businesses make decisions and generate additional revenues. While there are many discussions around data mining and data privacy, there hasn’t been as much focus on generating revenues from personal data for the owners of the data. A handful of very large companies have been mining and reselling consumer and enterprise data without sharing the revenues with the producers of this data. Most people and even enterprises either do not recognize the value of their data or don’t have a simple way of monetizing their data.
Take for instance the case of the white knights of the ride share sector, like Uber and Lyft, that provide convenient ride discovery with “cheaper” fares and at the same time create new jobs. For these companies, the main source of revenue is a percentage of the ride fare. But they also collect and mine valuable data on drivers, passengers, the cars operated by drivers, traffic and driving patterns, frequent and popular destinations, tourism patterns, consumer and provider behavior, and much other valuable data. These data are continuously collected and processed to gain valuable knowledge. The knowledge extracted from the data not only can generate additional revenues from new sources but can also impact the future of businesses by identifying trends and disruption opportunities through data mining. For instance, the ride services data can be used to help roll out a network of autonomous vehicles that would eventually disintermediate drivers’ roles and make them redundant.
The larger the network for collecting this data, the larger is the power and control on monetization and less opportunity to compete by smaller entities. In effect, consumers and smaller enterprises are the data producers that fuel the growth of these businesses and not only do they not receive any benefits, but their data can be used to manipulate their decisions or used to minimize their participation and share in the economy. Our data is invaluable for the growth and expansion of these companies.
We’re in effect investing our data into these companies which they monetize but don’t provide us any financial returns on our investment.
However, you cannot blame these companies for mining the data that we voluntarily provide for free. Today, we don’t have a simple way of monetizing our personal data and have become reliant on the services provided by them. Ideally, we should get a return from the value of our data similar to getting equity in return for cash investment in these companies.
We must realize that most of our data has perpetual value; one could even claim that some of our data becomes more valuable over time contributing to a valuable data ocean that can be processed to derive economic value. We’re all walking data mines with never ending and perpetual resources even after our physical deaths. Today, our data is being mined to create and grow highly profitable giant companies with little return on our investments that are fueling their success.
As my friend Gina Cody once said; this sounds like the unfortunate life of Henrietta Lacks (https://en.wikipedia.org/wiki/Henrietta_Lacks) whose cells were taken without her knowledge in 1951. Her cells and the knowledge derived from them became one of the most important tools in medicine, and vital for developing the polio vaccine, cloning, gene mapping, and other important commercial endeavors in the health sector. Henrietta’s cells have been bought and sold by the billions, yet she remains virtually unknown, and her family can’t even afford health insurance. Our personal data is a valuable resource that is stolen from us like our labor was stolen from us during the slavery and in the early days of industrial revolution. The difference is the way this is done, not through forced labor camps, nor unorganized labor force with no collective power, but through google searches, and Facebook likes, and Instagram posts.
What we receive in return for our valuable data is junk advertisement that manipulates our behavior as consumers, influencing our buying decisions, and even our social and political choices. Those with power to mine our data can control our lives and destiny putting many of us in digital ghettos!
The digital economy must compensate the producers and owners of data. It is obvious that this is not going to happen voluntarily as long as we are willing to provide the data for free. Some governments have established data privacy provisions to protect us from serious abuse and manipulation, but that does very little to provide us a fair share of the economic value of our data.
To address this issue, we must first realize the value of our personal data, and then we need a technology platform that could help us monetize our data with little cost or overhead.
As a first step, we need to remove all unnecessary middle-men in the consumer and enterprise data value chain. Then, we need a scalable technology platform and the business logic to allow consumers and enterprises to seamlessly provide the data in return for monetary value.
One key technology to enable this vision is edge cloud which can enable the consumer to better control and manage their personal data with minimal reliance on central entities. Edge cloud can help build new concepts such as personal data wallets where data can be exchanged real-time in return for economic value.
Edge cloud can provide an efficient, scalable, and interoperable platform for real-time exchange of consumer and enterprise data. The business logic can be built to empower the consumer to get fair share of value in the trade of this valuable commodity. The adoption of edge cloud can help return a significant value to consumers and at the same time create a new ecosystem of new companies for data brokerage. | <urn:uuid:4b719316-da1e-4cd1-9485-376081df798a> | CC-MAIN-2024-38 | https://stg-3x.mimik.com/customer-analytics-or-data-robbe/ | 2024-09-07T20:00:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00587.warc.gz | en | 0.945531 | 1,205 | 2.828125 | 3 |
Certificate in Digital Forensics Fundamentals
Provided by QA
The Certificate in Digital Forensics Fundamentals course (QAIDIGFOR) is designed to help commercial and government organisations collect, preserve and report on digital artefacts in a way which is suitable for use in investigations.
The course covers the broad topics essential to the digital forensics discipline. It sets out a framework for investigations, covering best practice as described by the National Police Chiefs' Council (NPCC), formerly the ACPO guidelines. Forensic fundamentals will be covered, as well as the use of open source forensic tools. The data will then be analysed, and an example report produced.
Participants in this course learn about the methods used to identify, preserve, analyse and report on digital artefacts. Using a mixed approach of fundamentals and open source software, delegates will be able to select suitable tools and report on their findings in an evidential way.
The Certificate in Digital Forensic Fundamentals course audience includes all teams across the IT, Security, Internal Audit, Law Enforcement and Government.
- Understand the purpose, benefits, and key terms of digital forensics
- Describe and adhere to the principles of the forensic framework
- Understand the importance of the chain of custody
- Demonstrate a basic knowledge of key locations in different operating systems
- Identify how different file systems represent files and how they deal with deletion etc.
- Understand where timestamps and other meta data comes from
- Have knowledge of the legal framework in which they operate, and the expected level of ethical behaviour expected
Module 1: Introduction to Digital Forensics
- What digital forensics is
- What is digital evidence?
- When and why is digital forensics used?
- Different Types of Digital Forensics - Standalone and e-discovery
- What skills should a computer forensic expert have?
- Introduction to the forensic framework
- What legislation applies to investigations?
- ISO/IEC standards - what do they cover?
- What does the legislation cover?
- What do authorising officers have to consider
- What does the legislation mean for investigators?
- The consequence of failing to adhere to the legislation which applies
- Computer Misuse Act and how it applies
- The NPCC guidelines and how they apply to the collection of digital evidence
- The role of a First Responder
- Triaging - the new digital forensics approach
- What the "chain of custody" concept is and how critical it is to maintain
- Triaging - Digital Forensics
- What is the order of volatility
- What imaging is and why we work on imaged data
- Write blocking hardware and software
- How do we forensically image a live device?
- How do we forensically image a switched off device?
- Physical and Logical Imaging
- Understand Hashing Algorithms and collisions and how it is used to verify acquisitions
- Creating Forensic Image using FTK Imager
- Why do we need to know about hardware?
- Live RAM capture and analysis (pagefile.sys and hiberfil.sys)
- Data storage - magnetic hard disks
- Understand how solid state drives and flash memory differ
- What is the BIOS and UEFI and what settings they hold
- Analysing the boot process
- Partitioning Disk analysis
- Volume and Master Boot Record
- How number systems work and how data is represented in binary and hexadecimal
- Difference between Big and Little Endian
- Character Encoding ASCII and Unicode
- Different File systems NTFS, FAT
- Analysis what happens when file is saved, deleted
- What is Slack Space and the different types of slack
- Access control lists and permissions
- What is the Master File Table used for?
- Recovering Data from Recycle bin
- Viewing Deleted data
- Analysis of Prefetch folder
- Differences between user profiles
- File Signatures Analysis
- Manual File carving
- File Carving Using Kali Linux
- What is Metadata?
- Understand about MAC times
- How to find meta-data inside documents
- How to use Fingerprinting Organizations with Collected Archives (FOCA) to extract metadata
- EXIF Data and analysis
- Windows User Profile
- Identifying different Windows Artefacts and what information can be found
- Analysing Thumbnail Cache
- Viewing the Windows Registry and locating information
- Analysing Email Headers
- Forensic Analysis of HTTP data using Wireshark
- Analysis of web browser artefacts
- Understanding the different type of logs and what information they can provide as part of forensic analysis
- Analysing thumbnail cache databases
- How to analyse the windows registry and find evidence
- How to analyse email headers
- Mobile Forensics Require a Different Approach
- What information a mobile device can provide
- Different methods for conducting mobile device examinations
- Mobile phone evidential values
- The difference between notes, examination logs and witness statements
- The issue with printing evidence and court requirements
- Commercial Forensic Tools
- Open Source Forensic Tools
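One topic from the list above, using hashing algorithms to verify acquisitions, comes down to comparing cryptographic digests of the evidence and the working image. A minimal Python sketch (the chunked-read helper and function names are illustrative, not taken from any specific forensic tool):

```python
import hashlib

def hash_file(path: str, algorithm: str = "sha256", chunk_size: int = 1 << 20) -> str:
    """Hash a file in fixed-size chunks so large disk images do not exhaust memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_acquisition(image_path: str, recorded_hash: str, algorithm: str = "sha256") -> bool:
    """Compare a freshly computed digest against the hash recorded at acquisition time."""
    return hash_file(image_path, algorithm) == recorded_hash.lower()
```

If the digests match, the working copy is bit-for-bit identical to what was acquired; flipping even a single bit produces a completely different digest.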
Duration: 90 minutes. Questions: 70 multiple choice (four answer options, only one of which is correct). Pass mark: 50%.
The exam is a Proctor-U APMG exam for the Certificate in Digital Forensics Fundamentals, which will be taken by delegates in their own time after the course.
Delegates will receive individual emails to access their APMG candidate portal, typically available two weeks after the exam.
If you experience any issues, please contact the APMG technical help desk on 01494 4520450. | <urn:uuid:80e09e3b-613c-4c5f-9c6c-e1b38bed213a> | CC-MAIN-2024-38 | https://www.cybersecuritytrainingcourses.com/course-details/57471/certificate-in-digital-forensics-fundamentals/ | 2024-09-09T01:43:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00487.warc.gz | en | 0.872129 | 1,166 | 2.796875 | 3 |
Platform-as-a-Service (PaaS) providers offer a cloud-based platform for developing and deploying apps.
After reading this article you will be able to:
- Define PaaS
- Explore the advantages and disadvantages of PaaS
- Compare PaaS to serverless computing
What is PaaS (Platform-as-a-Service)?
Developers who use the Platform-as-a-Service (PaaS) model rent everything they need to build an app from a cloud provider, including development tools, infrastructure, and operating systems. This is one of the three cloud computing service models. PaaS greatly simplifies web application development because all backend management happens behind the scenes, away from the developer’s view. Although there are some parallels between PaaS and serverless computing, there are many significant distinctions.
What are the three service models of cloud computing?
PaaS (Platform-as-a-Service), SaaS (Software-as-a-Service), and IaaS (Infrastructure-as-a-Service) are the three cloud computing paradigms. IaaS refers to a cloud vendor's management of cloud computing infrastructure, such as servers and storage, whereas SaaS refers to complete applications hosted in the cloud and maintained by the SaaS vendor. If a SaaS customer is like someone renting a home, a PaaS customer is like someone renting all of the heavy equipment and power tools needed to quickly build a house, with the tools and equipment maintained and repaired on a regular basis by their owner.
How does PaaS compare to development environments that are hosted internally?
PaaS may be accessed through any internet connection, allowing you to create a full application in your browser. Developers may work on the application from anywhere in the globe because the development environment is not hosted locally. This enables teams who are dispersed around the globe to collaborate. It also implies that developers have less control over the development environment, albeit at a far lower cost.
What is included in PaaS?
The main offerings included by PaaS vendors are:
- Development tools
- Operating systems
- Database management
Other services may be included by different vendors, but these are the basic PaaS features.
PaaS suppliers provide a range of software development tools, such as a source code editor, a debugger, a compiler, and other essential utilities, sometimes bundled together as a framework. The tools available will vary by provider, but a PaaS offering should contain everything a developer needs to build their application.
Middleware is generally included in platform-as-a-service offerings so that developers don’t have to create it themselves. Middleware is software that stands between user-facing programmes and the operating system of the machine; for example, middleware allows software to get input from the keyboard and mouse. Although middleware is required to execute an application, end users do not interact with it.
The operating system on which developers work and the application runs will be provided and maintained by a PaaS vendor.
Databases are managed and maintained by PaaS providers, which will almost always supply a database management system to developers as well.
In the cloud computing service paradigm, PaaS is the next tier up from IaaS, and it includes everything that IaaS does. Servers, storage, and physical data centres are either managed by a PaaS provider or purchased from an IaaS provider.
Why do developers use PaaS?
Time to market is reduced.
PaaS allows developers to build applications faster than they could if they had to worry about constructing, configuring, and providing their own platforms and backend infrastructure. All they have to do with PaaS is develop the code and test the application, and the vendor will take care of the rest.
From beginning to end, there is just one environment.
PaaS allows developers to create, test, debug, deploy, host, and update their apps all in the same place. This allows developers to ensure that a web application will work effectively as a hosted application before releasing it, as well as simplifying the application development process.
In many situations, PaaS is more cost-effective than using IaaS. PaaS clients save money since they don’t have to manage or provide virtual machines. Furthermore, some vendors provide a pay-as-you-go pricing model, in which the vendor only charges for the computer resources used by the programme, saving clients money. However, each vendor’s pricing structure varies significantly, and some platform providers charge a monthly fixed cost.
Licensing is simple.
All licencing for operating systems, development tools, and everything else contained in the PaaS platform is handled by the PaaS provider.
What are the potential drawbacks of using PaaS?
Because the application is created using the vendor’s tools and particularly for their platform, switching PaaS providers may be difficult. Each vendor’s architectural requirements may differ. Languages, libraries, APIs, architecture, and operating systems used to create and run the application may not be supported by all suppliers. Switching suppliers may need rebuilding or significantly altering a programme.
Companies may become more reliant on their existing PaaS vendor due to the time and resources required to switch vendors. A little change in the vendor’s internal procedures or infrastructure might have a significant impact on the performance of an application that was meant to function smoothly on the previous setup. Furthermore, if the vendor’s pricing strategy changes, an application’s operating costs may increase dramatically.
Security and compliance challenges
In a PaaS architecture, a third-party vendor stores most or all of an application’s data while also hosting its code. In certain situations, the vendor may store the databases through a third-party source, such as an IaaS provider. Despite the fact that most PaaS suppliers are major corporations with robust security, it is impossible to thoroughly analyse and verify the security measures safeguarding the application and its data. Furthermore, for businesses that must adhere to stringent data security requirements, checking the compliance of extra external vendors would add to the time it takes to get to market.
How is Platform-as-a-Service different from serverless computing?
In both PaaS and serverless computing, all a developer needs to worry about is creating and uploading code, while the vendor handles all backend procedures. When using the two models, however, scaling works very differently. Serverless (FaaS) applications scale automatically, but PaaS applications do not unless they are configured to do so. Serverless apps can be up and running nearly instantaneously, while PaaS applications behave more like conventional programmes and must be running most or all of the time in order to be available to users right away.
Another difference is that, unlike PaaS suppliers, serverless vendors do not supply development tools or frameworks. Finally, pricing distinguishes the two models. PaaS billing is not as granular as serverless computing billing, which breaks charges out by the number of seconds or fractions of a second that each instance of a function runs.
Technology is already influencing education in ways that would have been considered ‘sci-fi’ just a few decades ago (think all-pervasive laptops, networking, face-time communication with those on the other side of the world).
But now, as digital technologies such as virtual and augmented reality (VR and AR), artificial intelligence (AI) and the cloud start to fully mature, the education sector is set for a far greater transformation in ways that no one can fully predict.
This will involve not only the adoption of advanced technologies in the classroom, but new and streamlined ways to teach and study, where technology plays a vital supporting role to teachers both in and out of the classroom, without replacing them entirely.
Some of the pending advances will be more visible than others, and accessibility definitely falls into this category. As the days when students needed to be physically in a classroom to learn anything continue to recede into history, lecturers and tutors can now share their content over the cloud in so-called e-learning modules.
This is making education increasingly accessible to students anywhere, anytime – something that will become far more pronounced going forward and which also promises to level the education playing field.
Additionally, teachers will make greater use of digital tools such as Formative, Google Forms, and Padlet, which not only generate increasingly personalized tasks for students but also constitute an important evolution in the student-teacher paradigm.
A whole new reality
One of the biggest game changers will be in the fields of VR and AR, where students can enjoy increasingly multilayered learning experiences involving more and more sophisticated equipment.
Tech heavyweights Apple, Google, and Samsung have all invested heavily in this technology and are convinced of its breakthrough potential, especially in the education sector. Advanced VR and AR will, in the not-too-distant future, allow students to conduct scientific experiments without the hazard of burning chemicals, or attend a historical battle site or travel deep into a mine to test the strength of a new safety material.
Afterwards, they can share their findings in real time on platforms such as Google Hangouts or Skype using tools like Padlet, Trello and Slatebox.
Similarly, AI and advanced machine learning, along with emerging technologies such as blockchain, will increasingly streamline the way in which certain subjects are taught and how students process information, due to their ability to cut through vast amounts of data for the most pertinent and actionable insights.
In the near future, AI especially will provide links between work, study and performance across most sectors of the economy, giving educators unprecedented amounts of information regarding what precise tools students will need to succeed in their chosen industries.
Technology is already playing a leading role in transforming education, and digital tools will increasingly come to the fore as they are integrated into existing education systems to bolster best teaching practices for hopefully far greater learning outcomes. | <urn:uuid:882ca0ee-133e-44db-a88b-4e2d3375664a> | CC-MAIN-2024-38 | https://www.docutrend.com/blog/see-where-education-technology-is-headed/ | 2024-09-10T08:34:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00387.warc.gz | en | 0.95281 | 585 | 3.3125 | 3 |
ZigBee Impacts the Physical World
Now you have a little background on some of the advantages and disadvantages of ZigBee. Where is it being used, and what is all the security hubbub about? While ZigBee may not be the fastest or offer the greatest distances it still has a tremendous amount of uses.
A noted wireless security expert once said, "No wireless technology has been more integrated or impacts the physical world more than ZigBee." He was absolutely right! ZigBee radios have been integrated into all sorts of sensors and monitoring and control devices; and have found their way into hospitals, industrial facilities, and control systems that our society relies on every day.
In hospitals, ZigBee radios are more frequently found in patient-monitoring systems to provide data collection while allowing uninhibited mobility of the person wearing the device. They are even finding a home "inside" patients as a means for doctors and medical professionals to communicate with patients that have been fitted with an implanted defibrillator or other heart-monitoring device.
The short range and low output power limit the problem of radio interference with other radio devices or medical equipment. Doctors can simply use another ZigBee radio to interact with the patient's device, collect data, or even change the configuration settings of the implanted device.
ZigBee radios are also heavily used in industrial applications. They are integrated into refineries, chemical plants, and water-treatment facilities as sensors or to control processing equipment.
The explanation and business drivers behind why companies are utilizing ZigBee radios are very simple to understand. Running physical wires has a significant labor and material cost, while using a self-contained, battery-operated ZigBee radio limits these costs and provides administrative advantages when troubleshooting issues.
Other organizations are switching to ZigBee radio devices for monitoring and regulating the temperature in buildings, or communicating with controllers to shut down lights when people are not in the room.
Another area where ZigBee devices have become very popular is in our homes. New residential housing is often fitted with ZigBee-enabled water and gas meters. Utility providers use specially configured ZigBee radios as data collectors to collect the meters' transmissions from their utility vehicles. This process greatly increases the efficiency with which utility companies collect meter and billing data.
From smartphones to the rapidly growing electric vehicle market, power-packed batteries fuel modern lifestyles. Yet, this widespread reliance comes with a pressing challenge – the proper management of end-of-life batteries. With the rise in battery-powered devices and renewable energy systems, there’s an urgent need for a sustainable solution that recovers valuable materials, minimizes waste, and supports the global transition to a circular economy. Li-Cycle Corp is an industry leader specializing in the responsible recycling of lithium-ion batteries.
The demand for battery recycling has never been more pronounced, with the proliferation of electric vehicles (EVs), the growth of renewable energy systems, and the omnipresence of lithium-ion batteries in our daily lives.
With an expected annual processing capacity of approximately 81,000 tons of lithium-ion battery materials within its Spoke network by the end of 2023 and an additional 35,000 tons of black mass processing capacity at the Rochester Hub at full operation, Li-Cycle is poised to make a significant impact in the recycling industry. The company’s comprehensive approach covers all chemistries and formats of lithium-ion batteries, boasting an impressive up to 95% recycling efficiency rate. This high efficiency ensures that valuable materials are efficiently returned to the supply chain, significantly reducing the environmental impact.
We recover critical materials from lithium-ion batteries and reintroduce them back into the supply chain, giving end-of-life batteries a new purpose. — Ajay Kochhar, President, and CEO
One of the most striking aspects of Li-Cycle’s operations is the focus on sustainability throughout the entire process. The company’s closed-loop resource recovery technology is a game-changer, enabling a circular economy. Unlike traditional recycling methods, Li-Cycle’s approach minimizes waste production. Notably, there’s no production of landfill waste during the process, setting a new standard for responsible battery recycling.
Safety is paramount in the handling of lithium-ion batteries, especially given their potential risks. Li-Cycle’s Spokes ensure the safe processing of fully charged Li-ion batteries of all chemistry and formats, seamlessly transferring output products into anode and cathode materials. For materials containing IP-sensitive design information, Li-Cycle offers secure destruction, providing a home for the safe disposal of such materials.
The logistics behind battery recycling can be complex. Li-Cycle’s logistics management ensures a stress-free experience, coordinating seamless and efficient shipments through a reliable network of logistics partners. Customers can rest assured that their recycling requests are handled with the highest standard of care, adhering to regulatory standards at regional and international levels.
Li-Cycle’s commitment goes beyond recycling. It offers add-on services tailored to meet specific customer needs. Whether it’s advice on packaging, forward logistics, and spare battery storage, comprehensive battery replacement campaigns, or fully customized programs, Li-Cycle stands ready to support businesses in navigating the complex world of lithium-ion battery recycling.
The market’s growing appetite for clean energy, EVs, consumer electronics, and energy storage systems is driving the need for sustainable battery solutions. Li-Cycle’s approach resonates across various sectors. For instance, in the EV and e-mobility sector, where major automakers are electrifying their fleets, Li-Cycle provides tailored services that prioritize both customer needs and environmental standards. In lithium-ion manufacturing, the company helps manufacturers save costs while recycling scraps sustainably.
In the consumer electronics space, where valuable resources often go to waste, Li-Cycle achieves up to a 95% recycling efficiency rate, making a significant environmental impact. Finally, in the energy storage sector, Li-Cycle’s closed-loop solutions offer a secondary source of battery-grade materials to meet the rising demand for renewable energy storage.
Li-Cycle has carved out a prominent role in redefining the future of battery recycling. The company’s commitment to recovering and reintroducing critical materials back into the supply chain is a testament to its dedication to sustainability. Its innovative approach, high recycling efficiency, and comprehensive services make it an essential player in shaping a cleaner, more sustainable, and circular future. | <urn:uuid:316363aa-b088-41f7-80e2-94da98e8bedf> | CC-MAIN-2024-38 | https://www.ciocoverage.com/li-cycle-lithium-ion-battery-recycling-resource-recovery/ | 2024-09-12T19:48:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00187.warc.gz | en | 0.912175 | 871 | 2.515625 | 3 |
What is Personally Identifiable Information (PII) Compliance?
First of all, we need to know what PII is. PII, or Personally Identifiable Information, is data that can be used to identify, contact, or locate a specific individual, either on its own or in combination with other information. PII compliance thus becomes relevant when organizations hold confidential data, such as medical, financial, educational, or employment records, that is linked to users. Many data elements can be used to identify a particular person, including names, contact addresses and numbers, biometric data, fingerprints, social security numbers, and email addresses. PII therefore has to be stored securely, and the onus lies on the agencies and organizations that hold it to safeguard that data.
What is Need for PII Security Compliance?
Regulations are necessary to keep a check on the personal information that enterprises share with third parties. PII compliance laws provide these measures and requirements, which help corporate entities keep their confidential data secure. Society has long relied on these PII laws, yet reports of data breaches through hacking and other methods have kept increasing. Now that computer technology has taken center stage, with every organization and individual hooked to the internet, it is all the more pertinent that organizations strictly adhere to PII protection laws. PII regulation encompasses various other related laws, such as the following:
- Privacy Act
Some PII Regulatory Compliance Examples
The collection and selling of PII is legally permitted and is required by organizations for various purposes. It becomes a problem, however, when information from PII is exploited for malicious ends by criminals committing cyber crimes and stealing individuals’ identities. This sort of identity theft is a leading cause of concern, since it can cause damaging emotional and financial repercussions for victims. As per FBI statistics, these crimes count among the fastest-growing ones. Various government bodies have enacted legislation to tackle this menace and to limit the ways in which personal information gets distributed. The PII compliance checklist is as follows:
- Name: The person’s full name, including the maiden name or any of the parent’s names and any such alias.
- Address: The address information will include the residential and the office address complete with the street address.
- Email Address: It includes Email addresses also.
- Contact Numbers: The contact address will include the numbers through which the person is accessible, like the mobile number and the office contact numbers.
- Assets: Asset information consists of the IP/MAC address or some other static identifiers that the person uses.
- Identification Numbers: Passport number, Driver’s License, Patient-Identification number, social security number, credit card number- all these form the PII through which a person is identified.
- Others: Birthplace, birth date, religion, geographical indicators, educational data, financial and medical records, and other activities also give essential details about an individual.
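To make the checklist concrete, here is a minimal Python sketch of scanning free text for a few of these PII elements. The regular expressions are deliberately simplified illustrations; production scanners apply far stricter validation (checksums, context, country-specific formats) and do not rely on pattern matching alone:

```python
import re

# Simplified patterns for a few common PII elements; these are
# illustrative only, not production-grade validators.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_for_pii(text: str) -> dict:
    """Return every match for each PII pattern found in the text."""
    findings = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = matches
    return findings
```

A real deployment would pair detection like this with data loss prevention controls rather than rely on regular expressions alone.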
How Can CloudCodes Help Enterprises For Achieving PII Compliance?
CloudCodes’ Cloud Access Security Broker (CASB) solution can help organizations comply with Personally Identifiable Information (PII) requirements. Organizations must identify PII and ensure it is handled carefully. Any compromise of this data in the form of a security breach can have severe repercussions both for the organization carrying the data and for the individual whose information is breached. Hence it becomes essential that the governing entity takes responsibility for protecting and safeguarding sensitive data by governing its usage and ensuring that PII laws are met. Understanding raw data and knowing how crucial it is are the first steps toward cloud security. By helping to guarantee cloud security, the CloudCodes CASB solution can help corporate entities meet their PII compliance requirements. CloudCodes solutions can be customized to an organization’s needs to check unauthorized leakage of sensitive data and help secure it.
What Test Witnesses Need to Know About Accelerometers and Laboratory Testing
Shock and Vibration analysis utilizing accelerometers and laboratory testing is a requirement for many commercial and defense standards. Because accelerometers provide the feedback to the excitation system, proper selection and placement is essential. To achieve optimal testing validity the test witness and test manager should have some knowledge about accelerometers and how they are used so as to provide valuable information to the laboratory personnel setting up the test.
Pre-test Preparation for Dynamic Testing
Having a fixture that is designed for your test item is optimal. Having the same fixture to mount your test item on ensures that a test is repeatable. This fixture should be characterized prior to testing so that no resonances or nulls to the shock or vibration profile are being introduced to the Unit Under Test (UUT). Fasteners used should be characteristic of those to be used in the product’s intended installation and should be tightened to the specified torque. Furthermore the UUT should be mounted on the fixture identically for all tests.
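As an illustration of the characterization step, the spectrum of an accelerometer recording from a fixture survey can be inspected for resonant peaks. The pure-Python sketch below uses a naive DFT and an arbitrary threshold ratio; both the threshold and the approach are illustrative assumptions, and a real laboratory would use dedicated analysis software:

```python
import cmath
import math

def dft_magnitudes(samples, sample_rate):
    """Naive DFT returning (frequency_hz, magnitude) for each bin up to Nyquist.
    Fine for short fixture-survey records; use an FFT library for long ones."""
    n = len(samples)
    out = []
    for k in range(n // 2 + 1):
        acc = sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        out.append((k * sample_rate / n, abs(acc)))
    return out

def flag_resonances(samples, sample_rate, threshold_ratio=5.0):
    """Flag frequencies whose magnitude exceeds threshold_ratio times the median bin,
    a crude indicator of possible fixture resonances."""
    bins = dft_magnitudes(samples, sample_rate)
    mags = sorted(m for _, m in bins[1:])  # skip the DC bin
    median = mags[len(mags) // 2]
    return [f for f, m in bins[1:] if m > threshold_ratio * median]
```

Frequencies flagged this way during the fixture survey warn that the fixture may amplify or null the profile at those frequencies before the UUT is ever mounted.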
Know the Parameters of the Tests
Because of the wide range of possibilities, it is important that the test witness be aware of the characteristics of the profiles to be used in dynamic tests, including the frequency range and the amplitudes. It is a good idea to review this information from the test plan with the laboratory test engineer to ensure proper accelerometer selection. Selecting an appropriate accelerometer is essential because accelerometers vary widely in usable frequency range and amplitude scale.
Mounting of the Accelerometers
The method of mounting an accelerometer can greatly effect its frequency response. Methods for mounting include stud mounting, adhesive, and adhesive mounting pad.
When practical, stud mounting provides the best frequency response. Often a coupling fluid such as grease or beeswax is used to enhance frequency response by compensating for surface flatness or roughness. If one is used, the specific medium should be documented so that the test parameters can be replicated.
There are small differences between adhesives in their frequency responses; Loctite 454 is often used, and generally these work well. For testing where large forces are at play, however, such as the hammer shock tests used in shipboard shock and ballistic shock, adhesives are not advised. They can fail during the test, resulting in a necessary retest and possible over-test of the UUT.
Once the location of the accelerometer(s) has been established and validated, the locations and means of mounting should be documented. Documentation, preferably by photo, should also show the means of securing the accelerometer wiring, because base strains caused by wiring can affect the response of the sensor.
The vast majority of test laboratory engineers are well informed about the dynamic testing they perform on a regular basis. They do, however, need to know any information specific to the UUT. If dynamic testing is to be conducted with the UUT in an operational state, areas that reach high temperatures should be noted; if these areas are used for mounting, the test engineer may have to employ thermal compensation. Additionally, if the UUT generates strong magnetic fields, shielding might be required.
CVG Stategy Test and Evaluation Experts
CVG Strategy has performed test and evaluation for a wide range of commercial and military applications. We have extensive experience in dynamic, climatic, and EMI/EMC. We can provide test program management, test witnessing, test program documentation, and product evaluation. Contact us to see how we can help you get the most from your test and evaluation program. | <urn:uuid:f49157da-c475-4b83-869a-ebfbe23b5628> | CC-MAIN-2024-38 | https://cvgstrategy.com/accelerometers-and-laboratory-testing/ | 2024-09-14T01:36:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00087.warc.gz | en | 0.91594 | 716 | 3.09375 | 3 |
The European Union has banned access to TikTok on its official devices due to concerns over the platform’s Chinese origins and potential security risks. TikTok has been known to collect user data and share it with the Chinese government, raising privacy and data security concerns. Additionally, the platform can be used to spread malicious software or propaganda through its content, making it unsafe for organizational systems.
Overview of TikTok
TikTok is a popular social media platform that allows users to create and share short-form videos. It has seen exponential growth in recent years, particularly among younger generations. However, there have been concerns about the platform’s data collection practices and security measures, particularly because it is owned by a Chinese company.
Recently, the rise of TikTok in social media has been exponential. While this platform can be a great source of entertainment, organizations should be aware of the potential risks they face if they allow access to this platform in their networks.
Potential Security Risks of Allowing Employees to Access TikTok
TikTok is known to collect data from users, which can include personal information, location data, and browsing history. This data can be used for targeted advertising and other marketing purposes. Additionally, the platform is infamous for its lack of security measures, making it vulnerable to cyber-attacks and data breaches. Moreover, TikTok has been accused of hosting inappropriate content and sharing user data with third-party companies, which could result in reputational damage, financial loss, and legal liabilities for the organization. Therefore, it is essential for organizations to weigh the advantages and disadvantages of allowing employees to access TikTok and consider blocking the app in the organization’s network.
To mitigate the risks posed by TikTok, organizations should develop and enforce an acceptable use policy that outlines what apps and websites are permitted to be used on the organization’s network. Employees should also be educated on the potential risks of using TikTok and other social media platforms and encouraged to practice safe online practices. Network security solutions can be implemented to detect and block TikTok traffic and other potentially risky apps and websites. Organizations should also regularly monitor network traffic to identify any unauthorized or suspicious activity, ensure that all devices accessing the organization’s network have the latest software updates and security patches installed, and implement multi-factor authentication for access to sensitive data and systems.
Regularly backing up important data and conducting disaster recovery drills can also prepare the organization for any potential data loss or cyber-attacks.
How can organizations block TikTok Access Through Network Security Solutions
This is difficult to maintain, particularly when employees and visitors bring their own devices. Not allowing the installation of this application is probably not enough.
Organizations can block TikTok access through network security solutions such as those offered by Cubro, which can detect and block the application running on any device on the network. Cubro offers two solutions to block TikTok or any other application accessing the Internet. The solutions can also block and throttle the application on subscriber/user level and be geolocation or time-related.
Cubro’s application filtering solution is a smart decision to alleviate the burden on monitoring and security tools, leading to an optimized return on investment. Filtering network traffic based on applications simplifies the monitoring process and helps eliminate extraneous traffic, including Social Media or video streaming content that isn’t relevant for analysis. By implementing best practices and network security solutions, organizations can reduce the potential security risks posed by TikTok and other social media platforms, protect their data and systems, and maintain a positive brand image.
Conclusion: Benefits of Blocking TikTok Access in Corporate Networks
Blocking TikTok access in corporate networks can provide several benefits for organizations, including:
- Mitigating Security Risks: TikTok collects user data, which includes personal information and browsing history, and shares it with the Chinese government. This poses a significant security risk to corporate networks as it may lead to data leakage and unauthorized access to sensitive information. By blocking TikTok, organizations can reduce potential security risks and protect their data and systems.
- Improving Network Performance: TikTok and other social media platforms consume a significant amount of bandwidth, which can slow down the network and affect the performance of other applications. By blocking TikTok, organizations can reduce the bandwidth consumption and improve network performance, ensuring critical applications run smoothly.
- Maintaining a Positive Brand Image: TikTok is known for its inappropriate content, which can have a negative impact on an organization’s brand image if accessed through its corporate network. By blocking TikTok, organizations can reduce the risk of reputational damage and ensure their employees are not exposed to inappropriate content.
- Compliance with Regulations: Some countries and industries have regulations that require organizations to ensure the security and privacy of their data. By blocking TikTok, organizations can comply with these regulations and avoid potential legal liabilities.
In summary, blocking TikTok access in corporate networks can reduce the risk of cyber-attacks, protect data, and improve network performance. It can also reduce the harm of unwanted content, which could have legal implications for management and free up bandwidth in the network, which unwanted applications can steal. | <urn:uuid:2f347336-7b8d-4ebc-8b68-2a801d8556d1> | CC-MAIN-2024-38 | https://www.cubro.com/en/blog/why-the-eu-has-forbidden-the-tiktok-app-on-official-devices/ | 2024-09-13T23:58:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00087.warc.gz | en | 0.937228 | 1,053 | 2.921875 | 3 |
Slideshow: RFID In Healthcare
Slideshow: RFID In Healthcare(click image for larger view and for full photo gallery)
Say hello to the wireless building of tomorrow, powered by radio frequency identification (RFID) tags. According to a new study, the range of passive RFID tags -- which can function as wireless sensors -- gets a boost, literally, when the tags' antennas are wired to buildings' heating, ventilating and air-conditioning (HVAC) ducts.
How much can their range be extended? Typical RFID tags can be "read" through open air at a range of 5 to 10 meters by reflecting a signal broadcast by an RFID reader. But the researchers' tests found that by using HVAC ducts, which largely consist of hollow metal pipes, readers worked well at a range of 30 meters, and may function reliably even farther away.
The research was conducted and authored by a team from Carnegie Mellon University, Johns Hopkins University, Intermec Technologies and North Carolina State and published in this month's Proceedings of the IEEE.
Today, sensor systems in buildings largely use wired technology and infrastructure to monitor everything from climate control and structural integrity to security and health and safety parameters. But wireless RFID tags could do the job instead. "This would work with anything you can create an electronic sensor for," said Dan Stancil, head of North Carolina State's Department of Electrical and Computer Engineering and co-author of the study. Possibilities include "RFID tag smoke detectors, carbon-monoxide monitors or sensors that can detect chemical, biological or radiological agents."
Furthermore, by using less expensive RFID tags eliminating the need for a wired infrastructure, organizations could lower the cost of outfitting or augmenting a building with the relevant sensors and monitoring. "Because you can tap into existing infrastructure, I think this technology is immediately economically viable," said Stancil. "Avoiding the labor involved with installing traditional sensors and the related wiring would likely more than compensate for the cost of the RFID tags and readers."
According to a study from ABI Research released earlier this year, the RFID market is expected to reach $5.5 billion this year and $8.3 billion by 2014. After a lot of initial hype, the technology has been making slow and steady progress in some sectors, such as for government documentation, retail and apparel, animal tagging and baggage handling.
About the Author
You May Also Like
Radical Automation of ITSM
September 19, 2024Unleash the power of the browser to secure any device in minutes
September 24, 2024Maximizing Manufacturing Efficiency with Real-Time Production Monitoring
September 25, 2024 | <urn:uuid:50d9f6e8-1b9d-4e97-a7b3-9dcf1abe5673> | CC-MAIN-2024-38 | https://www.informationweek.com/it-infrastructure/rfid-tags-range-boosted-by-ductwork | 2024-09-14T00:58:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00087.warc.gz | en | 0.942226 | 553 | 2.75 | 3 |
Consumer behaviors have changed significantly since 2019, leading to a 50% increase in online sales.1 Services like buy-online, pick-up-in-store and curbside pickup have expanded to new retailers and been adopted by more consumers, who intend to continue the practice. In fact, more than 80% of consumers have developed new shopping behaviors.2
Omnichannel retail requires a seamless experience between the digital and physical realms, even extending to delivery partners. Many retailers initially scrambled to meet the challenge but supporting omnichannel customers is worthwhile—they make purchases 70% more often and spend 34% more than those who shop solely in physical stores.3
The shift to digital also creates a larger attack surface, thus increasing security risk. Cybercriminals frequently go after retailers and their customers for financial gain. Basic web application attacks, social engineering (specifically phishing and pretexting), and system intrusion were the top three patterns used in retail breaches in 2021.4 Credentials were the top type of data compromised, making fraud a major concern. Global e-commerce losses to online payment fraud were estimated at $20 billion USD in 2021.5
The competitive advantages of traditional retail are no longer enough, proven by 46% of U.S. consumers changing brands or retailers recently.6 Retailers must continue to optimize digital and omnichannel shopping, providing experiences that will retain and attract customers, while also keeping a growing amount of personal data secure.
Fraud and the abuse of applications by bad actors often occur as a result of automated bot or human-driven attacks, like account takeover and credential stuffing. Many businesses have adopted tools like CAPTCHA to mitigate automated attacks, but they also frustrate legitimate users. And CAPTCHAs can be easily bypassed with solving services or human intervention.
Artificial intelligence and machine learning trained on real human traffic can identify and stop fraud without friction—even sophisticated automation that can emulate human behavior. AI-powered fraud prevention can block automated traffic to keep out attacks and reduce burden on infrastructure.
Applications provide new opportunities to reach customers, such as personalization or virtual assistants that help with purchase decisions and reduce returns. Apps need to be flexible, scalable, and secure to be successful, and running them in the cloud provides reliability and performance.
Whether retailers plan to modernize existing apps or build new ones, they need to be able to deliver quickly and deploy anywhere, including in hybrid and multi-cloud environments. Apps also need to be secure and free from vulnerabilities that can be exploited. By integrating security into the DevOps pipeline, it becomes part of the development lifecycle instead of causing delays at the deployment stage.
APIs allow retailers to collaborate and innovate, connecting the supply chain, delivery partners, and distribution channels. They enable interactive experiences, like virtual try-ons of clothing and makeup or location-based coupons. They can also create borderless channels, eliminating the lines between a physical store, online, mobile apps, and phone support.
Managing APIs effectively requires maximizing performance and monitoring the complete lifecycle. By reducing API call response times, integrations appear seamless to the consumer. For all of their benefits, APIs also add an additional security risk; therefore, real-time protection via security policies, anomaly detection, and access control are needed.
F5 solutions help you modernize applications, manage APIs, and secure sensitive data. With AI, machine learning, and real-time monitoring, you can prevent sophisticated automated and human-led attacks that result in fraud, data theft, and application abuse. F5’s WAF solutions can be integrated into your CI/CD pipeline to protect applications without delaying releases. API management enables integrations that offer great performance and security. F5 Distributed Cloud fraud and risk services mitigate bots and predict fraud across the customer journey. Innovate rapidly without compromising security or adding user friction.
Google Cloud is a flexible, secure cloud provider that embraces open source, making it a platform to migrate infrastructure and modernize applications. It’s also multi-cloud friendly, providing pioneering capabilities around Kubernetes as well as big data and analytics. Google Cloud has always prioritized security; the platform’s strong security and cutting-edge encryption allow companies to safely store and analyze sensitive personal identifiable information. Digital commerce tools from product discovery to retail analytics help you become a customer-centric, data-driven retailer.
Together with F5 and Google Cloud, you can offer a seamless and secure omnichannel experience that attracts and retains customers while exceeding their expectations.
1 Digital Commerce 360, US Ecommerce Sales, February 2022
2,3 McKinsey, The five zeros reshaping stores, March 2022
4 Verizon, Data Breach Investigations Report, May 2022
5 Statista, Value of e-commerce losses to online payment fraud worldwide in 2020 and 2021, March 2022
6 McKinsey, In search of speed: A new way for retailers to organize, October 2021 | <urn:uuid:1a93bc96-1c11-41f3-96b0-27af071a8aa8> | CC-MAIN-2024-38 | https://www.f5.com/pt_br/company/blog/meet-rising-consumer-expectations-digital-transformation | 2024-09-17T15:29:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00687.warc.gz | en | 0.931242 | 1,010 | 2.5625 | 3 |
The global vaccines market is expected to grow at a CAGR of around 6.4% from forecast period 2019 to 2026 and to reach the market value of around US$ 55.2 billion by 2026.
Vaccine is a biological product that provides acquired immunity against a specific disease. It is generally prepared from killed or weaken disease causing microorganism, one of its surface proteins, or from its toxins. Vaccination is the most effective method to prevent infectious diseases. Polio, influenza, HPV vaccine, meningococcal, and dengue are some of the widely used vaccines across the world. Increasing infectious diseases, investment in research and development by market players, government and private organizations initiatives to increase awareness about vaccination, and entry of various vaccines in the market are the key driving factors of the market. According to the World Health Organization (WHO), immunization prevents about 2 to 3 million deaths each year from influenza, diphtheria, pertussis, tetanus, and measles. Furthermore, it can prevent additional 1.5 million deaths if vaccination coverage improves across the world.
Increase in vaccination as it is one of the best cost-effective method to avoid diseases, rising in an access in developing countries due to initiatives by WHO and UNICEF’s immunization programs, and introduction of vaccines against various regional diseases are some of the driving factors of the vaccines market. Favorable reimbursement, growing awareness about vaccination, and investment by market players to introduce vaccines in the market are also positively impact the market growth. According to the recent data published by the WHO, measles mortality decreased by 80% across the world. In 2018, about 86% infants received 3 doses of DTP3 vaccine, protecting them from infectious diseases. Similarly, HPV vaccine was introduced in about 90 countries till 2018. However, high development cost and stringent regulations for commercialization of vaccines are the key restraining factors of the vaccines market. Hence, high cost of product may hamper its access in low and middle income countries. According to the WHO, every year about more than 10 million children die before they reach their 5th birthdays in low and middle income countries.
Based on technology, the vaccines market has been segmented into conjugate vaccines, inactivated vaccines, live attenuated vaccines, toxoid vaccines, recombinant vaccines. The conjugate vaccine segment held the largest share in the market due to long lasting protection and greater ability to give protection to toddlers and infants. Conjugate vaccine has a combination of weak and strong antigen as a carrier so that immune system gives strong response to weak antigen.
Pediatric vaccines accounted for the maximum share of the market in 2018. This can be attributed to large targeted population, government initiatives to increase children vaccination, and increasing awareness about vaccination schedules. Adult vaccine segment is also expecting significant growth in coming years due to rising awareness about regular health checkup and vaccination after 40 years of age.
Based on the route of administration, the market has been segmented into intramuscular & subcutaneous, oral, and others. The intramuscular segment accounted for the maximum share of the market in 2018 and is expected to maintain its dominance during the forecasts period as the route avoid adverse local effect of vaccine.
The market research study on “Vaccines Market – Global Industry Analysis, Market Size, Opportunities and Forecast, 2019 – 2026”, offers a detailed insight on the global vaccines market entailing insights on its different market segments. Market dynamics with drivers, restraints, and opportunities with their impact are provided in the report. The report provides insights into the global vaccines market, its technology, disease indication, type, patient type, rote of administration, and major geographic regions. The report covers basic development policies and layouts of technology development processes. Secondly, the report covers global vaccines market size and segment markets by technology, disease indication, type, type of patient, rote of administration, and geography along with the information on companies operating in the market. The vaccines market analysis is provided for major regional markets including North America, Europe, Asia Pacific, Latin America and Middle East and Africa. For each region, the market size for different segments has been covered under the scope of the report.
The players profiled in the vaccines market report include Astellas Pharma Inc., Bavarian Nordic, CSL Limited, Daiichi Sankyo Company, Limited, Emergent BioSolutions, Inc., GlaxoSmithKline plc, Johnson & Johnson, MedImmune, LLC, Merck & Co., Inc., Mitsubishi Tanabe Pharma Corporation. The key players of the industry are involved in R&D activities, commercialization of products, and mergers & collaborations to enhance their footprints in the industry. For instance, in February 2019, Bharat Biotech acquired Chiron Behring Vaccines Pvt. Ltd. from GlaxoSmithKline Asia. Chiron is one of the largest manufactures of rabies vaccines across the world. The acquisition is helping Bharat biotech to strengthen its product portfolio.
Market By Technology
By Route of Administration
By Type of Patient
Market By Geography
Vaccine is a biological product that provides acquired immunity against a specific disease.
Polio, influenza, HPV vaccine, meningococcal, and dengue are some of the widely used vaccines across the world.
Acumen Research and Consulting predict that the vaccines market value is anticipated to be worth around US$ 55.2 billion in 2026.
The vaccines market is anticipated to grow over 6.4% CAGR during the forecast period 2019 to 2026.
North America held maximum share in 2018 for vaccines market.
Asia Pacific is projected to grow at a fast pace during forecast period in the vaccines market.
Pediatric vaccines accounted for the maximum share of the market in 2018.
Availability - we are always there when you need us
Fortune 50 Companies trust Acumen Research and Consulting
of our reports are exclusive and first in the industry
more data and analysis
reports published till date | <urn:uuid:95582edd-bd16-4283-8ae8-933e2951b666> | CC-MAIN-2024-38 | https://www.acumenresearchandconsulting.com/vaccines-market | 2024-09-18T20:55:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00587.warc.gz | en | 0.940253 | 1,238 | 3 | 3 |
A New Way to Harness IoT Data from Legacy Industrial Systems
If you’re familiar with the manufacturing world, chances are Programmable Logical Controllers (PLCs) are nothing new to you. Having their roots in the automotive industry, PLCs are omnipresent across all modern industries today and akin to the brains of factory automation systems. While coming in many shapes and forms, industrial PLCs are used for a common purpose – real-time production control. A PLC gathers data from thousands of input sensors, processes it and triggers automated actions on the corresponding output device like actuators, alarms and switches.
Integrating IoT for legacy equipment
As the Internet of Things (IoT) disrupts the industrial world, integrating legacy equipment with IoT connectivity becomes paramount. While some might question whether IoT will eventually replace PLC systems, seeing these two as mutually exclusive options isn’t the way to go. Indeed, given massive operational data concentrated in a PLC, being able to interface it with an IoT architecture could unlock unprecedented visibility on the factory floor. Not to mention, PLCs have gained their credit as a robust, versatile automation instrument, and it doesn’t look like manufacturers will move on without them anytime soon.
The Challenge of PLC Communications in Legacy Industrial Systems
Until recently, IoT-enabling PLCs had been a major undertaking. The explanation is simple. Invented in the previous century, they weren’t originally designed to be connected to an external system. PLCs might be able to communicate with each other or to a local dashboard at best, but their core functionality is real-time control, not remote networking. As such, data flows within PLC-managed automation networks are closed-loop and stay locked on the factory floor. On top of that, most, if not all, older PLC models employ a plethora of proprietary, vendor-specific protocols that hamper interoperability and data exchange. As PLCs are intended for several decades of use, it’s not uncommon to find these older models dominating a standard manufacturing facility.
Next-gen PLCs do not come without networking challenges, either. Many of them provide built-in Ethernet capabilities and an onboard web server to be directly plugged into the Internet. Nevertheless, running cables around factories is expensive, dangerous, and conducive to production shutdowns. In many outdoor, geographically dispersed industrial settings with challenging topography (e.g. open-pit mines), Ethernet wiring isn’t even an option. Also, PLC web servers require a case-by-case configuration to enable data sharing with an IoT system.
Despite the common understanding, not all PLCs are integrated into the factory-level Supervisory Control and Data Acquisition (SCADA) system. Doing so often requires complex, error-prone PLC reprogramming alongside cumbersome wiring which cost several weeks of production downtime. Certain SCADA systems are even proprietary and only compatible with PLCs from a specific vendor. This complicates the implementation of a unified remote monitoring network for cross-vendor PLC models and protocols. Find out if Internet of Things going to replace SCADA systems.
Plug-and-Play IoT Connectivity for Legacy PLCs
With the rise of advanced IoT technologies, connecting legacy PLCs is now much less of a hassle. Emerging plug-and-play solutions in manufacturing can help you easily IoT-enable your PLC without invasive hardware modifications and inefficient, labor-intensive cabling. Such a solution brings together two major pieces of the puzzle: a retrofit integration gateway and robust, long-range wireless connectivity. Using automation-specific protocols, the integration gateway interfaces with a legacy PLC to acquire critical data points. An IoT transceiver, connected with the gateway on the other end, then transports the data to a remote base station through a reliable wireless radio link, minimizing cable needs.
With a versatile solution, you can have a wide range of supported PLC models and physical interfaces to choose from, enabling scalable and cost-effective implementation of a larger monitoring and control system. At the same time, it allows you to select only the most essential PLC data tags for sending, instead of burdening your backend with influxes of useless information. The wireless connectivity must also be designed for rebar, structurally dense industrial facilities with heavy radio interference from running equipment and existing networks.
Besides simple and flexible implementation, an optimal PLC integration solution fulfills other critical requirements of your IoT deployment. It renders you with complete data control and ownership to decide whether to keep data entirely on-premises, forward it to a third-party cloud/ application platform for further analytics, or adopt a hybrid approach. Equally important, it ensures the security of your critical automation networks through a one-way wireless link and end-to-end data encryption. As legacy industrial systems were built with few security functions in mind, one-way data transfer from the PLC helps avoid attempts to remotely control machines through reverse communication. More about IoT in the Automobile Industry
Harness PLC data for Real Operational Values
Providing IoT connectivity for legacy PLCs enables companies to close the OT/IT gap and unlock real-time visibility into their operations. Previously, it used to take days or even weeks to collect data and generate operational reports from legacy industrial systems. With this delay, it’s often too late to handle an issue and costs can quickly escalate. Now, real-time insights allow businesses to timely pinpoint and act on bottlenecks to optimize processes and reduce costs. You probably know that in manufacturing, even a small improvement can make a big impact on the bottom line.
How to leverage PLC data to augment operational efficiency:
- Monitor machines and equipment and get alerts when anomalies are detected (e.g. motor overheating and excessive vibration, conveyor jams, valve/ pipelines leakage).
- Execute predictive maintenance through data modeling and machine learning.
- Oversee production rates and counts, cycle time, changeovers, scrap rates…to trace sources of performance losses (i.e. speed, availability and quality)
- Monitor fill levels of tanks and silos for just-in-time refills or emptying.
- Track energy consumption of equipment and systems to identify waste sources.
- Automate data collection for FDA and OSHA audits in industries like pharmaceuticals and food and beverage.
Given their indispensable role in legacy industrial systems, PLCs are poised to play a major role in IoT for manufacturing. With the advent of plug-and-play connectivity, updating brownfield PLCs for IoT is no more an expensive and daunting task. By breaking down data silos and tapping into unprecedented operation insights in real-time, the opportunities to enhance operations and bolster your competitive edge are endless.
Wolfgang Thieme is a Cofounder and the Chief Product Officer at BehrTech, an enabler of next-gen wireless connectivity for Industrial IoT. He has more than 12 years of experience in academic research and development; successfully innovating and implementing new technologies with commercial partners and managing development and product teams. | <urn:uuid:681a605d-8668-435e-b72a-cb648609ff9c> | CC-MAIN-2024-38 | https://www.iiot-world.com/industrial-iot/connected-industry/a-new-way-to-harness-iot-data-from-legacy-industrial-systems/ | 2024-09-18T20:59:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00587.warc.gz | en | 0.904726 | 1,466 | 2.578125 | 3 |
Network security is a broad concept encompassing a wide range of technologies, devices, and processes.
In layman’s terms, it is a set of rules and configurations created to protect the integrity, confidentiality, and accessibility of computer and data networks through software and hardware technologies.
Regardless of size, industry, or infrastructure, every organization requires network security solutions to protect it from the ever-changing landscape of cyber threats in the wild today.
There are many areas to consider when it comes to network security within an organization. An attack or breach can occur in any part of an organization's network. For this reason, your organization's hardware, policies, and software need to be kept up to date and designed to withstand attacks.
A typical network security system consists of three types of network security:
Physical security controls are put in place to prevent unauthorized personnel from gaining physical access to network components within your company.
This entails physically securing equipment such as routers, hard drives, and servers against tampering. Physical network security relies on protective measures such as locks and biometric authentication within your organization.
Technical security mainly deals with protecting the data on your network, both data stored on it and data that regularly moves in and out of it.
Protection here is twofold: data must be secured against outside threats and hackers, as well as against malicious activity by employees within the organization.
The administrative network security system provides network security solutions via security policies and processes that control user behavior.
It controls how users are authenticated within the network, determines their level of access, and governs how the IT department or cybersecurity team implements changes to the network infrastructure.
There are network security tools and devices available to help your organization protect not only its sensitive information but also its overall performance, reputation, and even its ability to stay in business.
Operational continuity and an intact reputation are two essential benefits of effective network security.
Businesses that are victims of cyberattacks are frequently ruined from within, unable to deliver services or effectively address customer needs. Similarly, networks play a significant role in internal company processes.
When they are attacked, those processes tend to grind to a halt, hampering an organization’s ability to conduct business or even resume normal operations.
However, rather more damaging is the negative impact a network breach can have on your company’s reputation.
It’s easy to see what’s at stake regarding network security:
Per a study, 66 percent of SMBs would be forced to shut down (temporarily or permanently) following a data breach. Even more significantly, established companies may be unable to reclaim their former prominence.
On the other hand, trustworthy network security software and hardware, combined with the appropriate policies and strategies, can help ensure that their impact is minimized when cyberattacks occur.
Your network regularly faces threats of all shapes and sizes. Gone are the days when hackers and cybercriminals worked with minimal tools. Today, attackers are usually well funded, allowing them to use sophisticated methods to breach and attack your network.
For this reason, you need to ensure secure networking with the latest network security tools.
Below are 14 network security tools and techniques that every organization needs to implement to prevent attacks:
If cybercriminals cannot gain access to your network, the amount of damage they can inflict will be severely limited.
However, in addition to preventing unauthorized access, keep in mind that even authorized users can pose a threat.
Access control allows you to improve network security by restricting user access and resources to only those parts of the network that are directly relevant to individual users’ responsibilities.
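As an illustration, a role-based access scheme can be sketched in a few lines of Python. The roles and resources below are hypothetical examples, not part of any specific product:

```python
# Minimal role-based access control (RBAC) sketch.
# Roles and resources are invented for illustration.
ROLE_PERMISSIONS = {
    "engineer": {"build-server", "code-repo"},
    "hr": {"payroll-db"},
    "admin": {"build-server", "code-repo", "payroll-db", "firewall-config"},
}

def can_access(role: str, resource: str) -> bool:
    """Grant access only to resources mapped to the user's role."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("engineer", "code-repo"))   # True
print(can_access("engineer", "payroll-db"))  # False
```

Restricting each role to the smallest useful set of resources is the principle of least privilege in action.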
Malware, which can take the form of viruses, trojans, worms, keyloggers, spyware, and other malicious software, is designed to spread through computer networks and infect systems.
Anti-malware tools are network security software that detects and prevents the spread of malicious programs. Anti-malware and antivirus software are also able to assist in the resolution of malware infections, thereby minimizing damage and ensuring secure networking.
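One core anti-malware technique is signature matching: hashing file contents and comparing the result against a database of known-bad hashes. A minimal sketch, with an invented signature database:

```python
# Simplified signature-based malware scan. The "database" here is a
# hypothetical in-memory set keyed by SHA-256 hashes of file contents.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

sample_malware = b"malicious payload example"  # stand-in for a known-bad file
KNOWN_MALWARE_HASHES = {sha256_hex(sample_malware)}

def is_malicious(data: bytes) -> bool:
    """Flag files whose content hash matches a known-bad signature."""
    return sha256_hex(data) in KNOWN_MALWARE_HASHES

print(is_malicious(sample_malware))      # True
print(is_malicious(b"harmless notes"))   # False
```

Real products layer heuristics and behavioral analysis on top of signatures, since hash matching alone misses novel or mutated malware.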
It can be hard to identify anomalies in your network if you don't have a robust understanding of how it should behave.
Network anomaly detection engines (ADE) help you to analyze your network so that you are alerted quickly enough to respond when breaches occur.
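At its simplest, anomaly detection compares live measurements against a learned baseline. A toy sketch using a z-score threshold (the traffic figures are made up):

```python
# Flag traffic samples that deviate sharply from the historical baseline.
import statistics

baseline_mbps = [48, 52, 50, 47, 51, 49, 53, 50]  # hypothetical normal traffic

def is_anomalous(sample_mbps: float, history: list, threshold: float = 3.0) -> bool:
    """Return True when the sample lies more than `threshold` standard
    deviations away from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(sample_mbps - mean) > threshold * stdev

print(is_anomalous(51, baseline_mbps))   # False: within normal range
print(is_anomalous(400, baseline_mbps))  # True: possible exfiltration or DoS
```

Production ADEs model many signals at once (ports, destinations, timing), but the idea of learning "normal" and alerting on deviation is the same.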
Applications are a common point of vulnerability that attackers can exploit. Application security helps establish security parameters for any applications that may affect the security of your network.
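A classic application-security measure is parameterizing queries so user input can never change the structure of the SQL. A small sketch using Python's built-in sqlite3 (table and data are illustrative):

```python
# Parameterized queries defeat SQL injection: the driver treats user
# input strictly as data, never as SQL syntax.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # The ? placeholder binds the value safely.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))         # [('alice', 'admin')]
print(find_user("x' OR '1'='1"))  # []  (injection attempt returns nothing)
```

Contrast this with string concatenation, where the same injection string would rewrite the query and return every row.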
The human element is usually the weakest link in network security. Data loss prevention (DLP) technologies and policies prevent staff and other users from misusing and potentially compromising sensitive data, or from allowing sensitive data to leave the network.
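A DLP content filter can be approximated by scanning outbound text for sensitive-data patterns. The simplistic card-number regex below is for illustration only; production DLP uses far more robust detection:

```python
# Block outbound messages that appear to contain payment card numbers.
import re

# Matches 13-16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def violates_dlp(outbound_text: str) -> bool:
    """Return True when the text matches a sensitive-data pattern."""
    return bool(CARD_PATTERN.search(outbound_text))

print(violates_dlp("Invoice total is $120"))      # False
print(violates_dlp("Card: 4111 1111 1111 1111"))  # True
```

Real systems add checksum validation (e.g. Luhn), document fingerprinting, and exact-match dictionaries to cut false positives.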
Email security, like DLP, focuses on mitigating human-related security flaws. Attackers use phishing strategies to persuade email recipients to share sensitive information via desktop or mobile devices or inadvertently download malware and trojans onto the targeted network.
Email security also aids in the identification of potentially dangerous and suspicious emails and can also be used to stop attacks and prevent the sharing of sensitive data.
Bring your own device (BYOD) is becoming increasingly popular in the business world, where the line between personal and business computing devices has all but disappeared.
Unfortunately, when users rely on personal devices to access business networks, they can become vulnerable targets. Endpoint security provides an added layer of protection between remote devices and business networks.
Firewalls act like gates and barriers, securing the border between your network and the internet. They manage network traffic by allowing authorized traffic through while blocking unauthorized traffic.
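Conceptually, a firewall evaluates each packet against an ordered rule list and applies the first matching rule, falling back to a default deny. A minimal sketch (the rule format here is hypothetical, not any real firewall's syntax):

```python
def firewall_decision(packet, rules, default="BLOCK"):
    """Return the action of the first rule matching (src, port, proto); else default-deny."""
    for rule in rules:
        if (rule["src"] in ("any", packet["src"])
                and rule["port"] in ("any", packet["port"])
                and rule["proto"] in ("any", packet["proto"])):
            return rule["action"]
    return default  # nothing matched: drop by default

rules = [
    {"src": "any", "port": 443, "proto": "tcp", "action": "ALLOW"},          # web traffic in
    {"src": "10.0.0.5", "port": "any", "proto": "any", "action": "ALLOW"},   # trusted admin host
]

print(firewall_decision({"src": "1.2.3.4", "port": 443, "proto": "tcp"}, rules))  # ALLOW
print(firewall_decision({"src": "1.2.3.4", "port": 23, "proto": "tcp"}, rules))   # BLOCK
```

The default-deny fallback is the important design choice: anything not explicitly authorized is blocked.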
Intrusion prevention systems continuously scan and analyze network traffic to identify and respond to attacks as quickly as possible.
These systems maintain a database of known attack methods so that threats can be detected as soon as they appear.
There are various types of network traffic, each with its own set of security risks. Network segmentation enables you to grant the right traffic access while restricting traffic from suspicious sources.
There are different architectures and methods for setting up network segmentation. The right segmentation strategy depends on factors such as the size of your organization.
Getting the right information from so many different tools and resources can be difficult, especially when time is of the essence. SIEM tools and software provide first responders with the information to act quickly.
VPN security tools ensure secure networking by letting endpoint devices communicate over an encrypted tunnel. Remote-access VPNs typically use IPsec or Secure Sockets Layer (SSL) for authentication, resulting in an encrypted connection that prevents eavesdropping by third parties.
Web security is a catch-all term for the measures businesses take to ensure safe web use when connected to an internal network. It includes security tools, hardware, policies, and more, and prevents web-based threats from using browsers as entry points into the network.
Wireless networks are usually less secure than traditional networks. As a result, strict wireless security measures ensure that threat actors do not gain access.
Internet security encompasses a large umbrella of different types of security. These include three distinct types of Internet security, namely:
(Figure: relation between cybersecurity, network security, and information security.)
Network security is the process of safeguarding the functionality and integrity of your network and data. It comprises terms for both hardware and software.
Access to the network is managed by effective network security, which detects and prevents a wide range of threats from entering or propagating on your network.
Cybersecurity is the process of protecting systems, networks, and programs from digital threats. Cybersecurity approaches include methods for assisting and securing various digital components.
Depending on the type of network you are linked to and the type of cyber-attacks you are vulnerable to, there are several approaches to implementing cybersecurity.
Information security refers to the safeguards that keep documents safe from illegal access and usage. It guarantees confidentiality, integrity, and availability. Cybersecurity and network security are subsets of information security, which is critical for every large-scale organization or firm. The data involved might be digital or physical.
There are a number of network security principles that have been developed over the years in order to ensure complete protection of an organization’s network. These principles, when upheld, provide maximum protection to an organization and its data.
Let’s take a look at them below:
The degree of confidentiality determines how secret the information is. According to this principle, only the sender and receiver should have access to the data transmitted between them.
Confidentiality is jeopardized if an unauthorized individual has access to a message.
Assume sender A wishes to convey some secret information to receiver B, with the information intercepted by attacker C. An intruder now has access to private information.
Confidentiality is a very important aspect to consider when it comes to network protection.
Authentication is the mechanism used to identify a user, system, or entity. It verifies the individual's identity when they attempt to gain access to information. Most authentication is done with a username and password: an authorized person proves a pre-registered identity in order to access sensitive information.
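On the system side, username-and-password authentication should never store the password itself. A standard approach, sketched here with Python's standard library, stores a salted hash and compares candidates in constant time:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2 hash; only (salt, digest) is stored, never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Even if the stored hashes leak, the salt and key-stretching make recovering the original passwords expensive.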
Integrity ensures that the information received is precise and accurate. The integrity of a communication is lost if the content of the message is modified after the sender sends it but before it reaches the intended receiver.
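In practice, integrity checks usually boil down to comparing a cryptographic digest computed by the sender with one recomputed by the receiver; any in-transit modification changes the digest. A minimal sketch:

```python
import hashlib

def digest(message):
    """SHA-256 fingerprint of a message; any change to the bytes changes the digest."""
    return hashlib.sha256(message).hexdigest()

sent = b"transfer $100 to account 42"
checksum = digest(sent)                      # sender attaches this to the message

received = b"transfer $900 to account 42"    # tampered in transit
print(digest(received) == checksum)          # False -> integrity lost
print(digest(sent) == checksum)              # True
```

Note that a bare hash only catches accidental or blind tampering; real protocols use a keyed digest (HMAC) or a digital signature so an attacker who rewrites the message cannot also recompute a valid checksum.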
Non-repudiation is a method that prevents the contents of a message from being denied after it has been sent over a network. In rare circumstances, a sender may later deny having sent a communication; non-repudiation prevents the sender from making that denial to the receiver.
Access control is established by the principles of role management and rule management. Role management shows who should have access to the data, whereas rule management determines how one can access the data. The information displayed is determined by who is accessing it.
The availability principle states that the resources will be available to the authorized person at all times. Systems should have enough information available to satisfy user requests. If information is not easily accessible, it is not valuable.
In order to ensure complete protection of a given network or a business, it is paramount to couple your network security with the right network security technologies.
When it comes to said network security technologies, these are what are primarily used in the modern day:
Data Loss Prevention is the technology concerned with determining whether the data sent out by the organization is sensitive enough to impede commercial operations.
Typically, data is exchanged via email, and with this technology, the emails have been checked to verify that they are not transporting confidential material out of the firm.
Using this technology, all emails and attachments are closely checked to verify that all material transferred outside the firm is appropriate and not sensitive.
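To picture how such a check might work, the toy filter below flags outbound text that matches simplified sensitive-data patterns. The patterns are illustrative only; real DLP products use far more sophisticated detection, including fingerprinting and machine learning.

```python
import re

# Simplified example patterns; production DLP rules are much richer.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def dlp_scan(text):
    """Return the names of sensitive-data patterns found in outbound text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

print(dlp_scan("Quarterly report attached."))                                   # []
print(dlp_scan("Customer SSN is 123-45-6789, card 4111 1111 1111 1111"))        # ['ssn', 'credit_card']
```

An email flagged by such a scan would typically be blocked or quarantined for review before leaving the firm.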
An intrusion detection system (IDS) is a technology that monitors every piece of traffic entering an organization to guarantee that it is not hostile.
This technology is primarily concerned with providing a careful look at the traffic to make sure that it is something that the organization should allow in.
It can also be considered a tool in charge of inspecting traffic and generating an alarm if it is discovered to be malicious or looks to have originated from an untrusted source.
The Intrusion Prevention System (IPS) is a system or program that takes action against malicious traffic identified by the IDS. When a packet enters the system that is deemed untrustworthy, the IPS typically drops it.
It is the primary safeguard that ensures harmful traffic does not access the organization’s network. It is IPS that ensures that all traffic entering the system complies with the policies specified by the organizations and does not interfere with the operation of the systems in any way.
Security Information and Event Management (SIEM) is primarily concerned with triggering an alert whenever anything odd is discovered on the organization's network.
It also keeps track of logs that are generated when guaranteeing network security. Several tools can be linked to SIEM to ensure that anything dangerous generates an alert so that the security team can take action and maintain the internal environment safe.
It can also be seen as the central system to which other tools are linked. All of the technologies function as peers, each protecting the network in its own way.
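As a toy illustration of the kind of correlation rule a SIEM applies to the logs it collects, the sketch below raises an alert when a single source IP accumulates too many failed logins (a hypothetical event format):

```python
from collections import Counter

def failed_login_alerts(events, threshold=3):
    """Alert on source IPs with at least `threshold` failed-login events."""
    failures = Counter(e["src"] for e in events if e["type"] == "login_failure")
    return [ip for ip, count in failures.items() if count >= threshold]

events = [
    {"src": "10.0.0.7", "type": "login_failure"},
    {"src": "10.0.0.7", "type": "login_failure"},
    {"src": "10.0.0.7", "type": "login_failure"},
    {"src": "10.0.0.9", "type": "login_success"},
]
print(failed_login_alerts(events))  # ['10.0.0.7']
```

A production SIEM correlates events across many sources and time windows, but each rule reduces to this shape: aggregate, compare against a threshold, alert.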
The firewall serves as the initial line of defense for any system or network. Firewalls are classified according to their function: network firewalls safeguard the internal network from Internet-borne threats, whereas web application firewalls are intended to protect a specific online application.
This technique was created to defend the internal network from anomalous activity and to ensure that nothing dangerous enters the network. The technology assures that the ports are only available for appropriate communication and that untrusted data does not enter the system in any case.
The firewall could either allow traffic to enter or implement port filtration to ensure that any traffic passing through it is relevant to the service running on any given port.
Another tool utilized in cybersecurity is antivirus software. Its name implies that it safeguards the system from viruses. A virus is simply malicious code that causes the host or network to perform unexpected actions.
It is installed on the network and can also be used as end-point security. To protect oneself from virus attacks, any network-connected device can have antivirus software installed.
The antivirus compares files against the signatures in its database to determine whether a particular file is infected with a virus. The most recent antivirus software can also use anomaly detection to identify viruses and take action against them.
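Signature-based scanning, at its simplest, compares a fingerprint of a file against a database of known-malware fingerprints. The hashes and payloads below are made-up examples; real engines also scan byte patterns and behavior, not just whole-file hashes.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-bad content.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"evil payload").hexdigest(),
}

def is_known_malware(content):
    """Check a file's hash against the signature database."""
    return hashlib.sha256(content).hexdigest() in KNOWN_MALWARE_HASHES

print(is_known_malware(b"evil payload"))   # True
print(is_known_malware(b"hello world"))    # False
```

This also shows the weakness of pure signature matching: changing a single byte of the malware changes the hash, which is why modern engines add heuristic and anomaly-based detection.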
Cyvatar provides complete security solutions for your organization, be it your network security or cybersecurity. Connect with our seasoned cybersecurity experts to learn how you can use effortless security services for your business.
As many cybersecurity professionals are aware, the Internet of Things (IoT) is proliferating without any established legal framework or regulatory guidelines. According to a Forbes article, the IoT market is expected to reach $457 billion by 2020, up from $157 billion in 2016. The interconnectivity of network-enabled devices makes the IoT a can of worms when it comes to legal liability. U.S. regulatory agencies, such as the Department of Commerce, the FTC, and the FCC are a long way from establishing clear standards for IoT devices. Could the business that owns a connected device be held responsible for carelessness in the event of a data breach or an attack, or does liability fall only to the manufacturer or the software provider?
Imagine a smart home or a smart factory in which all devices communicate with each other. These devices aren’t all necessarily from the same manufacturer, and they may involve different software providers. They send and receive data from multiple sources. “The attack surface is exploding from an information security point of view,” noted Andrzej Kawalec from HPE security services to Computer Weekly.
The IoT is Different from Past Technological Legal Issues
So far, the Department of Commerce has concluded that the IoT is different from any technological issues society has already faced. So many things can go wrong with network-enabled devices, including interception of data (man-in-the-middle attacks) and DDoS attacks in which the device is compromised and used as part of a botnet. Because IoT devices are so interconnected, determining liability will be more difficult than ever.
With traditional products, liability has normally fallen to the manufacturer/service provider. However, as Cheryl Falvey, co-chair of Crowell & Moring’s Advertising Product Risk Management Group, noted at the first Internet of Things National Institute in 2016 held by the American Bar Association, “the diversity of the IoT field has turned the typical regulatory landscape on its head.” If a user of an IoT product doesn’t take basic cybersecurity precautions and an attack occurs, who’s liable? Is it the manufacturer, software provider, the user’s company, or all the above? If the user interconnects a device and uses it for a purpose that is not intended by the manufacturer or software provider, who’s responsible? These are risks your company should be thinking about, whether you are a user or a producer of IoT products.
Where We Are Today
Clete Johnson, Senate Adviser for Technology Policy from the Department of Commerce, also participated in the American Bar Association's Internet of Things National Institute. He outlined the Department's approach to regulating the IoT as focused on the following four broad priorities: (1) a free and open internet; (2) trust and confidence in the privacy and integrity of the online economy, both from business and from the public; (3) broad internet access; and (4) innovation, innovation, innovation.
So far the only legislation manufacturers and service providers have to go on is the 2016 European Network Information and Security Directive (NIS), which imposed the first general obligation to maintain adequate security in certain network and information systems, and a bill that is currently proposed in the U.S. Senate – the IoT Cybersecurity Improvement Act of 2017. Unfortunately, there hasn’t been much movement on this bill since it was introduced.
What Can You Do to Protect Your Company?
If you are a user of IoT products, be sure to follow cybersecurity best practices. NIST initiatives for the IoT is a good place to start. If you are a manufacturer of IoT products, consult with a law firm that has experience in this area. Andrew Austin, DR Partner with Freshfields Bruckhaus Deringer, who works with IoT clients in the U.S. and in Europe says that for now, who pays in a liability suit will be decided in large part by what is written in contracts. He recommends:
- Be as clear as possible in your contracts about what a product is and isn’t designed to do.
- Whether you are the hardware provider, network provider, or a software vendor, be clear about the allocation of risk.
- Try to obtain rights to access data to determine the root cause if something goes wrong.
- Explore cyber insurance options. While choices are still limited, this market continues to grow and evolve.
- Be proactive. Monitor performance and act quickly to correct flaws or vulnerabilities in your product/service.
Edge computing makes it possible to process data closer to where it was created, but as it becomes more prevalent, it’s having a major impact on communications, especially when it comes to Industrial Internet of Things (IIoT) security. To optimize edge computing communication models for your organization, you have to understand how it’s used and what security risks come along with it. So how does edge computing increase operational technology (OT) risk, and how is it changing communications?
Traditional OT communication protocols depend on a model that requires point-to-point connections that use a request-response or poll-response communication model, according to Josh Eastburn of Opto22. If you have a piece of hardware that needs to create a unique connection to, for example, a piece of software that’s consuming its data or to another piece of hardware that it is requesting data from, that’s point-to-point.
“As the number of devices grow that want to communicate, the number of connections grows,” Eastburn said. “Each of those devices needs to remain open to incoming connection requests because the way that the communication model works requires some primary device or application send[ing] out a request for information.”
When the recipient sends back its data, the device on the edge doesn’t get to determine when it responds. It just sits in an open and listening position. The network is built out of servers that are wide open to incoming connection requests. Looking at that from a security perspective, it’s a big problem.
“We don’t have nearly enough firewalls out there in the process,” Eastburn said. “These devices aren’t capable, generally speaking, of protecting themselves from a malicious request.”
As vendors introduced new communication protocols over the years, they were doing it primarily for the speed they could offer and their ability to gather input/output (IO) data or send IO data out. They were not looking at it the way cybersecurity professionals would for a device that was intended to go on the internet, according to Eastburn. The assumption was that they were not operating in a high-risk environment because there were other things keeping it safe. The underlying problem is that the communication model lends itself to a lack of security, because it requires that everything is open, listening and essentially ready to receive a bad request.
Edge computing and communication
According to Eastburn, edge-oriented architectures shift the communication model to focus on the device and on creating outbound connections rather than incoming connections. An edge device exists on the edge of the network rather than being oriented toward the core of the network, where the majority of processing generally occurs. Edge-oriented architectures create new options and are an evolution on the traditional request-response model. One such evolution is the publish-subscribe communication model.
MQ Telemetry Transport (MQTT) is a great example of that. Many people are taking an interest in MQTT because of the way it affects security. Using a publish-subscribe model flips that relationship on its head. Rather than having a bunch of servers in your network that are open, listening and ready to respond to requests, they become clients and publish data when they’re ready by creating an outbound connection request to a central server.
“It makes it a lot easier to secure that kind of system, because you have essentially one node or one cluster that needs to be sitting there with an open port,” Eastburn said. “The rest of the network can be closed off to incoming connection requests because it’s just not required by that communication model.”
If those devices aren’t sitting in a listening mode, they get to determine when they make a connection and who they make it to, rather than the other way around. This means security is less of an issue, as complexity is greatly reduced.
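The publish-subscribe flow described above can be sketched with a tiny in-memory broker: clients only ever make outbound calls to the broker, and the broker fans messages out to subscribers, so no edge node has to accept incoming connections. This is illustrative only; a real deployment would use an actual MQTT broker.

```python
from collections import defaultdict

class Broker:
    """Toy in-memory pub/sub broker: the single listening node in the network."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Fan out to every consumer; the publisher makes one outbound call
        # regardless of how many consumers exist.
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
received = []
broker.subscribe("plant/line1/temp", lambda t, p: received.append((t, p)))   # e.g. SCADA
broker.subscribe("plant/line1/temp", lambda t, p: received.append(("mes-copy", p)))  # e.g. MES

broker.publish("plant/line1/temp", 72.5)   # edge device pushes when it is ready
print(received)  # [('plant/line1/temp', 72.5), ('mes-copy', 72.5)]
```

Adding a third consumer requires no change at the edge device; it still makes exactly one outbound connection.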
“Where you have an IO device communicating to a PLC (programmable logic controller), communicating to a SCADA (supervisory control and data acquisition), communicating to MES (manufacturing execution system), to ERP (enterprise resource planning), and on and on — and with levels of security in between all of that — the device itself can actually exist on the network and be inherently protected from a malicious request or from something like a man in the middle attack.”
With that architecture, you don’t need multiple layers of defense because the device itself is inherently secure. The draconian security policies sometimes found in industrial environments, like only having static IP addresses, are no longer necessary.
Reducing complexity, increasing efficiency
Reducing the complexity of the network also increases the ability to manage it efficiently.
“You don’t have so many devices. You have fewer points of contact with the outside world that need to be managed and monitored,” Eastburn said. “The number of point-to-point connections is also reduced in, for example, an MQTT network. Each client node, which again would be an edge device that’s communicating with your central MQTT server, makes one connection regardless of the number of data consumers.”
A PLC that’s capable of communicating over MQTT is generally going to make its one outbound connection to the MQTT server. From there, the MQTT server gets to determine who’s going to consume that information. If more consumers are added to the network, that controller doesn’t need to create more connections.
This results in fewer connections to secure, and each of the connections in your network is inherently more secure. As a result, the whole architecture becomes simpler.
Check out Part 1 of our interview with Opto 22’s Josh Eastburn, where he discusses the benefits of edge computing and how to develop the best edge security strategy for your organization. And check out our Industrial Cybersecurity Pulse YouTube page to view previous installments from our expert interview series.
As we celebrate Disability Pride Month and the 34th anniversary of the passage of the Americans with Disabilities Act (ADA), it’s crucial to recognize the achievements, contributions, and resilience of individuals with disabilities. This month serves as a reminder of the importance of inclusivity and accessibility in every aspect of our society, especially in the digital realm.
On a personal note, Internet access has a huge impact on my life. I am Deaf and have first-hand experience with how high-speed Internet assists the disability community. I visit my audiologist annually to ensure my cochlear implant is working properly; I've used the same doctor in Alabama for over two decades. She can now make adjustments remotely and conduct the visit via high-speed Internet. I no longer have to take two or three days away from work and family to drive to Alabama for one visit.
At NTIA, the ADA continues to guide our progress as we move closer to the promise of equal opportunity, full participation, independent living, and economic self-sufficiency for the 61 million individuals with disabilities in our country. Our communities grow stronger when all people can access the Internet, public accommodations, employment, transportation, and community living regardless of disability.
The digital divide disproportionately affects people with disabilities, making it challenging for them to access the same opportunities and resources as their non-disabled peers. This gap is not just about having access to technology but also about ensuring that digital tools are usable and accessible to everyone.
The barriers people with disabilities face can include physical, cognitive, and sensory challenges that make using standard digital devices and services difficult. Additionally, the cost of adaptive technologies and the lack of accessible digital content further exacerbate this divide.
It’s also important to acknowledge that individuals with disabilities often belong to other marginalized groups, such as racial and ethnic minorities, LGBTQ+, rural or low-income communities. These overlapping identities can exacerbate the barriers they face, making digital equity even more critical.
The Digital Equity Act programs, part of the Internet for All initiative, aim to address these challenges head-on. These programs are designed to empower individuals and communities by providing the tools, skills, and opportunities needed to benefit from meaningful access to affordable, reliable, high-speed Internet service.
People with disabilities are a focus of these programs. By providing resources and support tailored to their unique needs, the Digital Equity Act helps bridge the gap, enabling greater participation in education, employment, healthcare, and civic life.
As NTIA continues accepting applications and funding projects through our Digital Equity programs, we have provided a background resource to help facilitate inclusion of perspectives from and support of individuals with disabilities.
The promise of a connected America
As more people begin to get online, we are looking forward to what a fully-connected America looks like. For individuals with disabilities, high-speed Internet access enhances participation in society in many ways. It allows for greater independence, access to remote work opportunities, online education, telehealth services, and social connections that might otherwise be out of reach.
Pursuing Remote and Flexible Work and Learning Opportunities
High-speed Internet access supports remote work, online education, and entrepreneurial opportunities. People with disabilities, along with employers and educational institutions who serve them, can access opportunities and participate in learning communities focused on equity and opportunity in the workplace.
Accessing Telehealth Services and Managing Health Information and Appointments Online
Assistive technologies and Internet access improve the quality of life and economic participation for individuals with disabilities. These technologies include applications and devices designed to assist with communication, mobility, reading, and daily tasks.
As more and more people get online for the first time, it’s important to remember increasing online accessibility goes beyond efforts to ensure users are connected to high-speed Internet.
Section 508 compliance makes information and communication technology (ICT) more accessible for all users and can include font size, color palette, plain language, image descriptions, closed captioning, and more. Visit the FY24 Governmentwide Section 508 Assessment to learn more about how to ensure and test for Section 508 compliance. The General Services Administration’s quick start guide to accessibility for teams includes role-specific considerations.
Upcoming funding opportunity
NTIA will soon announce the launch of the Digital Equity Competitive Grant Program, one of three Digital Equity Act grant programs created by the Bipartisan Infrastructure Law. These programs aim to empower individuals and communities with the tools, skills, and opportunities they need to benefit from meaningful access to affordable, reliable, high-speed Internet service.
Philip Powell is the National Telecommunications and Information Administration Federal Program Officer for Arkansas. He works directly with the Arkansas Broadband Office to provide input and support. Powell also coordinates with stakeholders and state and local government entities to provide resources so that everyone in Arkansas will have access to affordable and high quality internet. This article was originally published on the NTIA blog on July 19, 2024, and is reprinted with permission.
Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to email@example.com. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.
Protecting Your Users from Pharming
This post is the fourth and last in a series on how to protect users from common online security threats.

Let's assume for a minute that you need to send a confidential and important letter to someone. You're extremely careful, of course, to ensure that the address is correctly written on the envelope. You drop the envelope into the post box, and it is picked up by the postal service, where workers dutifully and accurately interpret the address and use a map to find the recipient's address before delivering your letter. Wonderful! Just as you're about to celebrate, you're told that the streets on the map the post office used were labelled incorrectly. How confident are you that your correctly addressed envelope was delivered to the intended recipient? You may have just been 'pharmed.'

Pharming (pronounced 'farming') involves changing the underlying mapping of a website's URL to direct traffic to an alternate web address. Depending on the nature of the content being viewed, victims of pharming attacks can be fooled into giving away confidential information in the belief that they are at the correct address. What's particularly bad about pharming is that users can correctly enter a URL into a browser and still end up at the wrong website.

To understand how pharming works, one needs to understand the postal system of the Internet - the Domain Name System (DNS). Every website on the Internet has at least one IP address associated with it. This is a numerical representation (e.g., 22.214.171.124) of the address for the website. Since it's difficult to remember numbers like this, we use nicknames for websites (e.g., www.egnyte.com). When you enter a website's nickname into your browser, it tries to get the underlying address behind the nickname from the following sources (in the order specified):
- A local file on your system (called a ‘hosts file’).
- DNS servers in your local network (companies can use these to prevent access to predefined web addresses).
- Public DNS servers.
Think of DNS servers like maps that tell you exactly where the nickname you entered in your browser should take you. Pharming involves compromising any of these maps to send users to the wrong destination. For example, if you want to log in to your online banking account and you enter the address correctly, you might be taken to a site that looks very similar to your bank's website. Your browser will even tell you that you're at the correct address (www.yourcompletelysafebank.com), but you are actually at a website which an attacker has set up (correct nickname, wrong IP address). You then enter your username and password to log in and give away your credentials. By the time you realize something is amiss, your attacker already has your information. In 2004, a German teen was able to hijack his country's eBay domain name, leaving thousands of users redirected to a bogus website.

Preventing pharming attacks involves teaching users to be mindful of the format of URLs they use and setting up extra verification steps, in addition to a username and password, when logging in to any service on the Web. By checking that any secure page is using a URL beginning with 'https,' users can have further assurance that they are at the correct website. Connections through the https protocol are secure and involve using a special certificate and 'handshake' method to ensure that your machine is speaking to the correct server. Compromising this secret handshake is almost impossible for an attacker, and modern browsers verify that the handshake has correctly occurred.
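The lookup order described above can be modeled directly. The sketch below (hostnames and documentation-style IP addresses are hypothetical) also shows why a single poisoned hosts-file entry wins over a correct public DNS record:

```python
def resolve(hostname, hosts_file, local_dns, public_dns):
    """Mimic browser lookup order: hosts file first, then local DNS, then public DNS."""
    for source in (hosts_file, local_dns, public_dns):
        if hostname in source:
            return source[hostname]
    return None  # name not found anywhere

public_dns = {"www.yourcompletelysafebank.com": "203.0.113.10"}      # legitimate record
clean_hosts = {}
poisoned_hosts = {"www.yourcompletelysafebank.com": "198.51.100.66"}  # attacker's entry

print(resolve("www.yourcompletelysafebank.com", clean_hosts, {}, public_dns))     # 203.0.113.10
print(resolve("www.yourcompletelysafebank.com", poisoned_hosts, {}, public_dns))  # 198.51.100.66
```

Because the hosts file is consulted first, the public DNS record is never even queried once the local map is poisoned; the browser happily displays the correct nickname while connecting to the wrong address.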
If a pharming attack has already taken place, using two-factor authentication (e.g., with a phone call or SMS code) can render any credentials acquired by attackers useless, since they are unlikely to be able to supply the additional information needed to pass the second method of identity verification (unless they also have your mobile device).

Egnyte has partnered with Duo Security to offer a robust Two-Step Login Verification system, which IT admins can use to enforce identity verification and mitigate the effects of pharming. Egnyte’s network also uses secure TLS encryption for all https connections to allow modern browsers to verify that users are at the correct location when browsing a website.
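Codes delivered by an authenticator app are typically time-based one-time passwords. Here is a minimal sketch of the standard TOTP construction (RFC 6238, HMAC-SHA1 variant), which illustrates why a stolen password alone is not enough to log in:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = int((time.time() if timestamp is None else timestamp) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 yields ...287082.
print(totp(b"12345678901234567890", timestamp=59))  # 287082
```

The code changes every 30 seconds and depends on a shared secret held only by the server and the user's device, so a pharmer who captures a password (and even one expired code) cannot replay it later.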
Get started with Egnyte today
Explore our unified solution for file sharing, collaboration and data governance.
LATEST PRODUCT ARTICLES
Don’t miss an update
Subscribe today to our newsletter to get all the updates right in your inbox. | <urn:uuid:e30cce5a-5d7e-4ad3-b045-62914ffb2565> | CC-MAIN-2024-38 | https://www.egnyte.com/blog/post/protecting-your-users-from-pharming | 2024-09-10T09:55:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00487.warc.gz | en | 0.937605 | 969 | 2.59375 | 3 |
Do you think they need your date? Do you think they need access to your credit cards? There is something more valuable for hackers than you think.
One of the main targets of modern hackers is to get access to your servers. It allows them to use it as an email relay for spam.
But what else they can do with this access?
Some hackers can use access to your servers as a part of a botnet. What does it mean?
It means that they can use your sources to mine for Bitcoins. That’s why they usually don’t interested in your bank cards or data. They have their own target. Are you surprised?
Don’t be afraid. In this article, you’ll learn how to secure your website. Make a long story short, you’ll learn:
- How to protect personal data;
- How not to become someone’s bitcoin farm;
- How to prevent hackers from gaining access to your server.
If you don’t want someone to use your web design and development company with a purpose to mine Bitcoins or something then read this article to the end.
Let’s dive into this valuable topic!
Maybe it’ll sound obvious but keeping your software up to date is pretty important. It’ll increase your chances to stay secure.
You don’t need to worry if you use a managed hosting solution because the hosting companies take care of the most dangerous things that can theoretically happen.
If you use CMS or forums then make sure you are quick to apply any security patches. By the way, most vendors have an RSS feed that will inform you if there is a threat of hacker interference.
An update is key to security. Don’t be lazy to do this regularly. You may not feel significant improvements with each next update, but you’ll always be sure that it’s more difficult for hackers to crack you.
Let’s talk about SQL injection
SQL injection attacks usually occur according to plan like this.
The hacker uses the input form or URL parameters. Through them, he gets access to the database, with the help of simple manipulations. It’s very easy to do if you are using Transact SQL. Moreover, usually this happens unnoticed, and more likely you don’t suspect anything at all.
Almost all programming languages have such a feature as parameterized queries. It’s easy to implement. You should do it if you want to be secure from SQL injection.
The main problem with passwords is that not many people come up with really strong passwords.
Here are the strong password criteria:
- It should be at least 8 characters, but better more;
- It should contain both letters and numbers;
- It should contain letters of a different register;
- In any case don’t make a password the date of your birth.
Follow these guidelines to create a truly strong password. Encourage your site users to create strong passwords for their accounts.
How to push users to this? It’s pretty easy. You need to create a registration form so that the site rejects passwords with less than 8 characters. Then your users will be more or less protected.
Also, create a pop-up window that reminds users of the importance of a strong password. It’s very simple to do, and your users will feel that you care about them.
Good website moderation
This is one of the surest ways to secure your site. You need to ensure a daily and good website moderation. The main points of moderation are as follows:
- Checking the entire website;
- Spam removal
- Prompt provision of any technical changes to the site.
These are just the most basic points to take care of. Does your website have reliable moderation?
You must remember that every 3-4 months you need to scan. It’s easy to forget about it, but if one day you lose your own data, then you will start to treat it more responsibly. Is it worth learning from bad experience if you can hear this advice and add a scan to the list of obligatory tasks for working with the site?
A quarterly scan is performed by PCI through the Trustwave service.This is done very easily, and doesn’t bring as many problems as the absence of this item can bring.
Think twice, and never skip a quarterly scan.
Use HTTPS for Better Protection
HTTPS is a protocol that provides even greater protection then HTTP. This protocol uses next-generation encryption algorithms. It provides the formation of a secure communication channel between the user’s browser and the site.
Of course, you’ll need a credit card for registration. But is it really a problem when nowadays everyone is paid by credit card.
This way you increase the level of security of your site. And if you really care about your users, then you should do it.
Although many websites have already switched to HTTPS.
Use special security tools
There are some special security tools that will help you keep track of security:
Are you looking for a perfect security scanner? You came into the right place. More than 25.000 scans. It’s also open source. Check it to get all the advantages of OpenVAS.
It’s also a very important security tool. It’ll help you to report as quickly as possible. If something went wrong you won’t lose your time. Try it to know how can you protect your website.
There is a free community. Also, you can get a trial version. Netsparker is the best for testing SQL injection.
Safety isn’t everything
Trying to make your website more secure, don’t forget that security is only a good condition for the development of your website. How useful your website is depending on the quality of the content.
Make your website protected. Use it for your needs, and not for the needs of hackers. | <urn:uuid:fa8abd19-d08b-41cc-94e1-500f3b31e88a> | CC-MAIN-2024-38 | https://kalilinuxtutorials.com/security-tips-to-protect-website-from-hackers/ | 2024-09-11T15:39:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651390.33/warc/CC-MAIN-20240911152031-20240911182031-00387.warc.gz | en | 0.936255 | 1,245 | 2.609375 | 3 |
Over the past several years, cable operators have seen their networks transform into the premier platform for transmission of data services, both for residential and business customers. In order to accommodate the growth of services and transmission speeds, the networks have been divided into smaller and smaller clusters of customers forming independent service groups.
When fibre came into widespread use in cable television, it was common for one node to serve a cascade of several trunk amplifiers, with each amplifier feeding dozens of line extenders before the RF signals would finally arrive at a customer. That architecture could involve thousands of homes per node. Today, most nodes serve about 500 homes passed and do so via line-extender amplifiers only.
In order to be ready for the near future, these nodes are being sub-divided into smaller service groups, typically feeding 125 or fewer homes each. These smaller numbers allow for gigabit services to be delivered effectively along with legacy video services.
Looking towards the longer-term future, fibre to the home (FTTH) options will demand even smaller service groups, typically 32 or 64 homes each.
With all of these architectures having one thing in common – the need for fibre transmission paths between the headend and service group – let’s take a look at the access network choices and technologies available to help deliver gigabit services.
Supporting a fibre-deep network
Initial system design, and how demands have been addressed to date, are two key areas in assessing how many fibres are available to serve a collection of service groups. Take, for example, a typical Node+3 HFC (Hybrid Fibre-Coax) system designed for 500 homes per node in the early 2000s (Figure 1). Since fibre cable at that time was bundled in buffer tubes or binders of six fibres each, many systems designed six fibres per node. Matching the nodes to the buffers made splicing and organising the fibres easier. Each node would require one fibre for downstream, and one for upstream, leaving four fibres for spares. Business services may have used two more fibres for upstream and downstream dedicated fibre service to a business location, or perhaps all four to feed a ring. Preferably, the ring would involve only two fibres from any single node location for route diversity.
If a node split was required, any spare fibres available could be used, however a 4x4 split requires more than that. We need four service groups where there was just one.
To solve this, 1310/1550nm WDM, coarse wavelength division multiplexing (CWDM) or dense wavelength division multiplexing (DWDM) techniques were, and still continue to be, employed.
A single spare fibre could be used to replace the fibre feed to four or more service groups.
With passive analogue techniques, this method worked well enough to do the job. But, what happens when we want to divide further, as would be required for a Node+0 design, or to consider a PON for FTTH? In this case, there may not be enough fibre from the headend to the aggregation point to do the job.
PON requires a single dedicated fibre from the optical line terminal (OLT) to the splitter for each service group. In a true all-passive-plant PON, the OLT is located at the headend; we would need a fibre from that headend out to each splitter location. The required fibre count leaving the headend could be in the thousands. In addition, the loss budget required to feed the splitters might also require a maximum 16-way split, rather than the usual 32, further increasing the count.
Asking this infrastructure to support a new fibre-deep architecture may be asking too much, but what about DWDM, both conventional and coherent (recently developed by CableLabs)?
As an example, let us take the above Node+3 system and attempt to convert it into a Node+0 system, and also a PON system. The access network can be broken down into three distinct sections: Headend to Distribution Point (DP); DP to Node or OLT; and Node or OLT to Premise.
When faced with insufficient fibres to feed the aggregation point, the only option was to overlay more fibre cables from the headend out. Now there are more options, and regardless of the option chosen, we want to make certain that we do not have to face this choice again in the future.
Let us examine what happens when we convert a Node+3 system into a Node+0 system (Figure 2). Each former Line Extender location becomes a new mini-Node location. There are other options to reduce the node count, such as employing high-output nodes and reversing the tap strings in the coax, but these methods only have a slight effect in the overall node count while increasing the length of service outages involved in the conversion.
What if we create a PON network from the former node location instead? Figure 3 shows that the neighbourhoods previously served by a line extender are now served by a splitter location instead. By remoting the OLT to this distribution point, we can overcome the fibre shortage and loss budget problems mentioned above.
If we compare the diagrams for these two network options, it is apparent that the required fibre counts are very similar, but the total sheath distance of fibre required is different because only the FTTH network requires installation of new fibre cable beyond the node/splitter location.
In addition, the FTTH network will require drop cables from the plant to each home.
Another important consideration is fibre availability for business services. In a primarily residential network, many business services can ride the same coax or PON network used to serve homes. In some cases, however, the operator may require more. A business might demand their own dedicated fibre for security purposes, or for an unusually high bandwidth demand.
They might require a wholly different architecture, such as a ring, for route protection. For this reason, no matter what the required count for serving residential requirements, there should be extra fibres installed for future business services. The amount will depend upon the commercial density of the particular geographic area.
When it comes to cellular networks and upcoming 5G, these networks traditionally use fibre, but with very small cell sizes, a Coax or PON network could accommodate the required bandwidth. Some Radio Access Networks (RAN) work with dark fibre connecting several radio heads to a common controller.
The bottom line is extra fibre strands should be planned to accommodate any future requirements.
A future-ready network
Ultimately, the architecture required to build advanced networks is remarkably similar between PON/FTTH and HFC Node+0. We can build lots of ‘transport’ plant to feed these fibre distribution clusters, or we can make prudent use of active and passive solutions, including Remote OLTs and DWDM, in order to keep the construction of new fibre cable to a minimum.
By analysing the financial implications of the various options, an operator can decide the best choice for their particular situation and be prepared for bandwidth demands that the future will bring.
Vanesa Diaz is market development manager EMEA at Corning Optical Communications | <urn:uuid:09a5ee3c-83fb-44e3-bcc5-bfcce1f24a4e> | CC-MAIN-2024-38 | https://www.fibre-systems.com/feature/advance | 2024-09-11T17:46:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651390.33/warc/CC-MAIN-20240911152031-20240911182031-00387.warc.gz | en | 0.940328 | 1,488 | 2.578125 | 3 |
FortiGates are interface driven firewalls. Policy is relatively straight forward. Port 1 to Wan 1 Allow HTTP NAT you get my drift. In more complex environments though where you can easily have 5-10 interfaces (even more if you bring in VLAN’s) you will most certainly want to use Zones.
What is a zone? A zone is a created “Interface” that you assign other interfaces to. For instance, my common deployment has 2 main zones, INSIDE and OUTSIDE. This keeps policy extremely simple.
The train of thought with this ZONE setup is traffic is either coming in or out. From there you just create the policy and work accordingly. This makes deployments for my clients super easy.
The setup at my house is utilized this way as well (I have a FortiGate 92D at home). My setup is slightly more advanced though thanks to having dual internet connections, SSL VPN, and other capabilities kicked on. But as you can see in the policy set below I have an INSIDE zone. That zone has my work network, my personal home network, and my DMZ wireless network (for when I am cleaning peoples deranged and abused machines). I have each one assigned to the INSIDE zone so that I can apply the same policy for traffic that is traveling from inside sources to the internet. This greatly reduces policy count and helps keep things uniform.
Disclaimer: Make sure to click the “Block Intra-Zone Traffic” check box when creating a zone that includes a set of networks that you don’t want to communicate without policy. For instance, my INSIDE zone has my work network which I need to make sure only my work laptop can see, My personal network which sees everything on the personal net, and a DMZ network that I absolutely don’t want ANY of my other networks to receive traffic from or send traffic to. So I check the “block intra-zone traffic” box when I create my zone (can be edited after the zone is created as well) and then manually allow it via policy (work network is able to access printer on personal net etc). Remember, the more granular you are the better your security will be. Also, the only traffiic that should be able to flow is the traffic you explicitly allow. | <urn:uuid:d441bcc3-bf2d-48db-b947-2b1b0e8e19a5> | CC-MAIN-2024-38 | https://www.fortinetguru.com/2016/03/zones-will-save-your-sanity/ | 2024-09-14T03:18:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00187.warc.gz | en | 0.946709 | 479 | 2.625 | 3 |
In response to concerns faced by corporations about the impact of technology on our environment, IBM founded the Responsible.Computing() movement, creating a membership consortium with Dell that is managed by the Object Management Group.
Customers and partners expect responsible corporate policies and practices, and they also build loyalty with employees. Responsible computing establishes a cohesive, interconnected framework across six critical domains to provide every organization the ability to educate on their responsibilities, define goals and measure their progress against these aspirations:
The Responsible.Computing() framework’s six domains of focus
The framework consists of six domains of focus that demonstrate how IBM reflects responsibility:
Infrastructure: An end-to-end approach into the lifecycle of computing units, such as sourcing precious and rare metals and reducing their usage, reducing energy and computing units, and ultimately recycling the units
Data usage: Implementing data privacy and transparency practices and requesting consent for use and authorization for any data acquisition.
Systems: Producing ethical and unbiased systems, including explainable AI.
Impact: Addressing societal issues, such as social mobility.
In an era where technology is pivotal in shaping politics, digital economies and transitional industries (e.g., the automotive industry), the Responsible.Computing() framework expands beyond just sustainability and climate change and adds other domains, such as ethics, security and openness.
Download the white paper and learn more
IBM Cloud is a major contributor to the advancement of technology, but beyond providing technical breakthroughs, we believe in preserving data privacy and security and promoting trust and ethics in our products, right from design. IBM Cloud collaborates with industry, governments and regulators to protect the environment, reduce consumption, preserve data privacy, write efficient code and develop trusted systems, all while tackling modern issues in society.
Learn more on how IBM follows the Responsible.Computing() framework in the Cloud computing space in the white paper “IBM Public Cloud – A Responsible.Computing() Provider.” | <urn:uuid:0df01bae-88ba-4930-872c-df3a34fb16ef> | CC-MAIN-2024-38 | https://www.ibm.com/blog/ibms-dedication-to-responsible-computing/ | 2024-09-14T05:35:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00187.warc.gz | en | 0.915899 | 408 | 2.921875 | 3 |
Edge computing can visualize businesses in real-time. Retailers can make the most of edge computing to give their customers the best experience possible. As a result, edge computing is one of the most effective ways to improve the business and reintroduce customers to retail stores. Continue reading to know more about Edge Computing and its benefits to the business industry.
Edge Computing: An Introduction
Edge computing is a distributed IT architecture. It processes the client's data close to the source at the network's periphery.
Data is the lifeblood of modern business. It also allows for valuable business insight. It also gives real-time control over critical business processes and operations. Data is routinely collected from sensors and IoT devices. These devices operate in remote locations and inhospitable operating environments.
But, the way businesses handle computing is changing as a result of this massive amount of data. The traditional computing paradigm is not suited to handling huge amounts of real-world data. Bandwidth constraints, latency issues, and unpredictably disrupted networks can all sabotage such efforts. Edge computing architecture helps businesses to address these data challenges.
Edge computing moves some storage and compute resources closer to the data source. Processing and analysis of data are done at the place of data generation. This data source can be in a retail store, a factory floor, a sprawling utility, or across a smart city. The results of that computing work at the edge are sent back to the main data center for review and other human interactions. It gives real-time business insights, equipment maintenance predictions, or other actionable answers.
Use edge computing to improve the IT infrastructure in the healthcare industry. Edge computing is useful for medical device management. It helps to avoid application performance latency issues. Healthcare data processing can be localized instead of relying on a single centralized data center. It also increases security, faster response, and more efficient data transmission.
Edge computing brings computation close to the point of data generation. In the retail industry, data is traditionally gathered in one central location. It is then analyzed and then acted upon. Retail stores can use edge computing to optimize locally and in real-time. Edge computing enables organizations to make data-driven decisions more quickly.
Edge computing will unlock profound capabilities for augmented reality. It will allow unprecedented levels of user engagement to take place on devices. AR experiences will also be more vivid, faster, and improved for users. As a result, many aspects of the business will be ready for change. Companies that take advantage of these advantages early on will have a better chance of succeeding.
Edge computing's local computation brings high levels of intelligence into the data analytics process. This allows you to generate a more precise set of data and speed up analysis and decision-making. Otherwise, intelligence in the cloud is less refined and applied later in the process. As a result, analytics take longer and are less focused than they could be.
To achieve smart manufacturing, edge computing is required. Performing near-real-time analysis allows for significant improvements in operational efficiency and margins. Line shutdowns can be avoided by detecting process anomalies that affect yield. Edge computing systems collect data to create smart tools such as digital twins.
In traditional cloud computing architectures, data accumulates in cloud storage. Because the majority of this data is useless, businesses waste a lot of money storing data they will almost never use. Edge computing can help by only sending useful data to the cloud for processing.
Edge computing can improve efficiency and reduce bandwidth usage. It can be beneficial for businesses with large and complex security systems. For example, data from motion-detection cameras is continuously streamed to a cloud server. But, with edge computing, each motion-detection camera would have its own internal computer. It would run the application and send footage to the cloud server as needed.
Edge computing is built into IoT, digital signage, and the IP fabric that is being assembled for retail and enterprise. The future of business depends on data mining and the collection of trillions of data points. Edge computing enables businesses to gather this data. It also helps them to use cloud services in conjunction with on-premise equipment and sensors. This helps to collect real-world data without requiring consent.
Edge computing eliminates the need to send collected data back to a central server. This translates to lower operating costs and fewer storage requirements.
Medical IoT devices at the edge can aid in the faster and earlier detection of abnormal changes. This would result in faster response times. It would also increase the ability of doctors to diagnose and treat patients. Storage and processing, as well as sensors, are becoming more affordable with wearables.
Edge computing is an ideal candidate to take advantage of technologies like the Internet of Things.
Edge computing's entire method is aimed at bringing data processing closer to the source of the data. To bring what will soon be essential features to enterprises that adopt it, such as real-time data processing and analytics. By allowing retailers to collect and analyze data at the source, they gain access to a plethora of new services and revenue streams. This includes opportunities to use technology offerings like AI and digital signage technologies.
Many retailers rely on their edge computing infrastructure to provide insights. It helps them to best leverage their technological resources. Thus they are able to promote the highest level of operational efficiency. This isn't limited to improving customer experiences or understanding trends.
Artificial intelligence and machine learning technologies can combine with edge computing. They aim to improve retailers' abilities and analyze customer and operational data in great detail.
Edge computing also helps to connect better and synchronize in-store, online, and mobile services. It also improves operational efficiency while also enabling unique and bespoke customer experiences. It also helps to open up new revenue streams on new and future platforms.
There is a continued expansion of the Internet of Things. Also, the increased connectivity with systems and technologies has opened up many ways to connect and engage with customers. Retailers can also tailor their advertising and marketing efforts to match their customers' tastes. They can also recommend new products or offers based on their previous purchasing decisions. By using edge computing, they can gather and analyze data such as purchase history.
Edge computing is also assisting retailers in gaining insights into how customers are likely to behave in-store and online. It does so by analyzing data collected when they use physical or online services.
This data can help create predictive models of future behavior. Also, taking into account what drives decision-making with the help of AI or automation technologies. Understanding how customers react to certain trends allows retailers to tailor the shopping experiences to their preferences.
IoT and edge computing can help to understand data better. It also improves customer experiences as well as operational efficiency. Edge computing systems are also helpful to new security and surveillance technologies. It also provides improved security and surveillance capabilities to cameras and sensors.
Multi-functional motion sensors help to determine which areas of the store receive the most human traffic. They also increase customer and employee security. Edge computing can improve biometric security. Thus allowing better access for employees while also preventing unauthorized access.
Retailers can make the most of edge computing to give their customers the best experience possible. As a result, edge computing is one of the most effective ways to improve the business and reintroduce customers to retail stores.
Edge computing can also help revitalize retail shopping. It can do so by allowing online shopping apps and in-store devices to communicate with one another. We may see more advancement in technology and its application. This would be a result of the introduction of various computing solutions. The initiative to use technology to provide the best possible experience for customers sets the standard for how other businesses should proceed. | <urn:uuid:37692709-9d22-46b9-bb71-023881d9d736> | CC-MAIN-2024-38 | https://www.knowledgenile.com/blogs/edge-computing-can-visualize-businesses-in-real-time | 2024-09-15T09:17:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00087.warc.gz | en | 0.931627 | 1,555 | 2.6875 | 3 |
When a company takes on a new employee, managers may need to give that individual access to certain systems, features, apps or accounts. This is known as provisioning, and deprovisioning removes that access when the employee leaves the organization or changes roles. What do you need to know about deprovisioning?
Definition and Importance of Deprovisioning in IT
Deprovisioning isn’t just about disabling a login; it encompasses the thorough purge and retraction of all access privileges spanning the breadth of an enterprise’s digital landscape. Its importance in IT cannot be overstated, especially in an era where data breaches are not just common but highly detrimental to both finances and reputations.
The Role of Deprovisioning in Organizational Security
The efficacy of an organization’s security hinges on how well it can control who has access to what data and when. That’s where deprovisioning proves indispensable. Beyond the obvious security imperatives, it underpins regulatory compliance, ensuring that firms adhere to strict privacy and data protection laws that abound in today’s global market.
How Provisioning Works
Provisioning can take place at one of four different levels:
- Network provisioning establishes a network that can be connected to devices and servers and accessed by the users. This is the approach phone companies use when they offer wireless solutions to their customers.
- Server provisioning sets up a server within the network (such as physical hardware in a data centre) that can then be used to connect networks and storage.
- With application provisioning, an admin manages the infrastructure an application depends on to optimize its performance across different environments.
- User provisioning gives certain rights and permissions to individual people so that they can access systems, resources, files, networks or apps.
At any of these levels, the company will occasionally need to remove access through the deprovisioning process, most often at the user level. Every company needs to get this right, as any errors can have significant security implications.
How Deprovisioning Works
As an organization grows more complex, so does provisioning: a single individual may hold access to a large number of devices and extensive access rights. Until recently, deprovisioning was costly and labour-intensive, with the HR team going back and forth with the IT department to ensure that all access had been revoked.
However, deprovisioning can now be automated through identity and access management (IAM) tools. These integrate with the company directory, so they can act as soon as an employee changes department or leaves the organization.
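Conceptually, what an IAM tool does at that moment can be sketched with a small in-memory stand-in for the company directory. The `Directory` class and its entitlements below are illustrative only, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    username: str
    active: bool = True
    entitlements: set = field(default_factory=set)

class Directory:
    """In-memory stand-in for a company directory (e.g. LDAP or Azure AD)."""
    def __init__(self):
        self.users = {}

    def add_user(self, username, entitlements):
        self.users[username] = User(username, True, set(entitlements))

    def deprovision(self, username):
        """Disable the account and revoke every entitlement in one step,
        returning what was removed as an audit trail."""
        user = self.users[username]
        revoked = sorted(user.entitlements)
        user.entitlements.clear()
        user.active = False
        return revoked

directory = Directory()
directory.add_user("jdoe", {"vpn", "email", "crm"})
print(directory.deprovision("jdoe"))   # ['crm', 'email', 'vpn']
print(directory.users["jdoe"].active)  # False
```

The key design point is that revocation is a single operation over *all* entitlements, so nothing is left behind for a manual follow-up to miss.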
What Are the Key Benefits of Deprovisioning?
These are some key benefits associated with user deprovisioning:
- It will be a lot easier to offboard employees. The system can quickly identify their usernames, roles and profiles and view any assigned access permissions and user accounts. All such permissions and access can then be terminated, no matter how complex the various entitlement rules may be.
- The systems can use HR-driven identity management (IM) tools that automatically prevent former employees from having any further online access. These tools can eliminate the chance of "zombie" accounts within the system. Otherwise, there would always be a risk that such accounts sit idle, presenting a growing and unnecessary security risk.
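To make the "zombie account" risk concrete, here is a sketch of how such accounts might be detected from two feeds mentioned above: the directory's login records and HR's current roster. The data and the 90-day threshold are illustrative assumptions:

```python
from datetime import date, timedelta

def find_zombie_accounts(accounts, current_employees, today, idle_days=90):
    """Flag accounts whose owner has left, or that have sat idle too long.

    `accounts` maps username -> last-login date (from the directory);
    `current_employees` is the set of usernames HR still lists as active.
    """
    zombies = []
    for username, last_login in sorted(accounts.items()):
        if username not in current_employees:
            zombies.append((username, "owner no longer employed"))
        elif (today - last_login) > timedelta(days=idle_days):
            zombies.append((username, "idle past threshold"))
    return zombies

accounts = {
    "alice": date(2024, 5, 1),
    "bob": date(2024, 1, 2),     # long idle
    "carol": date(2024, 5, 10),  # has left the company
}
staff = {"alice", "bob"}
print(find_zombie_accounts(accounts, staff, today=date(2024, 5, 15)))
# [('bob', 'idle past threshold'), ('carol', 'owner no longer employed')]
```

In practice an IAM tool runs this comparison continuously rather than on demand, but the logic is the same: any account the roster cannot vouch for is a candidate for deprovisioning.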
The Deprovisioning Process Explained
For any organization that takes security seriously, deprovisioning is a multi-layered process requiring meticulous attention to detail. It goes beyond simply deactivating a user account; it’s an orchestration of steps that, when performed collectively and correctly, seal any entry points that could potentially lead to security issues.
Steps Involved in Effective Deprovisioning
Effective deprovisioning walks through several stages, starting with the identification of accounts that need deactivation. This involves real-time communication with the HR department to receive prompt updates on employee status changes. Following identification, the user’s access to email, intranet, and various systems must be revoked. Company assets like keycards, laptops, or phones should be retrieved, and any proprietary or sensitive information in their possession must be either returned or securely destroyed.
To effectively deprovision access in an organization, follow these steps:
1. Identify Access Points
- Description: Begin by identifying all access points that an employee may have. This includes all systems, applications, databases, network resources, and physical access.
- Key Actions: Create a comprehensive list of all access points, detailing what each one entails and which employees currently have access.
2. Inventory User Accounts and Access Rights
- Description: Compile an inventory of all user accounts and their corresponding access rights.
- Key Actions: Use tools to audit current user permissions and roles. Ensure this inventory is updated regularly to reflect any changes.
3. Develop a Deprovisioning Policy
- Description: Establish a clear, documented deprovisioning policy that outlines the process and responsibilities.
- Key Actions: Define the steps for deprovisioning, assign responsibilities, and ensure all stakeholders are aware of their roles.
4. Implement Access Recertification
- Description: Regularly review and certify access rights to ensure they are still necessary and compliant with policies.
- Key Actions: Schedule periodic audits to recertify access rights. Use governance tools to streamline the process.
5. Utilize Identity and Access Management (IAM) Tools
- Description: Leverage IAM tools to automate the deprovisioning process.
- Key Actions: Implement IAM solutions like Azure Active Directory to manage and automate user access and deprovisioning.
6. Integrate with HR Systems
- Description: Ensure that your IAM tools are integrated with HR systems to automatically trigger deprovisioning when an employee leaves or changes roles.
- Key Actions: Set up integrations between HR and IT systems to automatically update access rights based on employment status.
7. Automate the Deprovisioning Process
- Description: Automate as much of the deprovisioning process as possible to reduce human error and increase efficiency.
- Key Actions: Configure IAM tools to automatically revoke access when triggered by changes in the HR system.
8. Terminate Access to All Systems and Accounts
- Description: Revoke access to all identified systems and accounts immediately when an employee leaves.
- Key Actions: Ensure that all user accounts are disabled or deleted across all systems. Remove access badges and any physical access privileges.
9. Monitor and Verify
- Description: Monitor the deprovisioning process to ensure it has been completed correctly.
- Key Actions: Use auditing tools to verify that access has been completely revoked. Conduct follow-up checks to ensure no residual access remains.
10. Update Documentation and Access Inventories
- Description: Update all relevant documentation and access inventories to reflect the changes made.
- Key Actions: Maintain accurate records of deprovisioned accounts and access points. Ensure this information is readily available for future audits.
11. Review and Improve the Process
- Description: Regularly review the deprovisioning process to identify areas for improvement.
- Key Actions: Gather feedback from stakeholders, analyze any incidents related to deprovisioning, and update the process accordingly.
12. Train Employees on Deprovisioning Procedures
- Description: Ensure that all employees involved in the deprovisioning process are properly trained.
- Key Actions: Conduct regular training sessions and provide clear documentation on deprovisioning procedures.
Automation and Manual Efforts in Deprovisioning
While automation stands as the most efficient avenue for deprovisioning in many modern systems, manual intervention can’t be entirely discounted, especially in complex IT ecosystems. Automated systems excel in routine, well-defined deprovisioning tasks, doing what humans do but faster and without fatigue. Conversely, manual efforts are indispensable when situations demand a nuanced approach – for instance, when dealing with legacy systems that don’t interface well with modern IAM solutions.
Fun Fact: The first IAM solutions in history were manual, with an actual person overseeing user rights management. Now, software can do most of the grunt work!
However automated the process may be, it’s crucial to maintain administrative oversight to account for exceptions and to manage unforeseen complexities.
The Functions of Deprovisioning
The axiom ‘With great power comes great responsibility’ rings true when managing user privileges in an organizational context. Deprovisioning isn’t merely a subtractive function; it’s a gatekeeping strategy that ensures that only the right individuals have the right access at the right time.
Access Control and Permissions Management
At the heart of deprovisioning lies the need for robust access control and permissions management. These functions ensure that users can only interact with IT systems in ways that are necessary for their roles. When an employee exits the company, it’s vital to promptly revoke their permissions to stave off the risks of unauthorized access. By tightly managing permissions, organizations can preserve the integrity of their data and systems and reduce the risk of internal threats.
Key Takeaway: Tight permissions tuning is a proactive measure against potential compromise—making it an indispensable part of the deprovisioning process.
Asset Recovery and Account Management
Deprovisioning also entails the retrieval of any physical assets. This administered collection not only inhibits a data breach but also safeguards company property. On the account management side, system administrators must ensure that the accounts of departing users are either rendered inactive or completely deleted from all systems. This task includes revoking email aliases, digital signatures, and access tokens, effectively locking out any potential for misuse.
Note: “In the world of security, there is no such thing as absolute safety. Deprovisioning is one essential step in minimizing risk.”
In addition, certain conditions may necessitate the transfer of responsibilities and access rights from the departing user to their successor or team, a task that demands precision to ensure business continuity.
The Impact of Deprovisioning on IT Security
Security in the IT realm is a dynamic battlefield where threats constantly evolve, and proactive defence strategies such as deprovisioning play a pivotal role. Effective deprovisioning ensures that the end of an employee’s tenure doesn’t morph into a security loophole. Let’s delve into how deprovisioning tangibly impacts IT security.
Preventing Unauthorized Access
Unauthorized access sits atop the list of threats that deprovisioning thwarts. Former employees, if left with access, can unintentionally become conduits for breaches, be it through negligence or malicious intent. Hence, a prompt deprovisioning process prevents these potential ties, leaving no room for exploitation.
Compliance with Standards and Regulations
Many industries are bound by stringent regulatory standards like GDPR, HIPAA, or PIPEDA that mandate diligent user access management. Non-compliance can lead to significant fines and legal consequences. Through thorough deprovisioning, organizations ensure that they are not only secure but also in compliance with these international standards.
Fun Fact: The GDPR can impose fines of up to 4% of annual global turnover for non-compliance, emphasizing the financial impact of lax deprovisioning practices.
A streamlined deprovisioning process demonstrates to regulators and stakeholders alike that a company prioritizes data protection as part of its operational ethos.
Deprovisioning Best Practices
It’s important to set up these systems correctly at the outset when provisioning and deprovisioning in cloud computing — if they are to work flawlessly going forward. Otherwise, these automated tools could create access rights that go against company policies and could even breach complex regulations. So, each company should consider an IT control process known as access recertification. They will then audit the access privileges for each user to confirm they are correct and still adhere to compliance regulations or internal policies. Such work can be performed manually or automatically using access governance software. Then, auditors and security personnel can verify that the rules and workflows within each provisioning system are correct, based on industry best practices.
Here are the best practices to ensure a smooth, secure process:
1. Automate the Process
- Why It Matters: Manual deprovisioning is prone to errors and delays, creating security gaps.
- Best Practice: Use Identity and Access Management (IAM) tools to automate deprovisioning. Automating ensures consistency and minimizes the risk of human error.
2. Integrate with HR Systems
- Why It Matters: Delays in communication between HR and IT can leave access open longer than necessary.
- Best Practice: Integrate IAM tools with your HR system. This enables real-time updates when employees leave or change roles, ensuring prompt access removal.
3. Conduct Regular Audits
- Why It Matters: Over time, access rights can become outdated or redundant, increasing security risks.
- Best Practice: Schedule regular audits of all user accounts and access permissions. Use these audits to verify that only current employees have the necessary access and that all former employees’ access is fully revoked.
4. Implement Access Recertification
- Why It Matters: Ongoing access needs change, and what was once necessary may no longer be.
- Best Practice: Periodically recertify access rights to ensure they are still required and appropriate. This helps maintain up-to-date and relevant access controls.
5. Standardize Policies Across the Organization
- Why It Matters: Inconsistent deprovisioning practices can lead to security vulnerabilities.
- Best Practice: Develop and enforce standardized deprovisioning policies across all departments. Ensure everyone follows the same procedures to maintain uniform security standards.
6. Use Role-Based Access Control (RBAC)
- Why It Matters: Managing individual user permissions can be complex and error-prone.
- Best Practice: Implement Role-Based Access Control (RBAC) to assign permissions based on roles rather than individuals. This simplifies the process and ensures consistent access management.
7. Monitor for Zombie Accounts
- Why It Matters: Dormant accounts are prime targets for unauthorized access.
- Best Practice: Regularly scan for and eliminate inactive or “zombie” accounts. Ensuring that no unused accounts linger in your system reduces potential security threats.
8. Ensure Comprehensive Coverage
- Why It Matters: Overlooking any access points can leave your organization vulnerable.
- Best Practice: Make sure deprovisioning covers all possible access points, including network access, application access, cloud services, and physical access. Leave no stone unturned.
9. Maintain Documentation
- Why It Matters: Proper documentation is essential for accountability and compliance.
- Best Practice: Keep detailed records of deprovisioning actions, including who performed them and when. This documentation can be crucial for audits and investigations.
10. Provide Training and Awareness
- Why It Matters: Everyone involved needs to understand the importance and procedures of deprovisioning.
- Best Practice: Regularly train employees on deprovisioning processes and the importance of maintaining security. Awareness and understanding can significantly improve compliance and effectiveness.
11. Use Multi-Factor Authentication (MFA)
- Why It Matters: Extra layers of security help prevent unauthorized access.
- Best Practice: Implement Multi-Factor Authentication (MFA) to ensure that even if credentials are compromised, unauthorized access is still prevented.
12. Set Up Alerts and Notifications
- Why It Matters: Immediate response to deprovisioning failures or anomalies is critical.
- Best Practice: Configure your IAM system to send alerts and notifications for any deprovisioning errors or unusual activity. Promptly addressing these issues can prevent potential breaches.
13. Review and Improve Regularly
- Why It Matters: The threat landscape is constantly evolving, and so should your deprovisioning process.
- Best Practice: Continuously review and refine your deprovisioning practices. Stay informed about new threats and technologies, and adapt your processes accordingly.
How Can Deprovisioning Make a Company More Secure?
If a company does not have an adequate user provisioning and deprovisioning system, they face significant risks. After all, the average cost of a data breach today is $148 per record or up to $7.91 million per breach. Needless to say, this could significantly impact a company’s performance for years ahead, and 60% of small businesses fail within six months of a major breach like this.
Deprovisioning Challenges: The Good, The Bad, and The Ugly
Navigating the world of deprovisioning isn’t always smooth sailing. Here are some of the most common challenges you might face, and how to tackle them head-on:
1. Human Error Hiccups
We’re all human, and humans make mistakes. Manual deprovisioning is a hotbed for oversight and errors.
- Challenge: Missing accounts, forgetting to revoke access, or incorrectly disabling permissions.
- Solution: Automate the process with IAM tools. Let the machines do the heavy lifting—they don’t forget!
2. Complex IT Environments
The more complex your IT environment, the trickier deprovisioning becomes.
- Challenge: Multiple systems, apps, and platforms mean more places to revoke access. It’s like playing whack-a-mole.
- Solution: Centralize your access management. Use a comprehensive IAM solution to keep everything in one place.
3. Delayed Updates
Timing is everything. Delays in updating access rights can leave you vulnerable.
- Challenge: Slow communication between HR and IT can result in lingering access after an employee leaves.
- Solution: Integrate your HR systems with IAM tools for real-time updates. Speed is your friend here.
4. Zombie Accounts
Accounts that should be dead but are still lurking around. Creepy, right?
- Challenge: These forgotten accounts are prime targets for unauthorized access and data breaches.
- Solution: Regular audits and recertification. Sweep your system for these digital zombies and eliminate them.
5. Inconsistent Policies
Not all departments are on the same page, leading to a patchwork of deprovisioning practices.
- Challenge: Inconsistent application of policies can create security gaps and compliance issues.
- Solution: Standardize your deprovisioning policies across the organization. Clear, consistent guidelines for everyone.
6. Regulatory Compliance
Keeping up with ever-changing regulations is a full-time job.
- Challenge: Failure to comply can result in hefty fines and damage to your reputation.
- Solution: Implement access governance software to ensure your deprovisioning process meets all regulatory requirements. Stay ahead of the curve.
7. User Resistance
Not everyone likes change. Employees might resist new deprovisioning protocols.
- Challenge: Pushback can slow down implementation and create loopholes.
- Solution: Educate and train your team on the importance of deprovisioning. Make it easy for them to follow procedures. A little pizza party might help, too.
8. Technical Glitches
Technology is great, but it’s not infallible. Bugs and glitches can disrupt the deprovisioning process.
- Challenge: Software failures can lead to incomplete or incorrect deprovisioning.
- Solution: Regularly update and maintain your IAM tools. Have a robust IT support system in place to troubleshoot issues quickly.
9. Lack of Visibility
You can’t manage what you can’t see. Lack of visibility into who has access to what is a big problem.
- Challenge: Without clear insight, you risk missing unauthorized access points.
- Solution: Use analytics and reporting tools within your IAM solution to maintain full visibility. Knowledge is power.
10. Coordination Between Departments
Deprovisioning often requires coordination between multiple departments—HR, IT, security, and more.
- Challenge: Miscommunication can result in incomplete deprovisioning.
- Solution: Foster cross-departmental collaboration. Regular meetings and a clear communication plan can help keep everyone on the same page.
Tools and Technologies for Deprovisioning
Tools and technologies specifically designed for deprovisioning are critical assets for organizations aiming to enhance their security posture and streamline the offboarding process.
Identity and Access Management (IAM) Systems
Identity and Access Management systems are pivotal in providing an automated and centralized approach to managing user identities and permissions. Organizations rely on IAM systems to control access rights across their IT environments, significantly reducing the effort and potential for error inherent in manual processes.
- Insights: Implementing IAM systems translates to a fortified security landscape where access is granted based on verified identities and predefined policies.
IAM systems play a leading role, helping to simplify and enforce consistent deprovisioning practices across the enterprise.
IAM systems Compared
Feature/Aspect | Staff Pick: Microsoft Azure Active Directory (Azure AD) |
Okta Identity Cloud | Ping Identity | IBM Security IGI | SailPoint IdentityIQ | ForgeRock Identity Platform |
Overview | Cloud-based IAM, integrated with Microsoft | Cloud-first IAM, user-friendly | Enterprise-focused IAM | Comprehensive IAM with governance | Leader in identity governance | Open-source, unified IAM |
Single Sign-On (SSO) | Yes | Yes | Yes | Yes | Yes | Yes |
Multi-Factor Authentication (MFA) | Yes | Yes | Yes | Yes | Yes | Yes |
Conditional Access | Yes | Yes | Yes | Yes | Yes | Yes |
Lifecycle Management | Yes | Yes | Yes | Yes | Yes | Yes |
API Access Management | Yes | Yes | Yes | Yes | Yes | Yes |
Adaptive Authentication | Yes | Yes | Yes | Yes | Yes | Yes |
Identity Governance | Moderate | Moderate | High | High | High | High |
Access Certifications | Moderate | Moderate | High | High | High | High |
Policy Management | Yes | Yes | Yes | Yes | Yes | Yes |
Risk Management | Yes | Yes | Yes | Yes | Yes | Yes |
Integration | Extensive, especially with Microsoft | Extensive | Extensive | Good, especially IBM tools | Extensive | Good, requires expertise |
Customization | Moderate | Moderate | High | High | High | Very High |
Ease of Use | Moderate | High | Moderate | Moderate | Moderate | Moderate |
Cost | High | Moderate to High | High | High | High | Moderate |
Scalability | High | High | High | High | High | High |
Community Support | Strong | Strong | Strong | Moderate | Strong | Strong |
Deployment Options | Cloud, hybrid | Cloud | Cloud, on-premises, hybrid | On-premises, hybrid | Cloud, on-premises, hybrid | Cloud, on-premises, hybrid |
Compliance Features | High | High | High | High | High | High |
Best For | Organizations using Microsoft ecosystems | Versatile, easy to implement | Large enterprises needing customization | Large organizations with complex governance needs | Organizations with stringent compliance requirements | Organizations needing high flexibility and customization |
Pros | Deep integration with Microsoft products, strong security features, scalable | User-friendly interface, wide integrations, strong support | High customizability, strong security and compliance | Strong governance, good integration with IBM security tools | Comprehensive analytics and reporting, scalable | Highly customizable, strong community support |
Cons | Complexity and higher cost | Can be expensive for smaller organizations, limited features in basic plan | Higher cost, steep learning curve | Complexity and cost, requires significant resources | Higher cost, resource-intensive implementation | Requires in-house expertise, higher initial setup complexity |
Automating Deprovisioning with Software Solutions
Software solutions specifically designed for deprovisioning can play a significant role in enhancing the efficiency and reliability of the process. These solutions integrate with various systems to ensure a quick and coordinated removal of user access while minimizing manual intervention.
Did You Know? Advanced deprovisioning solutions can track and report deprovisioning activities, providing invaluable documentation for audits and compliance checks.
By automating routine tasks, such software can free up IT staff to focus on more complex challenges, further enhancing security and operational efficiency.
Deprovisioning in Different Environments
Modern enterprises operate across diverse IT environments, and an effective deprovisioning strategy must account for their nuances. The differences between cloud and on-premise infrastructure, for example, necessitate tailored approaches to ensure that deprovisioning is thorough and effective in all contexts.
Deprovisioning in the Cloud
Cloud services add layers of complexity to the deprovisioning process due to their distributed nature and the shared responsibility model. Administrators must collaborate with service providers to ensure deprovisioning actions take effect across all cloud-based platforms and applications.
- Key Insight: Efficient cloud deprovisioning hinges on integration and communication between internal IT and cloud service providers.
Precise coordination is required to confirm that all instances of user access within the cloud environment are identified and addressed.
On-Premises vs. Hybrid Infrastructure Challenges
In contrast, on-premises environments often involve direct control over all assets, but may lack the agility of cloud services. Comprehending the interconnectedness of these assets is critical for full-spectrum deprovisioning. Hybrid environments combine these challenges, necessitating a balanced approach that can adapt to the fluidity of these mixed infrastructures.
- Important Note: Mastery of both on-premises and cloud deprovisioning is essential in hybrid environments to ensure no gaps are left in the security perimeter.
Organizations must be proficient in managing the distinct demands of each model to ensure a robust deprovisioning procedure is maintained.
The Future of Deprovisioning
In a constantly evolving digital landscape, the future of deprovisioning is set to include advancements in automation, artificial intelligence (AI), and machine learning to create more efficient and secure IT environments.
AI and Machine Learning in Deprovisioning
The adoption of AI and machine learning in deprovisioning offers a futuristic outlook on how organizations handle offboarding. These technologies are poised to predict and identify potential security loopholes by learning from patterns in data access and user behaviour.
- Future Forecast: As AI becomes more prevalent, it will anticipate users’ departure and automate many deprovisioning tasks, enhancing accuracy and speed.
Predictive algorithms could automatically generate deprovisioning tasks ahead of an employee’s departure, ensuring a seamless transition and further securing the IT infrastructure.
Enter Microsoft Azure Active Directory
Microsoft operates Azure Active Directory to store information about objects on a network and to make it easy for users to access. It’s a database with critical information about a specific environment, identifying assets and users and determining how individuals can access it. Within Active Directory is Azure Active Directory, a cloud-based identity and access management service.
Azure Active Directory allows the company’s employees to access various external resources, including thousands of SaaS applications. Crucially, it allows these employees to reach internal resources through the corporate network or cloud apps developed by the company.
Azure can automatically provision user identities and rules for applications. It can then maintain these user identities or remove them through deprovisioning when a status or role may change. It can handle these issues on-premises or in a virtual machine without the need for firewalls. It’ll also help synchronize data between systems so that identities are always correct across apps and systems.
Azure has additional benefits, including customization. This allows a company to use certain attribute mappings to define what user data moves between source and target systems. It can seamlessly deploy in any brownfield scenario, matching existing identities and precipitating easy integration, even if the user is already within the target system. Further, Azure will generate critical event alerts if needed, and the user can define custom alerts according to their specific business needs.
Softlanding Helps Organizations Like Yours Systemize Deprovisioning
To get more information about deprovisioning and how your company can automate the process through Azure Active Directory, get in touch with Softlanding. We are an IT company that provides managed and professional IT services and are a Microsoft Solutions partner.
We deploy and implement Microsoft Solutions from Azure to Microsoft 365 and Microsoft Teams.
- Contact our team here!
What is the difference between deprovisioning and offboarding?
Deprovisioning refers specifically to the process of revoking digital access privileges when an employee leaves a company, whereas offboarding is a broader term that includes deprovisioning as well as other steps of concluding an employee’s tenure, such as exit interviews and final pay processing.
How long should the deprovisioning process take?
Ideally, the deprovisioning process should be completed within 24 hours of an employee’s departure to minimize security risks. However, the complexity of the IT environment and the tools in use could affect this timeframe.
Can deprovisioning be reversed if it was done in error?
Yes, in most systems, deprovisioning actions can be reversed if they were performed in error. This usually involves reinstating the user account and associated permissions.
What are some common pitfalls in setting up a deprovisioning process?
Common pitfalls include a lack of clear policies, failure to promptly execute deprovisioning tasks, insufficient communication between departments, and overlooking the need for regular audits and updates to the process.
Who in an organization should be responsible for deprovisioning?
While specific tasks might be assigned to IT administrators or HR personnel, having a cross-functional team that includes IT, HR, legal, and security stakeholders ensures a comprehensive approach to deprovisioning, addressing all dimensions of access and compliance. | <urn:uuid:e42e1bb4-a719-4b9a-ac29-24e79b936862> | CC-MAIN-2024-38 | https://www.softlanding.ca/blog/what-is-deprovisioning/ | 2024-09-15T11:03:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00087.warc.gz | en | 0.886855 | 6,273 | 2.640625 | 3 |
The term "Budget Deficit" refers to a scenario in which the government's budget expenditure exceeds its budget receipts. Depending on the types of receipts and expenditure considered, a budget can show different kinds of deficits. The three forms of budget deficit are the revenue deficit, the fiscal deficit, and the primary deficit.
What is a Revenue Deficit?
When realized net income falls short of planned net income, a Revenue Deficit arises. This occurs when real revenue and/or actual expenditures do not match anticipated revenue and expenditures. This is the polar opposite of a revenue surplus, which happens when actual net income surpasses planned net income.
This indicates that the government's own profits are insufficient to fund its departments' day-to-day activities. When the government spends more than it makes, it is forced to borrow money from other sources to make ends meet.
So we understood these points about Revenue deficit:
1. It is a measure of the government's dissaving: the government is using up the savings of other sectors of the economy to finance a portion of its own expenditure.
2. It demonstrates that the government would need to borrow money not only to finance its investment but also to meet its consumption needs.
3. It accumulates debt and interest liabilities, forcing the government to cut spending.
Hence, Revenue Deficit refers to a shortfall in revenue. Budgeting is traditionally divided into two accounts: revenue account and capital account.
Revenue Deficit Formula
Revenue Deficit = Revenue Expenditure − Revenue Receipts
Revenue Deficit = Total Revenue Expenditure − (Tax Revenue + Non-Tax Revenue)
The term "revenue deficit" refers to the gap between the government's revenue expenditure and its revenue receipts. Revenue receipts are those receipts that neither create a liability nor cause a loss of assets. They are subdivided into two categories:
1. Tax Revenue (direct taxes and indirect taxes)
2. Non-Tax Revenue
Revenue expenditure refers to spending that neither creates assets nor reduces liabilities. It is further broken down into two categories:
1. Plan revenue expenditure
2. Non-plan revenue expenditure
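Putting the formula and the receipts breakdown together, the calculation can be sketched in a few lines. All figures below are invented purely for illustration; they are not drawn from any actual budget.

```python
# Hypothetical budget figures (in crore rupees) -- illustrative only.
tax_revenue = 1_500          # direct + indirect taxes
non_tax_revenue = 300        # fees, fines, dividends, etc.
revenue_expenditure = 2_100  # salaries, interest payments, subsidies

# Revenue receipts = tax revenue + non-tax revenue
revenue_receipts = tax_revenue + non_tax_revenue

# Revenue Deficit = Revenue Expenditure - Revenue Receipts
revenue_deficit = revenue_expenditure - revenue_receipts

print(f"Revenue receipts: {revenue_receipts}")  # 1800
print(f"Revenue deficit:  {revenue_deficit}")   # 300
```

A positive result indicates a deficit; a negative result would indicate a revenue surplus.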
What is a revenue account, exactly?
The Revenue Account refers to the revenue on the current account, which includes both tax and non-tax income from the government. The money earned by the government's various taxes, such as income tax, gift tax, customs duty, GST, and so on, is referred to as tax receipts.
Administrative revenue, commercial revenue from public sector undertakings, gifts, and contributions, penalties, money from the sale of spectrum, escheat, and fees such as court fees, fines, and so on are all examples of nontax revenues.
Expenditure on the Revenue Account is a type of government spending that is used to buy goods and services for consumption within a certain fiscal year. As a result, this spending is referred to as consumption expenditure. Salaries, interest payments, and subsidies are all examples of this.
This is how a revenue deficit occurs.
Understanding Revenue Deficit
The difference between the expected and actual quantity of income is measured by a revenue deficit, which is not to be confused with a fiscal deficit. A revenue deficit indicates that a company's or government's income is insufficient to support its basic operations. When this happens, it may borrow money or sell existing assets to make up for the drop in revenue.
To close a revenue gap, a government might either raise taxes or decrease spending. Similarly, a firm with a revenue deficit might increase its profitability by reducing variable expenses like materials and labor. Because most fixed expenses are determined by contracts, such as a building lease, they are more difficult to alter.
How Revenue Deficit Differs from Fiscal Deficit
The revenue deficit in the current budget reflects the gap between the revenue receipts and the revenue expenditure proposed in the budget: when revenue spending exceeds revenue collection, the result is a revenue deficit. This account covers all transactions that affect the government's current revenue and spending.
Fiscal deficit, on the other hand, is a metric that indicates how reliant the government is on borrowing. The fiscal deficit represents the government's projected borrowings; the larger the fiscal deficit, the higher the government borrowings.
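The distinction can be made concrete with a small numerical sketch. The fiscal-deficit formula used here (total expenditure minus total receipts excluding borrowings) is the standard textbook definition; every figure is invented for illustration.

```python
# Hypothetical figures (in billions) -- illustrative only.
revenue_receipts = 180
revenue_expenditure = 210
capital_receipts_excl_borrowing = 20  # e.g. disinvestment proceeds
capital_expenditure = 50

# Revenue deficit: shortfall on the revenue account alone
revenue_deficit = revenue_expenditure - revenue_receipts

# Fiscal deficit: total expenditure minus total receipts
# (excluding borrowings) -- equal to required government borrowing
total_expenditure = revenue_expenditure + capital_expenditure
total_receipts = revenue_receipts + capital_receipts_excl_borrowing
fiscal_deficit = total_expenditure - total_receipts

print(f"Revenue deficit: {revenue_deficit}")  # 30
print(f"Fiscal deficit:  {fiscal_deficit}")   # 60
```

The fiscal deficit is larger here because it also counts the shortfall on the capital account, which is why it directly indicates how much the government must borrow.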
Consequences of a Revenue Deficit
The following are the repercussions of a revenue shortfall:
The government must borrow or sell assets to meet its revenue expenditure. In this way it can cover part of the revenue deficit.
Loss of Social Welfare
The government may be forced to cut spending on a variety of welfare programs and public subsidies, resulting in a loss of social welfare.
Implications of Revenue Deficit
Increased liabilities and decreased creditworthiness
The government may need to borrow money from the general public, the RBI, the World Bank, and other sources, which not only increases liabilities but also decreases creditworthiness.
The government has the option of disinvesting, which entails selling a stake in a public sector business to either foreign firms or private people.
The government uses capital receipts to meet consumption expenditure. Because this borrowed money funds consumption rather than investment, it can result in inflation.
Example of Revenue Deficit
Company XYZ forecasted $100 million in revenue for 2018 and $80 million in expenses, resulting in a predicted net income of $20 million. The company's actual revenue was $85 million, while its expenditures were $83 million, resulting in a realized net income of $2 million at the end of the year. This resulted in a revenue deficit of $18 million, the gap between the predicted and realized net income.
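The arithmetic of this example can be checked in a few lines (figures in millions of US dollars, taken directly from the text above):

```python
# Figures in millions of USD, from the Company XYZ example.
predicted_revenue, predicted_expenses = 100, 80
actual_revenue, actual_expenses = 85, 83

predicted_net = predicted_revenue - predicted_expenses  # forecast net income: 20
actual_net = actual_revenue - actual_expenses           # realized net income: 2
deficit = predicted_net - actual_net                    # shortfall: 18

print(f"Predicted net income: ${predicted_net}M")
print(f"Actual net income:    ${actual_net}M")
print(f"Deficit:              ${deficit}M")
```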
Both the expenditures and revenue estimates were wrong, which might have a detrimental impact on future operations and cash flows. If the topic of this example were a government, financing for necessary public expenditures like roads and schools may be compromised.
The firm may avoid future revenue deficits by identifying and implementing cost-cutting initiatives. It can look into more cost-effective business models, such as identifying suppliers that can provide goods at a reduced cost or vertically integrating operations throughout its supply chain. The firm might also spend on employee training to improve productivity.
Bottom Line: How is the Revenue Deficit resolved?
The term "revenue deficit" refers to a shortfall that can be made up by capital receipts, such as borrowings. But, to be more specific, it is a future payback burden that does not correspond to any investment.
To close a revenue gap, the government can either raise taxes or decrease spending. If not addressed, a revenue deficit can hurt the government's credit rating, and because there is not enough money to cover the gap, the government's projected expenditures may be jeopardized.
The government should make every effort to save costs and eliminate needless spending. The government may prevent a revenue deficit by identifying and implementing cost-cutting initiatives.
Because the government must replace the uncovered difference with capital receipts, either via borrowing or the sale of its assets, the revenue deficit reflects dissaving on the government account. | <urn:uuid:fe5c27ac-d495-4529-827e-5b1be43cea14> | CC-MAIN-2024-38 | https://www.analyticssteps.com/blogs/all-about-revenue-deficit | 2024-09-16T13:45:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00887.warc.gz | en | 0.950494 | 1,600 | 4.28125 | 4 |
Multi-IMSI technology is essential for cellular IoT providers that plan on deploying globally or creating mobile applications. Multi-IMSI is an abbreviation for Multiple International Mobile Subscriber Identities. An IMSI is a key subscriber identifier stored in the Subscriber Identity Module (SIM) on a SIM card. A single IMSI allows a device to connect to only a limited set of networks; holding multiple IMSIs enables subscribers to switch carriers as needed and connect to significantly more networks.
What is an IMSI?
To understand what a Multi-IMSI is, you first need to know what IMSI is.
An International Mobile Subscriber Identity (IMSI) is a unique 15-digit number that identifies every mobile network user globally. The IMSI is divided into two parts. The first part is a six-digit or five-digit number based on the North American or European standards, respectively. The initial set of numbers identifies the mobile network operator in a specific country to which the user is subscribed. The second part of the IMSI number is allocated by the network operator to uniquely identify the subscriber.
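As a sketch, the two parts can be split mechanically once you know how many digits the operator code uses: three after the three-digit country code under the North American convention (six digits in total for the first part), two under the European one (five in total). The sample IMSI below is invented for illustration, and a real parser would need a lookup table of country codes to know the operator-code length:

```python
def split_imsi(imsi: str, operator_digits: int = 3) -> dict:
    """Split a 15-digit IMSI into country code, operator code, and subscriber id.

    operator_digits is 3 under the North American convention and 2 under the
    European one; real parsers determine this from a lookup table.
    """
    if len(imsi) != 15 or not imsi.isdigit():
        raise ValueError("an IMSI is exactly 15 decimal digits")
    return {
        "mcc": imsi[:3],                     # mobile country code
        "mnc": imsi[3:3 + operator_digits],  # mobile network code (operator)
        "msin": imsi[3 + operator_digits:],  # subscriber identification number
    }

# Invented example IMSI, not a real subscriber:
parts = split_imsi("310150123456789")
print(parts)  # {'mcc': '310', 'mnc': '150', 'msin': '123456789'}
```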
The IMSI number is not always printed on the SIM card, and you would have to look deep in the settings for it to show on your device. The IMSI is stored in the Subscriber Identity Module (SIM) inside mobile devices and is sent by the mobile device to the network with which the SIM is associated. When a mobile device is powered on, it transmits its IMSI number to the associated cellular network to perform a location update – this is referred to as the attach procedure. This update is also carried out when a handover occurs to another base station (cell tower). The update is received by the corresponding visitor location register, which updates the home location register with the new location. When the mobile device is powered off, an IMSI detach procedure occurs, indicating that the device is no longer connected.
What is a Multi-IMSI SIM?
Now that you know what an IMSI is, a multi-IMSI SIM is simply a SIM card that stores more than one IMSI, which means the card can decide which IMSI to use in any given country. Having multiple IMSIs can get you better data costs and coverage depending on the country of use: when the SIM card enters a different country, it automatically selects the best IMSI.
Benefits of a Multi-IMSI
You don’t notice any differences looking at a multi-IMSI SIM, but there are plenty of benefits.
1. Better coverage. A single network provider might not have agreements around the world with all the operators and, even if they do, they might not offer the best rates.
2. National Roaming. With a standard SIM, you cannot access multiple networks in the country where the SIM card originates. You might have noticed that your regular phone SIM card is tied to your provider and cannot use other networks in your area – Multi-IMSI fixes that.
3. Better rates. You can get better rates both locally and globally based on the provider you choose to use.
4. Worldwide Access. A multi-IMSI IoT SIM carries multiple IMSIs that, when needed, provide access to different countries and continents at a better rate. This means that a single SIM card is enough for global deployment.
As some IoT devices are moving constantly (such as GPS trackers), having good rates and coverage is important.
eUICC and eSIM
eUICC is a newer technology that offers the ability to change service providers over the air (OTA). That means you do not have to switch operators by physically replacing the card: carrier profiles can be provisioned remotely and changed at any time depending on your needs.
There is some confusion about the difference between eUICC and eSIM, since some people use the terms eSIM and eUICC (Embedded Universal Integrated Circuit Card) interchangeably. The eUICC is the programmable part of the SIM, while an eSIM is embedded directly into a device without the ability to remove it. An eSIM can support over-the-air provisioning, but it is not something you could use in a regular SIM card slot. eUICC technology, by contrast, can be found in both removable SIM cards (just like any regular SIM card) and non-removable (embedded) ones.
eUICC vs. Multi-IMSI
Both eUICC and Multi-IMSI are great technologies that give you options for changing coverage, pricing, and mobile network operators (MNOs). eUICC seems very beneficial, but it can be tricky for non-enterprise customers. With Multi-IMSI cards, IMSIs can be written into the SIM card during the manufacturing process and can also be changed over the air.
However, the ability to change these profiles over the air is not as straightforward as one might expect. You can have profiles tailored for one continent, such as North America, and if you decide to move your device to Europe, you can switch to profiles better tailored for European use. In order to swap the SIM profile, however, the selected carrier needs to support the provisioning as well. Often this depends on the manufacturer of the SIM cards: these manufacturers provide the software to perform the swap, and it is proprietary to each manufacturer. Support for this feature isn't free of charge, so expect additional fees both for the more complex SIM and for the licensing needed to support eUICC. The technology gives flexibility, meaning you don't always have to swap the card out; you can simply update it. The costs and processes make it a little harder to get started with, but it's a great option for large enterprise customers.
Technologies for Flexibility
To decide which product is better for you, evaluate where you are going to be using your IoT device and how important it is for you to switch profiles to restricted roaming markets such as Turkey and Brazil. | <urn:uuid:a6cca01e-bac6-404d-b069-6efa94e11d27> | CC-MAIN-2024-38 | https://www.iotforall.com/what-is-a-multi-imsi-sim | 2024-09-16T12:14:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00887.warc.gz | en | 0.953098 | 1,248 | 2.90625 | 3 |
Conventional logic about ransomware says that when local governments, agencies, and large companies pay a ransom, it encourages more criminals to launch more attacks and collect more money. But according to experts, the calculus is unfortunately not that simple: for some victims, refusing to pay has cost far more than the ransom. According to an Axios news article,
“By the numbers: Riviera Beach and Lake City, Florida, paid a combined $1.1 million in ransom over about a week in June.
- Meanwhile, Atlanta spent $17 million restoring systems rather than pay a $50,000 ransom last year.
- Baltimore is likely to spend $10 million restoring its own systems after refusing to pay a $75,000 ransom this year. The disruption to its city services may cost another $8 million.
The intrigue: For some cities, the best response might be to pay the ransom, then use the millions of dollars that would have been spent on recovery to strengthen cyber defenses before the next attack.
- “If you don’t learn from the past, you will end up being ransomed again,” said Deborah Golden, the new head of Deloitte’s cyber consultancy.
- Whether a city pays, doesn’t pay or has yet to be attacked, prevention will often save money.” | <urn:uuid:2aa2c403-cdb0-4722-95dc-8fc708f361f3> | CC-MAIN-2024-38 | https://www.getadvantage.com/ransomware-might-pay/ | 2024-09-17T19:34:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00787.warc.gz | en | 0.95936 | 256 | 2.59375 | 3 |
Honeypot techniques have been used in cybersecurity for decades to catch cyber attackers or malicious actors attempting to gain unauthorised access to a company network. The principle is incredibly simple: rather than trying to hunt attackers outright, IT teams can prepare an enticing area within the network and wait for the bad actors to come to them. Though far from new, this strategy remains high on the threat intelligence agenda, with bodies including AWS and the National Grid recently investing in such methods.
In today’s world of skilled and persistent attackers, honeypots can be a key tool to solving a difficult problem: how to detect an advanced attacker on a busy network?
Once inside a network, modern attackers hide in the noise by mimicking normal user behaviour, for example by stealing and abusing credentials, which makes them increasingly hard to detect. With honeypots, organisations create something that appears to be a legitimate asset, so any attempt to access it is instantly suspicious. Rather than trying to swat the fly from afar, it gets itself trapped in the sticky honeypot.
Honeypots for research
Researchers have been making effective use of honeypots. By creating fake computers, fake services or fake people, it is possible to see what kind of malicious activity is occurring on the internet. Particularly interesting examples of honeypot research include Kippo, which pretends to be an SSH service and lets an attacker in after a number of password attempts in order to study what attackers do once on a system. Trend Micro created a number of SCADA/ICS honeypots that appeared to be industrial networks, and found that attackers quickly compromised these services, with ominous implications for people running real internet-connected SCADA systems.
Using honeypots gives researchers deeper insight into what hackers are looking for, attempting to do and compromise within a corporate network. The research can then be used to help companies focus their defensive efforts.
Honeypots in business
Business use of honeypots is often limited. Where businesses use honeypots as part of their defences, they typically rely on traditional honeypots, i.e. a non-existent computer on the network or perhaps an entire network range, and then alert on any attempt to connect to the computer or range.
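The traditional idea is simple enough to sketch in a few lines. The listener below binds a port on which no legitimate service is advertised, so every connection attempt is suspicious by construction (the port number and alert format are illustrative, not from any particular product):

```python
import socket
from datetime import datetime, timezone

def honeypot_listen(host="127.0.0.1", port=2222, max_conns=1):
    """Accept up to max_conns connections and return one alert per attempt."""
    alerts = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        for _ in range(max_conns):
            conn, addr = srv.accept()  # blocks until someone connects
            with conn:
                alerts.append(
                    f"{datetime.now(timezone.utc).isoformat()} "
                    f"ALERT: connection attempt from {addr[0]}:{addr[1]}"
                )
    return alerts
```

In a real deployment the alert would be pushed to the security team's alerting pipeline rather than returned, and any device connecting to the decoy port would be investigated immediately.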
This can be effective, for instance by identifying an attacker who has gained access to the internal network and is port-scanning the entire range. However, many advanced attackers do not resort to ‘noisy’ techniques such as port scanning once on the internal network; instead they often rely on subtle lateral movement, such as obtaining network maps and connecting directly to servers of interest. Catching such advanced attackers requires more sophisticated honeypots.
Attackers will often attempt to obtain administrative credentials to aid their movement around networks. They can do this by a number of means, from password-guessing attacks against administrative accounts, to more advanced attacks that allow them to carry out actions with the permissions of anyone using the computer they are accessing. Organisations can therefore create ‘honeytokens’, which are administrative accounts where an attempt to use the account alerts security staff to the presence of an attacker.
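The detection side of a honeytoken can be as simple as scanning authentication logs for any mention of the decoy account, since the account has no legitimate use. The account name and log format below are invented for illustration:

```python
# Decoy administrative accounts that no one should ever legitimately use.
HONEYTOKEN_ACCOUNTS = {"svc_backup_admin"}

def scan_auth_log(lines):
    """Return every log line that mentions a honeytoken account."""
    return [
        line.strip()
        for line in lines
        if any(account in line for account in HONEYTOKEN_ACCOUNTS)
    ]

log = [
    "2024-01-01T10:00:00Z sshd login ok user=alice src=10.0.0.5",
    "2024-01-01T10:05:00Z sshd login ok user=svc_backup_admin src=10.0.0.7",
]
print(scan_auth_log(log))  # flags the second line only: time to investigate 10.0.0.7
```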
Any important asset that organisations fear an attacker may compromise as either a step towards their goal or as the goal itself can be the basis for a honeypot. Successful honeypots include fake files that an attacker might try to access, with attempted access to those files triggering an alert. Organisations have found that fake file servers that might lure an attacker, such as one described in network diagrams as ‘Backup Fileshare’ can also be successful.
A controversial form of honeypot is the creation of a fake person of interest. More targeted attackers will identify key individuals in an organisation and then target them directly with spear phishing emails. Once in a network, many attacking groups will attempt to steal the inboxes of people they consider important.
By creating a fake email account, it may be possible to gather intelligence relating to the malware being sent against real, high value employees. More importantly, any attempt to access the honeypot email account will alert the security team to the presence of an attacker stealing inbox contents.
However, there are some points to be careful of when using honeypots. Firstly, they can require time and resource to implement, both of which may already be limited in the organisation. Honeypots can also take time to integrate into the organisation’s alerting infrastructure.
Certain types of honeypot, such as files and fileshares, can often be visited by curious employees, and it can take only a small number of such benign triggers for the alert to lose its value in the eyes of the security team. The creation of fake email accounts or people can also be difficult, particularly if that person is listed externally: there could be regulatory issues arising from publicly listing a fake high-value employee, which would be necessary for the honeypot to be effective.
Five tips for a successful honeypot:
- Base the honeypot on a real asset you’re concerned might be compromised.
- Reference the honeypot anywhere you reference real assets.
- Make sure honeypots are known to only those few running them.
- Have a process for rapidly investigating alerts generated by the honeypot.
- Have a process for investigating real assets should honeypot alerts indicate an attacker.
Honeypots can be a highly effective and efficient way of alerting security teams to attackers, even those who are more advanced. However, to be effective requires the honeypot to be well implemented, maintained, and monitored.
Sourced from David Chismon, senior researcher at MWR InfoSecurity
In today's interconnected world, where digital technology permeates every aspect of our lives, the importance of cyber resiliency cannot be overstated. Cyber threats are constantly evolving, and organizations, governments, and individuals are all vulnerable to the potentially devastating consequences of cyberattacks. In this blog post, we will explore the concept of cyber resiliency, why it's essential, and how you can build and strengthen it to protect your digital future.
Cyber resiliency is the capacity to withstand, adapt to, and recover from cyberattacks, disruptions, or failures. It is a proactive and comprehensive approach to cybersecurity that goes beyond traditional measures such as firewalls and antivirus software. Instead, it focuses on preparing for and mitigating the impact of cyber incidents to ensure that an organization can continue its operations and protect its assets even in the face of adversity.
The Evolving Cyber Threat Landscape: Cyber threats are becoming increasingly sophisticated and pervasive. Attackers use a wide range of tactics, from phishing and malware to ransomware and zero-day exploits. Without cyber resiliency, organizations are at risk of significant financial losses, reputation damage, and legal consequences.
High Stakes: The reliance on digital systems and data is higher than ever before. Critical infrastructure, healthcare, finance, and government operations all rely heavily on technology. Any disruption or compromise of these systems can have far-reaching consequences, including threats to public safety.
Regulatory Compliance: Many industries are subject to strict regulatory requirements regarding cybersecurity and data protection. Failing to meet these standards can result in hefty fines and legal action. Cyber resiliency helps organizations meet these compliance requirements.
1. Risk Assessment: Begin by identifying your organization's vulnerabilities and assessing the potential impact of various cyber threats. Understanding your risk profile is crucial for developing an effective cyber resiliency strategy.
2. Incident Response Plan: Develop a comprehensive incident response plan that outlines how your organization will react in the event of a cyber incident. This plan should cover communication protocols, containment procedures, and recovery strategies.
3. Employee Training: Invest in cybersecurity training for your employees. Human error is a leading cause of security breaches, so educating your staff on best practices and potential threats is essential.
4. Continuous Monitoring: Implement robust monitoring tools and practices to detect threats in real-time. Early detection can prevent small incidents from turning into major breaches.
5. Data Backup and Recovery: Regularly back up your data, and test your recovery processes to ensure they work as intended. Ransomware attacks can be less damaging if you can quickly restore your systems from backup.
6. Cyber Insurance: Consider purchasing cyber insurance to mitigate the financial impact of a cyber incident. This can help cover costs related to data recovery, legal fees, and public relations efforts.
7. Collaboration and Information Sharing: Collaborate with other organizations and share threat intelligence. A collective approach to cybersecurity can help identify and mitigate threats more effectively.
8. Regular Updates and Patch Management: Keep all software and systems up-to-date with the latest security patches. Vulnerabilities in outdated software are often exploited by attackers.
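Point 5 above, testing that recovery actually works, can be partially automated. The sketch below compares SHA-256 checksums between a source directory and its backup; the paths and layout are illustrative, and a real job would also check the reverse direction and report through proper alerting channels:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source_dir: str, backup_dir: str) -> list:
    """Return (file, problem) pairs for files missing from or altered in the backup."""
    problems = []
    src = Path(source_dir)
    for f in sorted(src.rglob("*")):
        if not f.is_file():
            continue
        mirror = Path(backup_dir) / f.relative_to(src)
        if not mirror.exists():
            problems.append((str(f.relative_to(src)), "missing from backup"))
        elif sha256_of(f) != sha256_of(mirror):
            problems.append((str(f.relative_to(src)), "checksum mismatch"))
    return problems
```

An empty result means every source file exists in the backup with identical contents; anything else is a recovery gap to fix before an incident, not after.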
Cyber resiliency is not a one-time effort but an ongoing process that requires a proactive and holistic approach to cybersecurity. In a world where cyber threats are constantly evolving, organizations and individuals must be prepared to withstand and recover from cyber incidents. By implementing the strategies outlined in this blog post, you can build a strong foundation for cyber resiliency and protect your digital future. Remember, in the world of cybersecurity, it's not a matter of if but when an attack will occur, so preparation is key.
Verification, Visibility, and Velocity are paramount in achieving cyber resilience for all critical infrastructure networks. Verification involves understanding the current defense mechanisms, identifying vulnerabilities, and rectifying gaps within the security infrastructure. It serves as the foundational step, allowing for a comprehensive assessment and enhancement of cybersecurity measures. Visibility is crucial for effective network management and response to cyber threats, offering a clear understanding of critical assets and their associated protections. It provides insights into the network structure and protection mechanisms, enabling informed decision-making and risk mitigation.
On the other hand, Velocity emphasizes real-time assessment and response to rapidly evolving cyber threats. Cybercriminals adapt swiftly, necessitating a proactive and agile response to stay ahead of potential risks and minimize potential damage. Velocity injects speed into cyber resilience, enabling immediate risk identification and swift mitigation actions, ultimately fortifying the overall cybersecurity posture. Network Perception's NP-View supports these imperatives by offering a uniform process for metadata storage, facilitating visualization of network topology for enhanced comprehension, and enabling real-time risk assessment and response, bolstering cyber resilience within utility networks. Read more on the three principles of Verification, Visibility, and Velocity in improving cyber resiliency here.
If you want more insight, please contact us at firstname.lastname@example.org | <urn:uuid:0110adf6-96a0-4feb-9b2e-254a7c7ecd60> | CC-MAIN-2024-38 | https://www.network-perception.com/blog/building-cyber-resiliency-protecting-your-digital-future | 2024-09-17T19:11:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00787.warc.gz | en | 0.922836 | 1,010 | 2.90625 | 3 |
In the previous post in this series on technology education, we discussed what EdTech is and how technology is changing education. Continuing our discussion on this topic, we will talk a little about the future of education and the exorbitant growth of edtechs in recent months, driven by socioeconomic issues, but also by social isolation resulting from the COVID-19 pandemic.
According to INEP (National Institute for Educational Studies and Research in Brazil), the number of new e-learning students in the higher education segment in Brazil grew 378.9% from 2009 to 2019, while new students in the traditional model grew only 17.8%. This data is representative of a time when there was still no news of the pandemic, which demonstrates that the sector is already showing growth even before the distancing obligation for health reasons.
We’ve mentioned, in previous posts, how the new normal post-lockdown has forced people to adapt to new realities and that many habits we’ve acquired over the past few months have caused definite changes in the way we relate to individuals and things. These changes also reflect the way we use technology today to do even the most everyday activities that were previously done in person, such as meeting friends, shopping, working and studying - our topic today.
The moments of lockdown and social isolation were very difficult, but learning to deal with online education, once optional and suddenly the only possible way to continue studying, ended up accelerating the modernization of education. Most people were forced to learn to use solutions developed by edtechs for educational institutions in order to study remotely.
The truth is that we had to adapt again, and in this re-adaptation process most people ended up discovering several new technological tools, which were unknown so far. The good part is that, once familiarized with the new technologies for education, many started to pursue courses independently in order to productively use the time we were forced to stay at home. The result is that many people ended up changing professions in these long months of quarantine, with 100% online training, or ended up discovering new teaching strategies for their children, which will enable the acceleration of hybrid education in the months that will elapse after the pandemic.
Unfortunately, internet connection and digital inclusion are not a reality for everyone in underdeveloped countries. A massive investment in infrastructure will be necessary to enable the use of these new technologies in education. It is important that actions to solve these inequality issues, evidenced during the pandemic, are made by the responsible governments, so everyone can enjoy the benefits of educational technologies in the future.
What are the advantages of online and hybrid education?
Among the many advantages of the online and hybrid teaching models, we highlight some that we believe to be of great relevance, such as:
- Flexibility and autonomy: the possibility of being able to study during the available time, instead of having to reshuffle professional and personal commitments, is one of the greatest benefits of these models. In addition, the speed of learning is also flexible, allowing you to resume content when needed, or scroll through topics faster when you prefer.
- Online communication and collaboration: the online interaction that distance learning provides tends to develop communication skills in the virtual environment; as a result, online communication at work can improve through participation in these forums and discussions.
- Diversity of spaces: in the hybrid methodology, there is the benefit of learning in the virtual environment and in physical spaces at the same time, enabling different experiences.
When talking about online education, it is important to stress that we are not just referring to schools or children. In fact, the benefits presented above indicate one of the greatest advantages of expanding online education: the idea of lifelong learning.
What is lifelong learning?
The concept of lifelong learning is not that new. Since the 1960s, when this term was created, the idea has been simple: continuous learning throughout life.
Despite being an old concept, with the boom of edtechs and the demand for online education, lifelong learning has been disseminated again, now adapted to the new job market.
With computerization, and increasingly autonomous and independent professionals, the study-work-retirement structure was broken, making it possible to add new training to one's résumé at any time and start new professional trajectories.
Being a lifelong apprentice is something that combines very well with the 21st century professionals, as there is always new knowledge to be acquired and updating needs to be constant in a world where technology is developing so quickly. And this same technology has enabled the construction of new skills for these apprentices/professionals.
Some companies encourage lifelong learning initiatives, establishing partnerships with edtechs or educational institutions for their employees and enabling professional development with a wide range of certification courses, from language learning to specialization or MBA.
The growth of EdTechs
The types of learning mentioned above, and others more, are possible thanks to companies that create technological tools designed for education. In recent years, EdTechs had an exponential growth, which highlights the trend of growth and migration to hybrid and online education.
We are preparing some super special content for you on trends in the edtech market, but for now, let's take a general look at how this sector is shaping up.
You've probably already heard of, or studied with, an app from a big international edtech company. Below are some of them, now great unicorns in the technological education segment:
Source: Edtech Distrito Report 2020
Whether mobile apps to study new languages, like Duolingo, or learning platforms that host professional certification, specialization or master’s courses, such as Coursera, the fact is that edtechs already permeate the path of new professionals, through learning done independently, or through development plans promoted by companies for their employees.
In Brazil, the sector is also heating up and the UOL EdTech group is one of the great representatives of the growth of this segment, creating solutions for digital educational institutions and for corporate education. The company recently announced the purchase of another famous edtech in the country, Passei Direto, which is a famous online portal for educational content shared by students and teaching experts.
Success cases in edge computing for edtechs
At Azion we have some success cases in educational technology. With the infrastructure of our edge platform, educational institutions have been able to deliver great results and educational content to their students and teachers in recent months.
Some edtechs or digital education institutions may face some problems when delivering their content and our goal is to facilitate this delivery in the best possible way.
With the adoption of an edge platform, it is possible to accelerate the delivery of content, such as streaming and downloading videos with high availability and resilience, in addition to ensuring cybersecurity for students and teachers across the entire learning platform.
For high definition videos to be streamed with maximum performance to multiple locations, for example, it is possible to add an intermediate cache layer with Azion Tiered Cache, in which content is replicated and can be made available for long periods of time, regardless of purges and caching updates.
With regard to cyber security for professionals and students involved in digital education, it is possible to protect personal data, or other specific content, through Azion Edge Functions with the implementation of authentication and access authorization. In addition, our Edge Firewall allows the development of advanced protection rules to expand real-time access control, contributing to the availability of the learning platform in the midst of complex security events.
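As a generic illustration of the kind of stateless check an edge function can run for authentication (this is not Azion's actual API; the signing scheme, names, and secret below are invented for the sketch), a request token can be validated with an HMAC before content is served:

```python
import hashlib
import hmac
import time

SECRET = b"replace-with-a-real-secret"  # illustrative placeholder only

def sign_token(user: str, expires: int) -> str:
    """Issue a token of the form user:expiry:mac."""
    payload = f"{user}:{expires}"
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{mac}"

def verify_token(token: str, now=None) -> bool:
    """Check the token's signature and expiry without any database lookup."""
    try:
        user, expires, mac = token.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{user}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False  # tampered or forged token
    now = int(time.time()) if now is None else now
    return now < int(expires)  # reject expired tokens
```

Because verification needs only the shared secret, a check like this can run at the edge on every request without a round trip to an origin database, which is what makes it attractive for protecting learning-platform content.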
Edtechs and educational institutions, therefore, need to think about the challenges and trends that online education will bring in the coming years, and ensure that their students have an optimized and uninterrupted usage experience, guarantee of information security and full engagement during the use of teaching tools. For that, they must consider all the benefits of Edge Computing and its applications to address all these challenges in an agile way and take advantage of all the trends for this market.
Curious to know how we can help improve the performance and security of your edtech or educational institution? You can create a free account to try our products and services today. Or, if you prefer, contact our team of experts and find out which Azion solutions are best for your company. | <urn:uuid:1cdbbdbd-bce2-4b99-8e2b-496066421928> | CC-MAIN-2024-38 | https://www.azion.com/en/blog/edtech-future/ | 2024-09-19T01:58:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00687.warc.gz | en | 0.9598 | 1,692 | 2.9375 | 3 |
Smart manufacturing technologies are changing the game in the industry. Innovations and Industry 4.0 are creating more efficient and productive factories with better data reporting, making it easier for manufacturers to use real data to make informed decisions.
Read on to learn more about the technologies and scope of smart manufacturing and how it can be used to grow businesses exponentially to stay ahead of competitors and streamline processes.
What is Smart Manufacturing?
Smart manufacturing is a term used to describe the digitization of manufacturing processes, including supply chains, production, logistics and distribution, sales, and stocking. It's made up of many unique technologies and solutions that each play a role in the production process.
Related: What is Smart Manufacturing?
Smart manufacturing uses the Internet of Things (IoT) and Industry 4.0 to connect these technologies, equipment, and software. These technologies then work together to record, report, and analyze streams of data, using AI, machine learning, and automation to make more accurate predictions and decisions.
This connectivity and collaboration between machines helps streamline supply chains, inventory management, and distribution, connecting operations and helping manufacturers stay on the cutting edge of technology while becoming more efficient, productive, and competitive in their industry.
The Technologies that Make Up Smart Manufacturing and Industry 4.0
As mentioned above, smart manufacturing is made up of many different types of modern technologies.
Here are a few of the most common smart manufacturing technologies and how they help manufacturers optimize and streamline their operations.
AI and Machine Learning
AI and machine learning in smart manufacturing work together to gather more data and make data analysis faster and more accurate. Advanced AI and machine learning can now also make simple decisions based on the analyzed data, such as scheduling predictive maintenance to anticipate upcoming needs.
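As a simplified illustration of how this kind of logic can flag an upcoming maintenance need, the sketch below compares each new sensor reading against a rolling baseline of recent readings. The sensor values, window size, and threshold are all hypothetical, and real predictive-maintenance models are far more sophisticated than this.

```python
from statistics import mean, stdev

def flag_maintenance(readings, window=5, threshold=3.0):
    """Return indices of readings that drift far from the rolling baseline.

    readings: numeric sensor values (e.g., vibration amplitude).
    A reading is flagged when it sits more than `threshold` standard
    deviations above the mean of the preceding `window` readings.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (readings[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Hypothetical vibration data: stable operation, then a sudden spike.
data = [1.0, 1.1, 0.9, 1.0, 1.05, 1.1, 0.95, 1.0, 5.0, 1.0]
print(flag_maintenance(data))  # [8] -- the spike triggers an alert
```

A real system would feed alerts like these into a maintenance-scheduling workflow rather than simply printing them.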
Automation
Automation in manufacturing streamlines repetitive processes by handing them to robots that can perform them faster, more efficiently, and more accurately. This also allows humans to focus on tasks that require more critical thinking.
Cloud Storage
Cloud storage in manufacturing means not having to store critical data on-site, saving space and money and improving the overall security of the data. Cloud usage in manufacturing can also link multiple locations, combine datasets, and help you get a better overview of your business.
Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
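To illustrate the difference between the two heuristics, the following Python sketch models the vulnerable and fixed detection rules. This is an illustrative model of the check, not jQuery's actual source code, and the payload string is hypothetical.

```python
def looks_like_html_old(s):
    # Vulnerable heuristic: any '<' anywhere means "treat as HTML".
    return "<" in s

def looks_like_html_new(s):
    # Fixed heuristic: only strings that *start* with '<' are HTML.
    return s.lstrip().startswith("<")

# Attacker-controlled text appended to a legitimate selector:
payload = "#menu <img src=x onerror=alert(1)>"

print(looks_like_html_old(payload))  # True  -> parsed as HTML, script runs
print(looks_like_html_new(payload))  # False -> treated as a selector only
```

Under the old rule, an attacker who controls any part of the string can smuggle markup in; under the new rule, only an attacker who controls the very beginning of the string can.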
CWE-79 - Cross Site Scripting
Cross-Site Scripting, commonly referred to as XSS, is the most dominant class of vulnerabilities. It allows an attacker to inject malicious code into a vulnerable web application and victimize its users. Exploiting such a weakness can cause severe issues such as account takeover and sensitive data exfiltration. Because of the prevalence of XSS vulnerabilities and their high rate of exploitation, XSS has remained in the OWASP Top 10 for years.
What is ARIN?
The American Registry for Internet Numbers (ARIN) is a vital organization in the realm of internet infrastructure. Essentially, it oversees the allocation and management of Internet Protocol (IP) addresses and Autonomous System Numbers (ASNs) in North America. This responsibility is crucial for the smooth operation of the internet.
The Role of ARIN
ARIN serves as a cornerstone for the internet by managing the distribution of IP addresses and ASNs. These elements are the backbone of internet connectivity, enabling devices and networks to communicate effectively. Through its stewardship, ARIN ensures an organized and fair allocation of these resources.
How ARIN Works
ARIN operates through policies developed by the community. This approach ensures transparency and fairness in the allocation of resources. Moreover, ARIN provides tools and services for members and the community, aiding in the efficient management of IP addresses and ASNs.
ARIN’s Importance in Internet Infrastructure
The significance of ARIN cannot be overstated. By managing IP addresses and ASNs, ARIN supports the growth and stability of the internet. This role is essential for businesses, governments, and individuals relying on robust and secure online communication.
In conclusion, the American Registry for Internet Numbers (ARIN) plays a pivotal role in the digital age. Its management of IP addresses and ASNs ensures the internet remains a global platform for innovation and connectivity. Understanding ARIN’s function is crucial for anyone involved in the tech industry or interested in the mechanics of the internet.
For more information, visit the official ARIN website: https://www.arin.net. | <urn:uuid:f36571dd-f2e5-4631-8618-ea6c2431c180> | CC-MAIN-2024-38 | https://abusix.com/glossary/american-registry-for-internet-numbers/ | 2024-09-11T19:25:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651400.96/warc/CC-MAIN-20240911183926-20240911213926-00487.warc.gz | en | 0.907371 | 330 | 2.953125 | 3 |
HTTP (HyperText Transfer Protocol) was designed to support a stateless, request-response model of transferring data from a server to a client. Its first version, 1.0, supported a purely 1:1 request to connection ratio (that is, one request-response pair was supported per connection).
Version 1.1 expanded that ratio to be N:1—that is, many requests per connection. This was done to address the growing complexity of web pages, including the many objects and elements that need to be transferred from the server to the client.
With the adoption of HTTP/2, HTTP continued to support a many-requests-per-connection model. Its most radical changes involve the exchange of headers and a move from text-based transfer to binary.
Somewhere along the line, HTTP became more than just a simple mechanism for transferring text and images from a server to a client; it became a platform for applications. The ubiquity of the browser, cross-platform nature, and ease with which applications could be deployed without the heavy cost of supporting multiple operating systems and environments was certainly appealing. Unfortunately, HTTP was not designed to be an application transport protocol. It was designed to transfer documents. Even modern uses of HTTP such as that of APIs assume a document-like payload. A good example of this is JSON, a key-value pair data format transferred as text. Though documents and application protocols are generally text-based, the resemblance ends there.
Traditional applications require some way to maintain their state, while documents do not. Applications are built on logical flows and processes, both of which require that the application know where the user is at the time, and that requires state. Despite the inherently stateless nature of HTTP, it has become the de facto application transport protocol of the web. In what is certainly one of the most widely accepted, useful hacks in technical history, HTTP was given the means by which state could be tracked throughout the use of an application. That "hack" is where sessions and cookies come into play.
Sessions are the way in which web and application servers maintain state. These simple chunks of memory are associated with every TCP connection made to a web or application server, and serve as in-memory storage for information in HTTP-based applications.
When a user connects to a server for the first time, a session is created and associated with that connection. Developers then use that session as a place to store bits of application-relevant data. This data can range from important information such as a customer ID to less consequential data such as how you like to see the front page of the site displayed.
The best example of session usefulness is shopping carts, because nearly all of us have shopped online at one time or another. Items in a shopping cart remain over the course of a "session" because every item in your shopping cart is represented in some way in the session on the server. Another good example is wizard-style product configuration or customization applications. These "mini" applications enable you to browse a set of options and select them; at the end, you are usually shocked by the estimated cost of all the bells and whistles you added. As you click through each "screen of options," the other options you chose are stored in the session so they can be easily retrieved, added, or deleted.
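A server-side session behaves like a small per-user dictionary keyed by a generated session ID. The sketch below is a minimal in-memory model of that idea, not any particular web or application server's implementation; the item names are made up.

```python
import secrets

sessions = {}  # session_id -> per-user state

def create_session():
    session_id = secrets.token_hex(16)  # this ID is what rides in the cookie
    sessions[session_id] = {"cart": []}
    return session_id

def add_to_cart(session_id, item):
    sessions[session_id]["cart"].append(item)

sid = create_session()
add_to_cart(sid, "wireless mouse")
add_to_cart(sid, "USB hub")
print(sessions[sid]["cart"])  # ['wireless mouse', 'USB hub']
```

Each "screen of options" in a wizard-style application works the same way: the choices accumulate in the session under the user's ID, ready to be retrieved, added to, or deleted on the next request.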
Modern applications are designed to be stateless, but their architectures may not comply with that principle. Modern methods of scale often rely on architectural patterns like sharding, which requires routing requests based on some indexable data, like username or account number. This requires a kind of stateful approach, in that the indexable data is carried along with each request to ensure proper routing and application behavior. In this regard, modern “stateless” applications and APIs often require similar care and feeding as their stateful predecessors.
The problem is that sessions are tied to connections, and connections left idle for too long time out. Also, the definition of "too long" for connections is quite a bit different than when it is applied to sessions. The default configuration for some web servers, for example, is to close a connection once it has been idle—that is, no more requests have been made—for 15 seconds. Conversely, a session in those web servers, by default, will remain in memory for 300 seconds, or 5 minutes. Obviously the two are at odds with one another, because once the connection times out, what good is the session if it's associated with the connection?
You might think you could simply increase the connection time-out value to match the session and address this disparity. Increasing the time-out means that you're potentially going to expend memory to maintain a connection that may or may not be used. This can decrease the total concurrent user capacity of your server as well as ultimately impede its performance. And you certainly don't want to decrease the session timeout to match the connection time out, because most people take more than five minutes to shop around or customize their new toy.
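Because the two timeouts are enforced independently, servers typically stamp each session with a last-access time and periodically reap the idle ones, regardless of what happened to the underlying connections. The sketch below is an illustrative model of that bookkeeping; the 300-second value mirrors the default mentioned above.

```python
import time

SESSION_TIMEOUT = 300  # seconds, matching the 5-minute default above

sessions = {}  # session_id -> {"data": ..., "last_access": ...}

def touch(session_id, now=None):
    """Record activity so the session's idle clock restarts."""
    now = time.time() if now is None else now
    sessions[session_id]["last_access"] = now

def reap_idle(now=None):
    """Delete and return every session idle longer than the timeout."""
    now = time.time() if now is None else now
    expired = [sid for sid, s in sessions.items()
               if now - s["last_access"] > SESSION_TIMEOUT]
    for sid in expired:
        del sessions[sid]
    return expired

# Two sessions created at the same moment; only one stays active.
sessions["a"] = {"data": {}, "last_access": 1000.0}
sessions["b"] = {"data": {}, "last_access": 1000.0}
touch("a", now=1360.0)
print(reap_idle(now=1360.0))  # ['b'] -- idle past the 300-second window
```

Note that nothing here consults connection state at all: the session outlives any individual connection until the reaper catches it.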
Important note: While HTTP/2 addresses some of these issues, it introduces others related to maintaining state. While not required by the specification, major browsers only allow HTTP/2 over TLS/SSL. Both protocols require persistence to avoid the performance cost of renegotiation, which in turn requires session awareness, a.k.a. stateful behavior.
Thus, what you end up with is sessions that remain as memory on the server even after their associated connections have been terminated due to inactivity, chewing up valuable resources and potentially angering users for whom your application just doesn't work.
Cookies are bits of data stored on the client by the browser. Cookies can, and do, store all sorts of interesting tidbits about you, your applications, and the sites you visit. The term "cookie" is derived from "magic cookie," a well-known concept in UNIX computing that inspired both the idea and the name. Cookies are created and shared between the browser and the server via the HTTP Header, Cookie.
Cookie: JSESSIONID=9597856473431
Cache-Control: no-cache
Host: 127.0.0.2:8080
Connection: Keep-Alive
The browser automatically knows it should store the cookie in the HTTP header in a file on your computer, and it keeps track of cookies on a per-domain basis. The cookies for any given domain are always passed to the server by the browser in the HTTP headers, so developers of web applications can retrieve those values simply by asking for them on the server-side of the application.
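On the server side, "asking for" a cookie value amounts to parsing the Cookie header. Python's standard library can do this directly; here it is applied to the header value from the example above.

```python
from http.cookies import SimpleCookie

# The value of the Cookie header from the example request above.
cookie = SimpleCookie()
cookie.load("JSESSIONID=9597856473431")

print(cookie["JSESSIONID"].value)  # 9597856473431
```

Web frameworks wrap this parsing step so application code simply reads a cookies mapping, but the underlying mechanism is the same header exchange.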
The session/connection length problem is solved through a cookie. Almost all modern web applications generate a "session ID" and pass it along as a cookie. This enables the application to find the session on the server even after the connection from which the session was created is closed. Through this exchange of session IDs, state is maintained even for a stateless protocol like HTTP. But what happens when the use of a web application outgrows the capability of a single web or application server? Usually a load balancer, or in today's architectures an Application Delivery Controller (ADC), is introduced to scale the application such that all users are satisfied with the availability and performance.
In modern applications, cookies may still be used but other HTTP headers become critical. API keys for authentication and authorization are often transported via an HTTP header, as well as other custom headers that carry data necessary for routing and proper scale of the backend services. Whether using the traditional “cookie” to carry this data or some other HTTP header is less important than recognizing its importance to the overall architecture.
The problem with this is that load balancing algorithms are generally concerned only with distributing requests across servers. Load balancing techniques are based on industry-standard algorithms like round robin, least connections, or fastest response time. None of them is stateful, and it is possible for each request the same user makes to an application to be distributed to a different server. This makes all the work done to implement state for HTTP useless, because the data stored in one server's session is rarely shared with other servers in the "pool."
This is where the concept of persistence comes in handy.
Persistence—otherwise known as stickiness—is a technique implemented by ADCs to ensure requests from a single user are always distributed to the server on which they started. Some load balancing products and services describe this technique as “sticky sessions”, which is a completely appropriate moniker.
Persistence has long been used in load balancing SSL/TLS-enabled sites because once the negotiation process—a compute intensive one—has been completed and keys exchanged, it would significantly degrade performance to start the process again. Thus, ADCs implemented SSL session persistence to ensure that users were always directed to the same server to which they first connected.
Over the years, browser implementations have necessitated the development of a technique to avoid costly renegotiation of those sessions. That technique is called cookie-based persistence.
Rather than rely on the SSL/TLS session ID, the load balancer would insert a cookie to uniquely identify the session the first time a client accessed the site and then refer to that cookie in subsequent requests to persist the connection to the appropriate server.
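The mechanics can be sketched as follows. This is an illustrative model of the load balancer's decision, not any vendor's implementation, and the server and cookie names are made up: on the first request, pick a server (round robin here) and record the choice in a cookie; on every subsequent request, honor the cookie.

```python
from itertools import cycle

servers = ["app1", "app2", "app3"]
round_robin = cycle(servers)

def route(cookies):
    """Return (server, cookies), inserting a persistence cookie if absent."""
    if "LB_SERVER" in cookies:
        return cookies["LB_SERVER"], cookies      # sticky: same server again
    server = next(round_robin)                    # first visit: pick one
    cookies = {**cookies, "LB_SERVER": server}    # cookie rides back to client
    return server, cookies

# The first request carries no persistence cookie; later ones do.
server, jar = route({})
for _ in range(3):
    later, jar = route(jar)
    assert later == server  # every request lands on the same backend
print(server)
```

Without the cookie, the three follow-up requests could have landed on three different servers, each with its own empty session.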
The concept of cookie-based persistence has since been applied to application sessions, using session ID information generated by web and application servers to ensure that user requests are always directed to the same server during the same session. Without this capability, applications requiring load balancing would need to find another way to share session information or resort to increasing session and connection time outs to the point that the number of servers needed to support its user base would quickly grow unmanageable.
Although the most common form of persistence is implemented using session IDs passed in the HTTP header, ADCs today can persist on other pieces of data as well. Any data that can be stored in a cookie or derived from the IP, TCP, or HTTP headers can be used to persist a session. In fact, any data within an application message that uniquely identifies the user can be used by an intelligent ADC to persist a connection between the browser and a server.
HTTP may be a stateless protocol, but we have managed to force-fit state into the ubiquitous protocol. Using persistence and Application Delivery Controllers, it is possible to architect highly available, performant web applications without breaking the somewhat brittle integration of cookies and sessions required to maintain state.
These features are what give HTTP state, though its implementation and execution remain stateless. Without cookies, sessions, and persistence, we surely would have found a stateful protocol on which to build our applications. Instead, features and functionality found in Application Delivery Controllers mediate between browsers (clients) and servers to provide this functionality, extending the usefulness of HTTP beyond static web pages and traditional applications to modern microservices-based architectures and the digital economy darling, the API.
In today's digital age, cybersecurity is more important than ever. As businesses increasingly rely on technology to operate, it becomes critical to ensure that systems are secure and protected from potential cyber threats. One of the essential components of cybersecurity is performing regular assessments to identify any gaps or vulnerabilities that could be exploited by malicious actors. Two commonly used methods for this are Gap Analysis and Cybersecurity Assessment. However, many people are confused about the differences between these two processes. In this blog, we will explore the distinctions between Gap Analysis and Cybersecurity Assessment and why they are essential.
Gap Analysis is a process that identifies the gap between where an organization is currently and where it wants to be in terms of its cybersecurity posture. It involves comparing the current state of cybersecurity within the organization to the desired state or industry best practices. Gap Analysis is performed to identify the areas where the organization falls short and to develop an action plan to close the gap.
Gap Analysis typically involves a review of policies, procedures, and controls in place to secure the organization's data, networks, and systems. It may also include interviews with key stakeholders, such as IT staff and business leaders, to gain an understanding of the organization's current cybersecurity practices. The result of a Gap Analysis is a report that highlights the gaps between the current and desired cybersecurity posture and provides recommendations to close those gaps.
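In spirit, a gap analysis reduces to comparing the set of controls an organization has in place against the set a target baseline requires. The toy Python sketch below makes that concrete; the control names and the target baseline are hypothetical, and a real gap analysis weighs policies, procedures, and interviews rather than a simple checklist.

```python
# Hypothetical target baseline and current state, expressed as control sets.
desired = {"MFA", "encryption at rest", "incident response plan",
           "security awareness training", "log monitoring"}
current = {"MFA", "encryption at rest", "log monitoring"}

gaps = sorted(desired - current)         # controls needed to close the gap
coverage = len(current & desired) / len(desired)

print(gaps)                              # the action-plan starting point
print(f"coverage: {coverage:.0%}")       # coverage: 60%
```

The resulting report would prioritize those gaps and recommend steps to close them, which is exactly the output described above.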
Cybersecurity Framework Assessment
A Cybersecurity Assessment is a comprehensive evaluation of an organization's security posture. It involves a thorough examination of the organization's security infrastructure, policies, procedures, and practices to identify any vulnerabilities or weaknesses that could be exploited by attackers. The goal of an Assessment is to provide a comprehensive overview of the organization's security posture, including areas that need improvement, and to make recommendations to strengthen its security.
An Assessment is typically conducted by a team of cybersecurity professionals who use a combination of automated tools and manual testing techniques to evaluate an organization's security posture. The assessment may include vulnerability scanning, penetration testing, and social engineering testing to identify any weaknesses that could be exploited by attackers. The result of a Cybersecurity Assessment is a detailed report that outlines the organization's security strengths and weaknesses, provides recommendations for improvement, and includes a prioritized list of action items.
The Difference between Gap Analysis and Cybersecurity Assessment
While Gap Analysis and Cybersecurity Assessment share some similarities, such as identifying areas of weakness and providing recommendations for improvement, there are some key differences between the two.
Gap Analysis is a narrower process that focuses on identifying gaps between the current and desired state of an organization's cybersecurity posture. It is typically performed to address a specific issue or to prepare for a specific compliance requirement. Gap Analysis is generally less detailed and less comprehensive than a Cybersecurity Assessment.
On the other hand, a Cybersecurity Assessment is a more comprehensive evaluation of an organization's security posture. It involves a broader range of testing techniques and is designed to provide a comprehensive overview of the organization's security strengths and weaknesses. A Cybersecurity Assessment is typically more in-depth and time-consuming than a Gap Analysis.
Another significant difference between the two is that Gap Analysis is generally performed internally by an organization's IT staff or an outsourced IT provider, while a Cybersecurity Assessment is typically conducted by a specialized team of cybersecurity professionals. A Cybersecurity Assessment requires more expertise and specialized tools, making it more expensive than a Gap Analysis.
In conclusion, both Gap Analysis and Cybersecurity Assessment are essential processes for identifying and addressing vulnerabilities in an organization's security posture. Gap Analysis is a useful tool for identifying gaps between the current and desired state of cybersecurity, while a Cybersecurity Assessment provides a more comprehensive evaluation of an organization's security posture. The choice between the two depends on the organization's specific needs and goals. | <urn:uuid:6fa9d903-aa23-47e9-9001-1f0358349708> | CC-MAIN-2024-38 | https://www.frameworksec.com/post/what-is-the-difference-between-a-gap-analysis-and-cybersecurity-framework-assessment | 2024-09-13T00:45:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00387.warc.gz | en | 0.9497 | 757 | 2.875 | 3 |