By: Ruchi Bhatnagar, Deepak Sinha

COVID-19 is no longer a new term for anyone; it has threatened humanity since 2019. The pandemic has set the global economy back into a severe contraction, bringing psychological, social and financial suffering. Researchers are continually trying to find light in the pandemic's darkness and to reshape ways of living and working in a permanently changed environment. IoT, a network of smart things, is now a vital technology, and its adoption is enhancing the psychological, sociological and financial aspects of life. This new wave of IoT is transforming the world: from early wearables to smart appliances, automated cars to connected homes, smart retail to new business models, smart analytics to machine learning, and from legacy health care systems to smart health care. IoT is not a single technology but a pool of enabling technologies such as cloud, 5G, AI, deep learning and M2M interaction, and day-to-day advances in these technologies open the door to a future world that may be far less disrupted by such pandemic situations. This article explores the fields where these technologies have been adopted successfully and will change the tech world forever. IoT is used not only in smart agriculture, smart energy management, health care systems, smart traffic monitoring and smart homes, but also across many other fields, as it encompasses recent trends such as the cloud's edge computing architecture, 5G adoption in telecommunications, big data management within the cloud, and M2M interaction between devices using AI techniques and deep learning methods. These key enabling techniques widen the scope of IoT across application areas at a time when COVID-19 is spreading day by day and the whole world has been forced to stay at home. They have become a new guide for psychological, sociological and financial development in this situation and advance the provisions already offered by IoT. Some recent technical developments supported by these techniques are highlighted here.

Table 1. Edge-native enterprises and their application areas
Enterprise | Technology | Application areas
Amazon AWS | Cloud-edge hybrid model | Solutions for connected vehicles, an IoT device simulator and the AWS IoT Camera Connector
Dell Technologies | OpenManage Mobile app | Edge gateways for manufacturers, retailers and digital cities
Google Cloud | Edge TPU | A line of connected-home products for edge computing
Hewlett Packard | Telcos | Well positioned to serve larger companies
IBM | OpenShift technology | Organizations in the telecommunications, retail and automobile industries

Edge Native Intelligent Applications
Recent IoT advances have been accelerated by the cloud's edge data centers; shifting data processing to the edge increases the working efficiency of public and private enterprises and enhances customer experience. The scalable nature of edge computing also makes it an ideal solution for fast-growing, agile companies, especially if they are already using colocation data centers and cloud infrastructure. Real-time data processing at the edge of the cloud improves business decision making and allows corrective action to be planned more accurately on the most current data, which matters during COVID-19, when business conditions are constantly shifting and operational responsiveness is essential for the enterprise. According to one survey, 84% of businesses now demand edge applications.
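As a purely illustrative sketch (not taken from the article), the following shows the kind of local processing that edge computing enables: a device evaluates readings on-site and forwards only anomalous values upstream, reducing traffic to the central cloud. The threshold logic, sensor values and the send_to_cloud placeholder are assumptions for the example.

```python
# Hypothetical edge-side filter: process readings locally, forward only anomalies.
# Threshold, reading source, and the cloud sender are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(value, history, z_threshold=3.0):
    """Flag a reading that deviates strongly from recent local history."""
    if len(history) < 10:
        return False  # not enough local context yet
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(value - mu) / sigma > z_threshold

def send_to_cloud(payload):
    # Placeholder for an uplink call (e.g., an HTTPS POST or MQTT publish).
    print("forwarding to cloud:", payload)

history = []
for reading in [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 20.8, 21.1, 21.2, 21.0, 35.7]:
    if is_anomalous(reading, history):
        send_to_cloud({"sensor": "temp-01", "value": reading})
    history.append(reading)
    history[:] = history[-50:]  # keep a bounded local window on the device
```

In this sketch only the final outlier would leave the device, which is the operational point the edge discussion above is making.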
Table 1 lists some top enterprises offering edge-native intelligent computing solutions and their application areas.

5G Enabled Intelligent Automation
5G is not merely an advance but a revolution in wireless technology: it can support many more devices per node, and its vast bandwidth, blazing speed and low latency unlock powerful advances and enable IoT in every sector. Its intelligent automation pushes industries toward new growth and the adoption of digitalization. The use of 5G in network connectivity has changed device functionality and its scope in the world of the internet. To meet the differing requirements of the IoT, 5G mobile networks must support massive numbers of devices and new service classes such as enhanced Mobile Broadband (eMBB), massive Machine Type Communications, critical communications and network operations. 5G will broaden the range of IoT applications: home networks, automated connected homes, voice-based routines, lights, locks and security all become automated with its advent. The challenges and opportunities 5G presents arrive at the right time to support the economic growth of enterprises hit by the pandemic, and adoption is increasing day by day. 5G adoption since the pandemic began is expected to reach roughly 340 million connections in 2021, 1 billion in 2023 and 2.7 billion in 2025. (A figure in the original article illustrates 5G use cases across different fields during the pandemic.)

AI & IoT battle against COVID-19
During the pandemic crisis every field has had to change the way it works, with greater efficiency and security, and new enabling techniques have been explored. Artificial intelligence is one such technology: in health care it acts as a catalyst in collaboration with IoT, an area now popularly known as the IoMT (Internet of Medical Things). AI has driven significant medical advances during the COVID-19 crisis: from identifying coronavirus patients to managing social distancing, from collecting, analysing and managing monitoring data to track the exact spread of the virus, and from smart cities and urban intelligence to the mass production of suitable COVID-19 vaccines, it has opened new paths for medical science. This era of intelligent health care is likely to persist, with AI updating each and every application area.

Table 2. M2M and deep learning IoT applications and their scope
Application areas | Scope | Techniques used
Industrial IoT | Develop secure intelligent systems | Machine and deep learning virtues [4]
Geophysical engineering applications | Monitor earthquakes and send early warning signals to prevent destruction of buildings and loss of life | MEMS (micro-electro-mechanical systems) [5]
Multi-role robotic systems for IoRT | Applications of IoRT and existing robotic systems based on humanoid, mobile, flying and swarm robots envisaged for future IoRT systems | IoRT, a mix of diverse technologies including cloud computing, artificial intelligence (AI), machine learning and IoT [6]
Smart transportation | Traffic prediction, traffic monitoring and autonomous driving | DBN, CNN, FCN-LSTM [7]

M2M & Deep Learning Facilitation
IoT means the integration of billions of smart devices, and their functioning is greatly influenced by M2M interaction, since there is little or no human involvement; a minimal sketch of such device-to-device messaging is given after this paragraph.
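As an illustration of M2M interaction (not part of the article), here is a minimal sketch in which one machine publishes a reading and another reacts to it over MQTT, using the paho-mqtt 1.x client API; the broker address, topic and payload schema are assumptions for the example.

```python
# Hypothetical M2M example: a sensor node publishes a reading that an actuator
# node subscribes to, with no human in the loop. Broker, topic and payload
# schema are illustrative assumptions.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.local"   # assumed local MQTT broker
TOPIC = "factory/line1/temperature"

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # The receiving machine reacts autonomously to the other machine's data.
    if reading["celsius"] > 80:
        print("actuator: switching cooling ON for", reading["machine_id"])

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC)
subscriber.loop_start()           # handle incoming messages in the background

publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish(TOPIC, json.dumps({"machine_id": "press-7", "celsius": 83.5}))
```

The point of the sketch is that the decision (switching cooling on) is taken machine-to-machine, without any manual step.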
Such devices provide the autonomous behaviour of things, which is particularly relevant to COVID-19 because the infection spreads through manual touch. Enterprises therefore use M2M and deep learning algorithms to make devices and applications autonomous. Although these enabling technologies still carry security challenges, they open a new path for different application areas with wider scope and new techniques; some examples are listed in Table 2.

Open issues and challenges
Advances in the enabling technologies still need to address several challenges. Edge-native IoT computing faces issues of data distribution, data accumulation, and control and management; 5G adoption suffers from deployment and coverage problems as well as security and privacy concerns; and the AI, M2M and deep learning methods of computing are challenged by IoT standards, intelligent analysis and action mechanisms, and the adoption and interoperability of algorithms.

Future directions and conclusion
While the world fights COVID-19, IoT and its enabling technologies have, like many other technologies, drawn researchers' attention and made a remarkable impact on our lives and on the world. During the pandemic, IoT has shown encouraging results in the sociological, psychological and financial aspects of life. In this article we highlight recent advances in these technologies, their applications, and how they are being adopted to build a new tech world. The main challenges faced by each technology become future research concerns for making the world safer and more automated. The adoption of IoT, driven by its enormous growth in every sector, opens thousands of opportunities along with sustainable solutions to the real-time challenges the world faces in education, health, business and security.

References
- Joanna (2021) "Top Edge Computing Companies in 2021"
- Abid Haleem and Mohd. Javaid (2020) "Medical 4.0 and Its Role in Healthcare During COVID-19 Pandemic: A Review", https://doi.org/10.1142/S2424862220300045
- Jayne Locke (2021) "How the Pandemic Accelerated the Need for IoT"
- Parjanay Sharma et al., "Role of Machine Learning and Deep Learning in Securing 5G-Driven Industrial IoT Applications"
- Kamil Dimililer et al., "Deep Learning, Machine Learning and Internet of Things in Geophysical Engineering Applications: An Overview"
- R. S. Batth, A. Nayyar and A. Nagpal, "Internet of Robotic Things: Driving Intelligent Robotics of Future – Concept, Architecture, Applications and Technologies"
- Xiaoqiang Ma et al. (2019) "A Survey on Deep Learning Empowered IoT Applications"
- Ted Kritsonis (April 5, 2019) "The Top Five Challenges for 5G"

Cite this article as: Ruchi Bhatnagar and Deepak Sinha (2021) Smart adoption of IoT in the COVID-19 pandemic paves a new era of a sustainable future world, Insights2Techinfo, pp. 1
https://insights2techinfo.com/smart-adoption-of-iot-in-covid-19-pandemic-paves-new-era-of-sustainable-future-world/
By Sewvandi Wickramasinghe

What is Design Thinking?
Design Thinking is a continuous process in which we try to understand the user, keep an open mind, and reframe issues in order to come up with new methods and answers that may not be obvious at first. Design thinking is human-centered: it considers how people actually interact with a product or service rather than how someone else or an organization believes they will interact with it. It is a solution-centered approach to problem-solving, and it is both a style of thinking and working and a set of practical techniques.

Why Design Thinking?
- It aims to meet a specific human need. Once those pain points are discovered, design thinking can lead to solutions.
- It deals with vague or difficult-to-define problems. Consumers frequently don't understand, or can't articulate, the problem they are trying to solve. With careful observation, however, one can discover problems from real customer behavior rather than from one's own assumptions about the consumer, and this leads to more creative solutions.
- It can bring to light previously unseen pain points that would otherwise go unnoticed. Taking a step-by-step approach to solving those difficulties frequently results in non-obvious, creative solutions.
- It helps organizations work more quickly and efficiently. Rather than studying an issue for a long period without arriving at a solution, design thinking advocates building prototypes and then testing them to determine how successful they are.

5 stages of the design-thinking process
A five-stage framework guides design thinking. It is worth noting that the five phases, stages, or modes aren't always performed in order; they don't have to happen in any particular sequence, and they can frequently happen in parallel and recur repeatedly.

In the first stage, empathizing, the designer observes customers to gain a better understanding of how they interact with or are affected by a product or issue. These observations must be made with empathy, which means refraining from passing judgement and not imposing prior assumptions about the customer's requirements. Observing with compassion is effective because it can reveal difficulties that customers were unaware of or couldn't express themselves. It then becomes much easier to understand the human needs you are designing and developing for.

In the second stage, defining, you use your observations from the first stage to identify the problem you are seeking to address. Consider the challenges your customers face, what they consistently struggle with, and what you have learned about how the problem affects them. Once you have integrated your findings, you will be able to define the problem they are dealing with.

The next stage, ideation, is to come up with solutions to the problem you have identified. These brainstorming sessions can be done in a group, in a corporate environment that fosters creativity and cooperation, in an innovation program, or alone. The essential thing is to generate a variety of diverse concepts. At the end of this process you will have a couple of ideas to move forward with.

The fourth stage, prototyping, is where concepts are transformed into practical solutions. Prototypes aren't supposed to be flawless.
The purpose of a prototype is to quickly create a tangible form of a concept in order to test how well it is received by customers.

The fifth stage, testing, is where you get feedback on your work. You will probably have to return to one or more of the previous stages after this step is completed. Perhaps testing revealed that you need to create a new prototype, in which case you would go back to the fourth stage. Or it might be that you have misunderstood the customer's requirements; if so, you will have to restart the process from the beginning.

Design Thinking 'Outside the Box'
Design Thinking is frequently described as "thinking outside the box." One aspect of outside-the-box thinking involves challenging previous assumptions in order to establish whether or not they are correct. Once we have questioned and explored the conditions of a problem, the solution-generation process helps us produce ideas that reflect the real constraints and facets of the issue. Design Thinking helps us do appropriate research, prototype, and test our goods and services in order to discover new ways to improve the product, the service, and the design.

Design Thinking is Accessible to All
Design thinking isn't only for designers; it is also for creative employees, freelancers, and executives who want to incorporate design thinking into every level of an organization, product, or service to create new business and societal options. You can visit www.axonect.com to learn more about the Axonect Product Suite.
https://www.axiatadigitallabs.com/2024/04/09/stepping-into-design-thinking/
Nowadays, when the internet has become part of our daily lives, safeguarding our digital presence is of the utmost importance. As we explore the digital realm, we encounter a landscape filled with both opportunities and vulnerabilities. This is where internet security matters: it acts as a shield against the threats that are out there, waiting for the right moment to strike. At its core, internet security encompasses a range of security measures, strategies, and solutions aimed at strengthening our online experiences and overall security. It involves implementing protocols, technologies, and practices that ensure the safety and privacy of our computer systems, networks, and online interactions.

What is internet security?
Internet security is any cybersecurity measure taken to protect online transactions, data, and activities. The intent is to protect internet users from hackers and to limit or eliminate data breaches. It is a particular component of the larger fields of cybersecurity and computer security, covering topics such as browser security, online behavior, and network security. Online security serves as a protective wall against the potentially harmful consequences of online threats like malware, which aim to compromise our personal information, financial data, and the overall integrity of our digital identities.

What is internet security software?
Internet security software protects us from dangers that exist within the web of interconnected computer networks. These tools act like gatekeepers, meticulously analyzing and filtering internet traffic to identify potential threats. As cyberattacks become increasingly common, internet security plays an ever more crucial role in preventing them in time. We spend a large proportion of our lives online, communicating with friends, browsing different sites, using social media platforms, listening to our favorite songs, or watching movies. While entertaining ourselves, we sometimes forget about the risks the internet hides, but internet security does not forget about them and covers our backs. The diverse ecosystem of the internet gives rise to vulnerabilities that need our attention before it is too late, because countless harmful threats are waiting to strike at the right moment, from spam messages flooding our inboxes to phishing attacks. Internet security software acts as our protector by examining our emails, scanning attachments, and filtering out potentially harmful content, which can be invaluable in avoiding that kind of disaster scenario. Furthermore, as we explore the internet using web browsers, securing the browsing programs we use is another fundamental piece of the safety puzzle. Security software works alongside these browsers to scrutinize websites for hazards even before we access them. This symbiotic relationship ensures that we can confidently surf the web without falling into the traps that may lie ahead, by helping to filter out unwanted content that is just a click away. This type of security is much more than an assortment of tools; it embodies a mindset and a deliberate choice to protect our digital well-being. Thanks to it, we can feel safe and confident while using the internet daily, without constant concern.
How important is Internet Security?
It is fair to say that internet security is invaluable for anyone using the internet. Why? Because it helps keep ourselves and our families safe while browsing the web, where, as noted above, there are many internet security threats, some of them extremely dangerous and harmful. Internet users know the value of having the right security features in place to protect against identity theft, data theft, and computer damage caused by malicious websites and other harmful viruses. We live in a digital world where everybody uses the internet regardless of age, from our kids to our parents; it has simply become an invariable part of our lives. Thanks to the internet we have countless possibilities: reaching loved ones through video calls, finding new friends on social media platforms, or just relaxing and listening to a favorite song. Twenty years ago, few would have believed we would one day have access to Wi-Fi networks everywhere we go, even in the supermarket. On the other hand, there are countless internet threats that can reach our private data and use it against us, so it is our responsibility to protect ourselves as well as possible.

Consider the countless internet security threats that lurk in the digital shadows, waiting for the right moment to strike. Cybercriminals, armed with sophisticated techniques, relentlessly try to steal data (our very digital identities) for their own criminal purposes, and today it is even easier for them, because mobile devices are among their main targets for obtaining the desired information.

A cornerstone of internet security lies in protecting our login credentials, the keys to our digital accounts. By using strong, unique passwords rather than reusing the same password for every account, and by practicing healthy security habits, we fortify the first line of defense against unauthorized access. Additionally, an ad blocker can provide a further layer of protection, shielding us from potentially malicious ads that can lead to a lot of headaches.

Imagine a scenario where your online accounts, for instance your social media profiles, financial platforms, or even your email, fall into the hands of cybercriminals. Hopefully you have never experienced a situation like this, because the consequences can be devastating and irreversible. It is within this context that the true essence of internet security shines, providing a sense of assurance and tranquility in our digital lives. In a world where our interconnectedness grows day by day, the importance of internet security cannot be overstated. It is a shield against the perils that threaten our digital autonomy, with home network security acting as the guardian of our fortress. By embracing the principles of internet security, we follow a path towards a more secure and empowered digital future and keep peace of mind while browsing the web.

What are the benefits of internet security solutions?
There are countless benefits to internet security solutions. In today's digital realm, where computer networks weave intricate webs of interconnectivity, the importance of safeguarding our digital information cannot be overstated; we must take the best possible measures to keep our online data secure, because failing to do so can have serious consequences for our private lives. The internet has become an integral part of everyday life: we all have social media accounts and digital bank accounts, and we store very important private information there that we don't want to fall into the wrong hands. Yet there are ever more threats stalking us online. Internet security solutions are what guide us through the disruptive events we may face on a daily basis. For example, one of the most common kinds of cyberattack arrives through personal email, so strong email security is vital. Let's name the benefits of these security solutions and better understand what they provide.

First and foremost, imagine a fortress safeguarding your most precious digital treasures: your data. In a world where data breaches and cyberattacks are constantly growing in number, data security becomes a non-negotiable priority. Internet security solutions employ an arsenal of strategies, from data loss prevention mechanisms to multi-factor authentication protocols, ensuring that your sensitive information remains locked away from the greedy eyes of cybercriminals. Internet security solutions should scan web traffic for sensitive and protected types of data and prevent them from being exposed outside the organization or your home.

Speaking of guardianship, these solutions provide an unparalleled shield against the ever-growing number of online threats. Picture remote-access hackers attempting to breach your digital accounts and reach your personal information. By installing antivirus software and fortifying your digital boundaries, you can prevent further damage and rest assured that your online accounts remain resistant to malicious attacks and other harmful events.

Now let's turn to convenience: remote accessibility. In a world where mobility is everywhere, the ability to access your digital realm from different, remote locations is a blessing, but the right mobile security remains vital. As we step into this territory, we must act with caution. Here the power of up-to-date security solutions comes to the fore once again, offering a cloak of active protection against potential vulnerabilities that could otherwise lead to plenty of headaches.

Perhaps the biggest benefit of security solutions is the tranquility they provide: the serenity that comes from knowing your digital information is protected. As you explore the vast expanse of the digital realm, every click, interaction, and transaction carries an extra layer of trust. Your online presence becomes a sanctuary where the intricate dance of data and connectivity unfolds in harmony, unmarred by the threats that lurk beyond. So, while you navigate the labyrinth of the internet, you will need strong security measures to help you avoid cyberattacks, data breaches, and identity theft, and always remember that internet security solutions are not mere tools: they are the keepers of your digital legacy and identity.
What is the difference between cyber security and internet security?
There are many ways that businesses, and individuals, can stay protected from online cybersecurity threats. Despite the wide availability of internet safety options, you might wonder why there are still so many victims; the casualties from cyberattacks seem to keep increasing over the years. Why is this so? Perhaps one reason is that, with so many security measures available, most people don't know which one to use.

Internet security refers to the measures taken by an enterprise or organization to secure its computer network and data using both hardware and software systems. Every company or organization that handles a large amount of data has a variety of solutions against cyber threats, with the main purpose of keeping its data secure.

Cybersecurity plays a key role in safeguarding computer networks and data from threats and ensuring that cybercriminals cannot steal sensitive information. It involves implementing network security systems and employing data encryption methods to protect both individuals and businesses. In essence, cybersecurity is about fortifying our systems against malicious attacks, security breaches, and unauthorized access by potential attackers. It aims to enhance system security and prevent disruption or damage. While some vulnerabilities can never be eliminated completely, cybersecurity endeavors to minimize their impact as much as possible.

Is internet security the same as antivirus software?
Allow me to guide you through this question, which holds the keys to staying safe in the infinite web space. Think of your computer as a fortress, a sanctuary where your digital world resides. Internet security is the vigilant guard stationed at the gates, scrutinizing everyone who seeks entry into that fortress. It encompasses a wide array of measures, from robust firewalls to sophisticated encryption protocols, to ensure that unauthorized users remain barred from your digital life. Internet security is the gatekeeper that safeguards your online interactions, preventing potential threats from infiltrating your computer system and web apps.

Antivirus software, on the other hand, is the vigilant protector of your computer's inner sanctum. Just as a knight dons armor before battle, antivirus software shields your computer system from the insidious forces of malicious scripts and software. It stands as a digital guard, ever vigilant, scanning every nook and cranny for signs of malicious software that could compromise your operating system and disrupt your digital tranquility.

Now, to the main question: are internet security and antivirus software the same, or do they differ? While they share a common goal, safeguarding your digital world and identity, they are distinct players in the constant battle of digital protection. Internet security has a broader scope, focusing on securing your entire online presence, while antivirus software zeroes in on the computer system itself, ensuring that malicious programs do not find a foothold.
Consider this scenario: you have installed antivirus software on your computer, a formidable guardian that scans, detects, and neutralizes threats at the very doorstep of your operating system. Without the right broader security measures, however, your digital interactions could still be vulnerable. Think of it as a well-protected fortress with an unlocked back door: your computer system remains secure, yet unauthorized users could still infiltrate your online activities. That doesn't sound good, right? In summary, internet security and antivirus software are complementary forces, each essential to fortifying your digital defenses. While antivirus software guards your computer system, internet security controls access, shielding you from an array of online vulnerabilities. Together, they create a strong fortress, protecting your digital kingdom from potential invaders.

What do you need to know about internet security?
When you are browsing the internet, you are exposed to every kind of cyberthreat. Long gone are the days when attackers focused only on businesses and corporations; as an individual home user, you are just as vulnerable to attack. And if your children use your computer for play or work, you are raising the stakes further. Cybercriminals can outwit even the best of us, and children don't stand a chance against their tactics. So, what do you need to know about internet security? You need to know as much as possible, and this article provides the comprehensive information you will need to arm yourself and keep your computer and data safe.

What are the most common internet security threats?
There are a variety of internet security threats and malware types, including viruses, trojans, ransomware, worms, and phishing attacks.

A computer virus is malicious code that attaches itself to clean files, replicates itself, and tries to infect other clean files. You can inadvertently execute a virus by opening an infected email attachment, running an infected executable file, visiting a compromised website, or clicking on a website ad. Although computer viruses are rare today, representing less than 10% of all malware attacks, they are no less malicious than other security threats.

Trojans are a metaphorical reference to the Trojan Horse of Greek mythology: they disguise themselves as something legitimate and harmless, such as a legitimate application, or hide within one. Trojans act discreetly, opening backdoors that give attackers or other malware variants easy access to systems. A backdoor is a stealthy method of bypassing normal authentication or encryption on a system. It can be used either to secure remote access to a system or to obtain access to privileged information in order to corrupt or steal it.

Ransomware is one of the most dangerous types of malware. Originally designed to take control of a system by locking users out until they pay the cybercriminal a ransom to restore access, modern variants of ransomware will encrypt your data and may even exfiltrate data from your system to increase the attacker's leverage.

Worms are computer programs that copy themselves from one computer to the next. They do not require human interaction to create these copies and can spread rapidly and in great volume.

Phishing is a common attack technique that uses deceptive communications from a seemingly reputable source to gain access to your personal and sensitive information.
Attackers phish for this information using email, instant messages, SMS, and websites. The attacker impersonates a trustworthy organization, such as a bank, government institution, or legitimate business, to exploit your trust and trick you into clicking a malicious link, downloading a malicious attachment (malware), or disclosing confidential information such as personally identifiable information (PII), financial information, or your credentials.

Malware and malvertising
Malvertising, or malicious advertising, is the term for criminally controlled advertisements within internet-connected programs, usually web browsers, that intentionally harm people and businesses with all manner of malware and assorted scams. In other words, malvertising uses what looks like legitimate online advertising to distribute malware and other threats with little to no user interaction required. It is one of the methods malware uses to penetrate your computer or mobile device and spread quickly, causing damage and a lot of headaches for the owner. Malvertising can appear in any advertisement on any site, including the ones you visit as part of your everyday browsing, and it can do serious harm when you least expect it. Malware, or malicious software, is a program or file designed to harm your computer, network, or server. It can be prevented by using proper security methods, such as installing antivirus software and choosing carefully what content you download to your personal devices. You should never underestimate the level of security needed against this kind of threat.

Hacking and remote access
Hacking is the process of identifying and exploiting vulnerabilities in a computer system or network: searching for a weak spot to hit and penetrate with the intention of gaining access to personal or organizational data. While hacking can be performed without malicious intent, it is often perceived negatively because of its association with cybercrime. Anyone who uses a computer connected to the internet needs to be aware of the risks posed by hackers and online predators, because nobody is immune to this scenario. These cybercriminals often employ tactics like phishing scams, spam emails or instant messages, and fake websites to deliver malware that can compromise your computer's security, and their tactics often succeed, so you have to be extremely careful when visiting new websites. If you don't have proper firewall protection in the first place, hackers may attempt to access your information and personal files, and they will succeed if they find a vulnerability in your system or network. They can eavesdrop on your conversations or explore the back end of your personal website, for example. Often, by using fake identities, these predators can manipulate you into revealing personal and financial details, or worse, which can lead to serious consequences. Another dangerous scenario is having your computer hacked remotely, which allows unauthorized individuals to take control of your devices. Typically, attackers use payloads to establish control, often delivered through methods such as social engineering or phishing attacks. Once the payload successfully infiltrates the system, the attack commences, and the consequences for victims can be catastrophic.
Once cybercriminals gain access to your devices, they can do whatever they want with your personal information; in most cases this is tied to criminal purposes or a demand for ransom.

Internet Security Tips: How to Protect Yourself Online
While you cannot stop every cyberattack, there are rules you can follow to mitigate threats and recover more easily if you become a victim.

1. Choose a strong password. Choosing a strong, unique password for every website and application you use is vital for ensuring the highest level of protection. In many cases, a website, app, or online account will give you requirements for creating a password, e.g., it must be at least X characters long and include at least X numbers and X symbols. We recommend at least 15 characters, including uppercase and lowercase letters and some special characters. Never share your passwords with anyone, online or offline. Also, don't use spreadsheets or Word documents to keep track of your passwords; if you are breached, those documents will be available to the attacker. Instead, use a secure password manager to keep track of your passwords. (A small sketch of generating such a password appears at the end of this article.)

2. Multi-factor authentication is your best tactic to prevent many attacks. Multi-factor authentication (MFA) provides a second layer of protection for your digital accounts, above and beyond your password. With MFA, you log on to your online account, but instead of getting immediate access, you must provide additional information, such as a personal identification number (PIN), a one-time verification code, or answers to questions that only you know. In some cases, MFA sends a text message to your mobile phone. Many websites now let you set up MFA because it is the most highly recommended defense for blocking an attacker from hijacking your account. If your password is stolen, the thief will still not be able to access your account, because another verification method is required.

3. Education is the second-best tactic to prevent attacks. Stay abreast of what is happening in the cybersecurity space. There is plenty of information available online about the latest types of attacks, and information is power when it comes to combating cyberthreats. The more you know about prevalent or new attacks, the better decisions you will make about clicking on links, visiting strange websites, opening unexpected emails, or downloading documents.

4. Choose a secure browser, which provides additional features to better stop various cyberattacks while you are browsing the internet. For example, some browsers will display a warning message if you try to visit a site that contains malware. The most secure browsers include Brave Browser, Tor Browser, Firefox Browser, Iridium Browser, Epic Privacy Browser, and GNU IceCat Browser. Google Chrome, Microsoft Edge, Safari, and Opera are less secure, and experts suggest using a virtual private network (VPN) with these browsers for better protection.

5. Use a firewall. It is critical that you have a firewall on your network. A firewall is a network security system that monitors incoming and outgoing traffic between trusted and untrusted networks and blocks suspicious traffic based on security rules. A firewall is your first line of defense for mitigating online cyberattacks.

6. Install software updates as soon as possible. Cybercriminals take advantage of software security gaps, which is one of the reasons why software providers release updates.
Always update your cybersecurity software and other applications as soon as possible to ensure security gaps are closed and your system is protected.

7. Only use secure networks. If you are sitting in a coffee shop or in your doctor's office browsing the internet, you are not on a secure network, which means you are more vulnerable to attack. Always use a secure network regardless of your location, and keep passwords unique and secure as well.

8. Be careful what you click on and download. Think before you click! If you receive a strange email, visit an unfamiliar website, or see an unexpected online advertisement, be careful what you do. Cybercriminals continuously refine their attack strategies by playing on your emotions: creating fear, taking advantage of your curiosity, asking for help, or enticing you into feeling empathy or sympathy.

9. Be sure to create an image backup of your system. If you are attacked, you can lose your data, and the only way to recover is with your backup. In fact, security experts agree that you should always follow this rule: keep your data in three places (a production copy and two backups), across two media, with one backup stored offsite, such as in the cloud.

10. Invest in cyber protection software. Cybersecurity software can protect you from a breach, but it does not fully protect your systems, applications, and data. Cyber protection, by contrast, is an integrated solution that combines cybersecurity, backup, and disaster recovery, ensuring your PC or Mac is secure and protected no matter what happens, whether a malicious attack, data deleted through human error, data corrupted by hardware or software malfunction, or a human-made or natural disaster; in short, any event that causes data loss.

11. Use an identity theft protection service. Identity theft is becoming increasingly sophisticated as scammers use a range of strategies and tools to steal your information. If they succeed, these fraudsters can exploit your identity by opening credit accounts, filing tax returns, or even completely assuming someone's identity. This is where an identity theft protection service comes in. If you fall victim to cybercriminals in this way, identity theft protection companies offer invaluable support: they will guide you through the process of resolving issues and reclaiming your stolen identity, and they will help place fraud alerts on your credit reports, notifying creditors that you may be a victim of identity theft. This additional layer of security ensures that creditors must verify your identity before opening new or existing accounts. This type of protection can save you a lot of money and prevent criminal actions involving your finances and your identity itself, and it gives you peace of mind, knowing you have a shoulder to rely on if such a scenario ever happens.

12. Use VPN software. Installing and using a virtual private network (VPN) is another way to protect yourself online, as all of your internet traffic passes through an encrypted tunnel that is difficult for hackers to break into. It also masks your location by assigning your internet activity a different IP address from your actual one. This is essential when using the internet on public Wi-Fi networks at shopping centers, cafes, airports, and gyms, where Wi-Fi threats abound. Remember that you get what you pay for.
Free or cheap VPNs aren't a good option: they can be provided by cybercriminals intent on stealing your identity, the app itself can be malicious, or it may simply not work at all. In the worst case, such a provider could be actively spying on you and your activities.

Acronis True Image — internet security software for your home
Acronis True Image (formerly Acronis Cyber Protect Home Office) offers everything you need to safeguard your home PC or Mac and back up data against all of today's threats, from disk failures to ransomware attacks. Thanks to its unique integration of backup and cybersecurity in one solution, it saves you time and reduces the cost, complexity, and risk of managing multiple solutions. So don't hesitate to ensure the best protection for you and your business; we are here to protect your most valuable assets, your data and information.

The most reliable, efficient and easy AI-based cyber protection
A Swiss company founded in Singapore in 2003, Acronis has 15 offices worldwide and employees in 50+ countries. Acronis Cyber Protect Cloud is available in 26 languages in 150 countries and is used by over 20,000 service providers to protect over 750,000 businesses.
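As promised under tip 1, here is a minimal, hedged sketch (not part of the original article) of generating a password that follows the guidance above, at least 15 characters with uppercase and lowercase letters, digits and special characters, using Python's standard secrets module; the exact length and character pools are illustrative choices, not a prescription.

```python
# Illustrative password generator following the guidance in tip 1.
# Length and character pools are assumptions, not a prescription.
import secrets
import string

def generate_password(length=20):
    if length < 15:
        raise ValueError("use at least 15 characters")
    pools = [string.ascii_lowercase, string.ascii_uppercase,
             string.digits, "!@#$%^&*()-_=+"]
    # Guarantee at least one character from each pool...
    chars = [secrets.choice(pool) for pool in pools]
    # ...then fill the rest from the combined alphabet.
    alphabet = "".join(pools)
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate_password())  # different on every run; store it in a password manager
```

A password manager can generate equivalent passwords for you; the sketch only illustrates why such passwords are hard to guess.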
https://www.acronis.com/en-us/blog/posts/what-is-internet-security/
Zeolites are a group of minerals consisting of hydrated aluminosilicates. They are made from interlinked tetrahedra of alumina and silica. Zeolites occur naturally but can also be manufactured from raw materials such as aluminium, silica, and kaolin. Zeolite is known as an exceptionally stable compound that is mainly used as a cation exchanger and molecular sieve. Due to its porous molecular structure, zeolite is used as an ion exchanger in water filters and water softeners. It is also used in petroleum refining for separating hydrocarbons, in drying gases and liquids, and in pollution control through selective molecular adsorption. Natural zeolites are used in fertilizers and soil amendment procedures as well. Zeolites have several excellent properties, such as high resistance to oxidation, a high melting point, and high pressure resistance. Moreover, they do not dissolve easily in water or inorganic solvents.

The global zeolite market is predicted to grow at a CAGR of over 2.4% from 2019 to 2024. The market will get a major boost from the water treatment industry, as zeolite is increasingly being used for cleaning wastewater. It is also being used as a refrigeration adsorbent, which will further boost the growth of the zeolite market. Apart from these uses, many commercial detergents contain synthetic zeolites, which increase their washing efficiency. Synthetic zeolites are characterized by a high adsorption capacity for liquid components, especially surfactants, and are mainly used for making compact and super-compact detergents. So growing demand for detergents and increasing awareness about hygiene will drive the growth of the global zeolite market. Zeolite is also used for antimicrobial protection, and the increasing use of zeolite antiseptics is expected to raise demand in the coming years. In the petroleum refining of products like diesel, gasoline, and other fuels, zeolite is used as a catalyst, so rising refining output will also affect the zeolite market positively. Stringent environmental protection regulations enacted by governments of various countries will give a major boost to the global zeolite market, and the increasing use of natural zeolites by the agricultural sector for trapping heavy metals will also increase global demand.

Geographically, the Asia-Pacific region dominates the global zeolite market; it is both the largest and the fastest-growing region, which can be explained by the presence of numerous water treatment and detergent industries there. Within Asia-Pacific, the largest consumption has been recorded in China and India. Major companies operating in the global zeolites market include Albemarle Corporation, BASF SE, Honeywell International, Inc., Arkema Group, Clariant AG, W.R. Grace & Co., TOSOH Corporation, Union Showa KK, Zeochem AG, Zeolyst International, Huiying Chemical Industry, KNT Group, Chemiewerk Bad Kostritz GMBH, National Aluminium Company Limited, and PQ Corporation.
https://www.alltheresearch.com/blog/increasing-demand-for-detergents-is-to-drive-the-global-zeolites-market
The Domain Name System (DNS) is the Internet's phone book, allowing people to remember www.information-age.com, for example, rather than an IP address. Indeed, the creation of this vast look-up table contributed hugely towards the widespread adoption of the Internet outside its original circle of academic users and enthusiasts. But that well-ordered world was thrown into confusion last month when a vulnerability that opened up the possibility of outsiders gaining control over parts of the DNS phone book came to light at the notorious Black Hat security conference.

Dan Kaminsky, director of penetration testing for security services company IOActive, who had discovered the vulnerability some weeks earlier, had delayed revealing the specifics until the conference in order to encourage the owners of DNS servers to patch their systems. But when he did, it was standing room only at the conference – even Kaminsky's grandmother was in the audience.

The vulnerability took advantage of 'cache poisoning', whereby a legitimate address in the DNS cache server at an ISP (or a large company) is replaced with a counterfeit address capable of redirecting millions of unsuspecting surfers to rogue sites. Cache servers essentially serve as traffic control, preventing the Internet's authoritative DNS servers from being overrun with repeat requests. Poisoning addresses in the DNS cache server of a large ISP is a potential gold mine for phishers (especially those targeting online banking), and also for mischief-makers targeting popular sites such as Google and Microsoft. Businesses have had traffic aimed at their websites redirected to a different set of pages – often pages filled with advertising or part of a phishing scam. As a result, they have suffered damage to their reputations through no fault of their own. DNS servers also store lists of email servers, and poisoning these addresses allows attackers to intercept mail or even replace legitimate attachments with malicious files.

Dr Paul Mockapetris invented the DNS system in conjunction with Jon Postel. The concept of cache poisoning arose as long ago as 1988, during a class he held on DNS. "People figured out a way to send stuff to DNS servers, but [back then] they were mostly professors at universities," he says. The clever exploit that Kaminsky had discovered, he explains, "was a way to attack a server continually".

"With the old way, you had to guess a 16-bit value – one chance in 64,000 – and then you had to wait, because there was only a small window in which the server would listen. But Dan figured out how to pick values that [the servers] knew not to exist, keep it listening, take a bunch of shots at it and poison the data."

By the time Kaminsky appeared at the Black Hat conference, a sizeable proportion of Internet users had their DNS cache servers patched. 'UDP source port randomisation' was a quick fix that took that one in 64,000 chance up to one in four billion. Mockapetris is critical of Kaminsky for his "theatrics", but concedes that "he needed to attract attention". "I would have done things differently [and] been more circumspect. Dan had a bunch of not-so-good choices he had to pick from." Network naming and addressing technology provider Nominum, of which Mockapetris is chief scientist and chairman, worked with its carrier and Internet service provider customers to protect about half the Internet's users from the vulnerability Kaminsky outlined.
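To make the "one in 64,000" versus "one in four billion" figures concrete, here is a small illustrative calculation (not from the article): a 16-bit transaction ID gives 2^16 = 65,536 possible values (rounded in the quote to 64,000), and randomising the UDP source port adds roughly another 16 bits, giving on the order of 2^32 combinations an off-path attacker must guess. The number of spoofed replies per race is an assumption for the example.

```python
# Rough arithmetic behind the "one in 64,000" vs "one in four billion" figures.
txid_space = 2 ** 16                       # 16-bit DNS transaction ID -> 65,536 values
port_space = 2 ** 16                       # ~16 more bits from a randomised UDP source port
combined_space = txid_space * port_space   # ~4.29 billion combinations

print(f"TXID only:            1 in {txid_space:,}")
print(f"TXID + source port:   1 in {combined_space:,}")

# If an off-path attacker can squeeze, say, 200 spoofed replies into the window
# before the real answer arrives (an illustrative assumption), the chance that
# any single race poisons the cache is roughly:
guesses_per_race = 200
print(f"Per-race success, TXID only: {guesses_per_race / txid_space:.4%}")
print(f"Per-race success, with SPR:  {guesses_per_race / combined_space:.6%}")
```

The sketch shows why source port randomisation raised the bar so sharply, and also why the article's experts still regard it as a brute-force defence rather than a cryptographic one.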
But while ISPs were able to apply the patches quickly, Mockapetris remains doubtful that the source port randomisation (SPR) protections are enough to stop a determined attacker. "Just relying on a brute force [defence] is not enough. A gigabit network is common in many enterprises, and two servers mounting an attack on one could probably crack [the protections] in under 10 hours," he explains.

Indeed, a day after the Black Hat event a group of security researchers claimed to have defeated the SPR protections using a brute force attack, albeit over a rare (and expensive) 10 gigabit connection. Suspecting that 'good enough' protection might not go down well with many enterprises, and perhaps noticing a market opportunity, Nominum released a new version of its Vantio DNS server platform that negates the chance of a brute force attack. As with anything deep in the guts of the Internet, the specifics are highly technical, "but if I know I'm under attack I can increase my level of suspicion," Mockapetris says, by way of explanation.

For most Internet users, the problem has been resolved at an ISP level. But that doesn't stop Mockapetris from lamenting how different things are from when he came up "with just the first floor" of the DNS architecture. "In 25 years I've never spent that much time thinking how to attack the system. In the old days you would give anyone access – now you have to worry about things like denial of service attacks. The population of the net has changed," he says. "I think it will take a while for people who like to attack systems to get bored with this one," he adds sadly.
https://www.information-age.com/domain-name-and-shame-tactics-24544/
Data has become a ubiquitous commodity in today's world, the importance of which is felt by companies, consumers, and countries alike. In the last two decades, the global technological race has pushed businesses to go international, propelling the movement of data between servers across national borders. The expanding customer base and widening supply chain also require a global workforce and infrastructure for international operations. Moreover, technologies like cloud computing, IoT, and data analytics have brought an upsurge in the collection and transfer of data. With globalization becoming a necessity in the information industry, cross-border data flow is essential and unavoidable in today's technological landscape.

Though this movement of data across borders is essential for businesses and consumers, it has also raised concerns around data privacy and security. This was highlighted in the July 2020 Schrems II judgment by the CJEU, which deemed the transfer of personal data by EU companies to the US on the basis of the EU-US Privacy Shield framework unlawful. The ruling invalidated the Privacy Shield, citing potential interference from US surveillance agencies. As a result, companies performing cross-border data transfers based on standard contractual clauses (SCCs) will be subject to stricter requirements around data protection. In such a regulatory environment, companies need to be extremely vigilant about the laws and stipulations related to data in all the countries where they operate. Regulations like the EU's General Data Protection Regulation (GDPR), Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), and the California Consumer Privacy Act (CCPA), among others, have made companies refine their strategy around cross-border data. To restrict the increasing volume of cross-border data flows, governments are introducing laws and regulations that require companies to follow a data localization approach.

What is Data Localization?
Data localization limits the flow of data to within the geographic borders of the country where the data was created. It can include restricting, controlling, or banning the international transfer of data with the objective of safeguarding citizen information. Seamless transfer of data provides uninterrupted access to information and services irrespective of the user's location, so restricting the movement of data introduces major challenges for businesses across segments, including international commerce, technology, health and safety, and organizations (typically non-profits) focused on social welfare.

Several countries have implemented data localization laws in the past couple of years, and many are likely to follow suit. In 2016, China enacted its Cybersecurity Law, which mandates operators and businesses dealing with critical information infrastructure to store personal information and important data in China. Similarly, Russia's Federal Law No. 242-FZ, which came into effect in 2015, requires entities with Russian customers to physically store their data within Russia. Data localization laws can compel businesses to process data locally, store a copy of the data locally, or seek additional consent for data transfer requests. These stipulations can put foreign companies at a disadvantage, as they make data transfers harder and add to overall costs. Even a small, solely internet-based company will have to develop the necessary infrastructure to meet a region's data regulations, which is especially challenging.
The EU regulations have galvanized several countries into enforcing similar laws, the violation of which can negatively impact one's business. This is reflected in India's decision to bar American Express and Diners Club International from adding new domestic customers from May 1, 2021. India's 'Storage of Payment System Data' directive mandates payment system providers to store data related to transactions, payments, instructions, and customer information in systems within India. Similar lapses can hamper a company's operations, resulting in significant loss of time and money. Having an effective governance program can help you understand and comply with regional laws and regulations.

In light of the EU regulations, organizations are expected to adopt the latest and more robust techniques to meet the existing and upcoming guidelines around data localization. Several tech giants are embracing an encrypt-everything strategy. Employing a crypto-agile network will provide protection and security for all the data being transmitted across the internet while also remaining compliant with regional laws and statutes.

Data encryption is all the more essential right now, considering the hybrid workplace and remote working models, which make it difficult to keep sensitive data under the company's control and within regional borders. Moreover, employees using external devices and third-party web applications bring new challenges in protecting data and preventing data loss.

Data encryption protects our passwords, credit card details, technological inventions, and all types of confidential information. Without encryption, data is vulnerable to exploitation and illegal use. Encryption involves translating readable data into non-readable data (ciphertext) so that it can be decoded only using the decryption key. It is performed to prevent unauthorized access to data while it is being transmitted and while it is at rest. Data protection solutions can encrypt employee emails, devices, and data. The modern encryption algorithms devised for data protection provide confidentiality, authentication, integrity, and non-repudiation to support key security initiatives.

Encryption can be considered a suitable, if interim, solution for now. In September 2020, the Data Protection Authority of the German state of Baden-Württemberg recognized end-to-end encryption as an acceptable measure for providing additional protection to data. The authority observed that encryption can provide an adequate level of safety if the encryption keys are accessible only to the data exporter, and the data cannot be decrypted even by intelligence services. However, encryption may not be an effective solution for every scenario. For instance, end-to-end encryption will not ensure privacy if data is outsourced to an entity (outside of the EEA) that is contracted to process the personal data in an intelligible manner.
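To ground the terminology, the sketch below uses Python's widely used cryptography package (a tooling assumption for illustration, not something prescribed by the regulators discussed here) to encrypt a record with a symmetric key before it leaves the exporter's environment. It is a minimal sketch rather than a compliance recipe; key management, which the Baden-Württemberg guidance stresses, is the hard part.

```python
# Minimal sketch: symmetric encryption of a record before transfer.
# Assumes the third-party "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key must stay with the data exporter (e.g., in an HSM or KMS);
# here it is generated inline purely for illustration.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"customer_email=jane.doe@example.com"

ciphertext = fernet.encrypt(record)      # what actually crosses the border
plaintext = fernet.decrypt(ciphertext)   # only possible for the key holder

assert plaintext == record
print(ciphertext[:32], b"...")
```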
Are Organizations Prepared Enough?

The CJEU's Schrems II decision, handed down in mid-2020, has brought about a dramatic shift in the way data is transferred outside of the EU. The IAPP-FTI Consulting Privacy Governance Report 2020 found that of the 65% of respondents who transfer data outside of the EU, 55% used the now-invalid EU-US Privacy Shield as the transfer mechanism. However, 88% of respondents rely on SCCs, and this number is expected to grow as companies using the EU-US Privacy Shield shift towards other mechanisms to enable data transfers. About 75% of firms indicated that they plan to switch to SCCs, while 45% to 53% would add additional contract-based, technical-based, or policy-based safeguards.

Most companies find it challenging to comply with data and privacy regulations due to the difficulty of identifying cross-border data flows. This was highlighted by one instance in which a company failed to recognize the movement of data outside of the EU. In March 2021, the Bavarian DPA issued a notice to a company that used Mailchimp to send newsletters to its German customers. It concluded that, as Mailchimp is based in the US, the transfer of email addresses from Germany to the US was unlawful under the Schrems II ruling. Such cases are a reminder that both big corporations with the requisite tools and infrastructure and small companies can fall short in their oversight procedures due to inaccurate knowledge of how their data flows.

Companies need to be aware of existing and upcoming regulations and fully understand their impact on the business in order to develop appropriate remedies. It is crucial to prepare an inventory tracking the transfer of data across various servers and third-party vendors. By identifying and mapping all their data flows, companies can get a clear picture of all their cross-border activities.

To begin with, organizations that keep current and comprehensive records of processing activities (RoPAs) can perform a detailed analysis of cross-border exposure and remediation. Reassessing the accuracy of their RoPAs in accordance with the new requirements and managing them for the long term will enhance privacy management and help develop relevant remediation strategies. Organizations should also conduct regular transfer impact assessments (TIAs) to monitor the movement of data. Any actions performed during this stage should be documented to identify key risk areas and map out suitable steps for remediation. Mapping these flows in the data map as you conduct the RoPAs or TIAs helps build a clear understanding of how data moves. By recording and identifying all the transactions, companies can determine which actions are prohibited under each country's regulations. Necessary policies can then be formulated based on these findings to allow cross-border data flow without legal hurdles.

Meru Data offers a wide range of applications and services to automate, streamline, and secure your information governance (IG) processes. We offer simplified solutions for data mapping, retention, disposition, and compliance with regulations like GDPR and CCPA. Our flexible and business-centric IG programs provide visibility into information flows within and outside the organization, making it easier to track and monitor the movement of data. By enabling collaboration between legal, privacy, IT, and business users, our systems allow the successful implementation of governance programs.
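As a concrete illustration of the data-flow inventory idea discussed above, here is a small, hypothetical Python sketch: it walks a list of recorded transfers (the kind of entries a RoPA might hold) and flags destinations outside the EEA for a transfer impact assessment. The country list, field names, and records are invented for the example.

```python
# Hypothetical RoPA-style records; field names and values are illustrative only.
transfers = [
    {"system": "newsletter-tool", "data": "email addresses", "destination": "US"},
    {"system": "billing",         "data": "invoices",        "destination": "DE"},
    {"system": "support-desk",    "data": "tickets",         "destination": "IN"},
]

# Simplified: a real check would use the full EEA list plus adequacy decisions.
EEA = {"AT", "BE", "DE", "FR", "IE", "NL", "SE"}  # truncated for brevity

def needs_tia(record):
    """Flag transfers leaving the EEA so they get a transfer impact assessment."""
    return record["destination"] not in EEA

for t in transfers:
    status = "REVIEW (cross-border, run TIA)" if needs_tia(t) else "ok"
    print(f'{t["system"]:15} -> {t["destination"]}: {status}')
```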
Despite the strict regulations placed on data in the health care market, many hospitals and doctors are finding efficiencies with the use of wireless technologies. The latest in cloud-based systems and new Wi-Fi standards is helping even complex hospital environments improve patient care and staff workflows.

Wireless RFID tags help track the location of nurses and other staff in some hospitals, enabling managers to improve processes and ensure efficient use of their skilled workers. RFID tags placed on equipment such as IV pumps and monitors ensure equipment is where it should be, when it should be there. And later, tracking equipment-usage data allows hospitals to find inefficiencies in their resource usage and correct them.

The book "Wi-Fi Enabled Healthcare" provides many other examples of Wi-Fi usage in health care settings. It outlines design issues and details challenges that health care businesses face with the usage of wireless Internet and devices. The book covers topics such as:

- Brief history of Wi-Fi
- Wireless architecture considerations
- Preparation for a wireless site survey
- Wi-Fi security
- Wireless guest services

In our downloads area, chapter 6 of this publication is available for reading. This chapter covers mobile medical devices, including testing of devices and the network, failover and redundancy issues, and various devices that can use Wi-Fi, including:

- Mobile X-ray machines
- Medication dispensing systems
- IV pumps
- Ultrasound devices
- Hemodialysis machines

The use of these plus tablets, laptops, and smartphones enables real-time connectivity among hospital staff. At any time, equipment can be located and reallocated when necessary. Staff can be moved to critical sites in an emergency, and they can see, at a glance, where vital equipment is located. Through all of this, patient care can be made quicker and more effective.

The chapter continually reinforces the need for security and redundancy. Systems must be kept running and secure through all areas of the hospital or health care site:

When it comes to patient data, securing medical devices and their data is vital to providing safe and effective healthcare. As Wi-Fi is growing, the risks associated with the technology are inherent and are becoming more lucrative for hackers to try and take advantage of. Some of these risks are associated with security, availability, quality of service (QoS), and privacy. As the healthcare industry continues to expand and enter the ever-growing wireless space, including patient monitoring equipment, physicians' PDAs and laptops, and wireless-enabled medical devices, the risks associated with their use also rise. Some healthcare organizations have stayed ahead by deploying secured wireless networks for their medical devices. They often have to tweak their network to accommodate nonstandard or legacy medical devices.

The book is an educational tool for any health care organization looking to become more Wi-Fi friendly. It is also helpful for current hospitals or doctors' offices where the IT staff is considering updates or ways to further use wireless technologies to make workflows more efficient.
Data governance is the framework that guides an organization's approach to collecting, storing, processing, and securing its data. Data governance protocols allow companies to better adhere to business and regulatory rules, protect their data, and enable agile data operations that deliver greater business benefits. However, approaches to data governance can vary considerably. In this blog, we'll look at the two that are most commonly applied: passive, or traditional, data governance and active data governance. In short, here's what they mean:

- Passive data governance: With this approach, data is first input by users. Then, business and governance rules are applied to the data afterwards. This includes cleaning operations, identifying and removing duplicates, and creating exceptions.
- Active data governance: Here the goal is to assess and verify data quality before it is input, removing the strain on resources later in the life cycle. It uses AI or machine learning to improve data cataloging, and takes a more proactive stance of assessing data quality at the point of collection rather than simply seeking to ingest as large a quantity as possible.

The choice between these approaches can have a significant impact on an organization's success in achieving its data governance goals. The differences between them delineate how data governance is enforced at critical stages of the data lifecycle.

Differences between passive vs. active data governance

The differences between passive and active data governance frameworks center on whether data governance is performed retroactively on existing data or proactively along its lifecycle. There is not necessarily a right or wrong approach, but active data governance is geared towards creating greater agility in DataOps. Below are the major differences between the two approaches.

With traditional data governance, a quantity-over-quality approach is often taken, with the presumption that issues with the data will be addressed later on. Active data governance seeks to assess data before it enters the system through a variety of measures. These include working more closely with users on defining proper data collection and deploying automated systems to intelligently identify data quality issues being replicated in collection.

Data rules and dictionaries

Data governance relies on pre-decided rules being consistently applied across all data operations. With passive data governance, this happens in the form of manually updated data dictionaries, terminology glossaries, and data catalogs. Active data governance also applies these governance rules but integrates technologies that allow rule repositories to be built and optimized automatically. This ensures their consistent application across all the organization's data.

Passive data governance seeks to identify and correct problems with data quality in the system. While this is a positive undertaking, it is still reactive rather than proactive. An active data governance framework seeks to identify how a data quality issue arose in the first place and track it from where it is found to where it occurred, so that the issue won't be consistently replicated. This may take more time than simply fixing the immediate mistakes but will deliver consistent gains over time.
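To make the proactive side concrete, the sketch below applies a couple of invented validation rules to records at the point of ingestion, rejecting or flagging them instead of cleaning them up later. The rules, field names, and data are assumptions for the example, not part of any particular governance product.

```python
# Hypothetical ingestion-time checks; rules and fields are illustrative only.
RULES = {
    "email":  lambda v: isinstance(v, str) and "@" in v,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate(record):
    """Return a list of rule violations; an empty list means the record may be ingested."""
    return [field for field, check in RULES.items()
            if field not in record or not check(record[field])]

incoming = [
    {"email": "a@example.com", "amount": 120.0},
    {"email": "not-an-email",  "amount": -5},
]

for rec in incoming:
    problems = validate(rec)
    print("ingest" if not problems else f"reject (violates: {', '.join(problems)})", rec)
```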
Modern data storage mostly consists of hybrid cloud and on-site or multi-cloud setups. Passive data governance applies relevant governance rules within these siloes, meaning that data quality may still be high, but at the cost of both extra work and data duplication. In this instance, the goal of active data governance is to provide a comprehensive overview for administrators across the entirety of their fragmented ecosystem. Only with complete end-to-end visibility can uniform application of data governance protocols be ensured.

Data auditing and tracking

Siloes and a lack of uniform, effective data cataloging prevent the formation of complete data lineages. This can have a major impact on operations such as composing datasets for analysis as well as data auditing, an essential part of data safety and regulatory compliance. Under a passive data governance model, the lack of prior efforts to prevent duplication or ineffective cataloging means that auditing costs more in terms of time and resources. Active data governance attempts to create a holistic plan for the entire data lifecycle by incorporating automated data tracking and lineage tools, along with more effective cataloging processes. This means that both locating data and auditing it are already supported by the pre-planning and ongoing work of the whole system.

Active data governance is agile

Data governance frameworks are critical for enabling enterprises to maintain consistency in data operations, ensuring regulatory compliance and business returns. In data governance, however, there are two different approaches: passive and active data governance. Passive data governance takes a retroactive and reactive approach, preferring to ingest all data possible and then apply business rules to the data once it is held. Active data governance, on the other hand, seeks to ensure data quality and effective cataloging right from the beginning of the lifecycle.

Active data governance looks to create agile data operations and give end-to-end data visibility to administrators. One tool that can enable this is the virtualized data platform. By creating a virtualized data layer over all of an organization's data assets, governance protocols can be proactively applied to all data, wherever it is. The integration of automated tools also allows for uniform cataloging and application of business rules, while simultaneously reducing strain on DataOps resources.

About Prateek Panda

Prateek Panda is Director of Marketing at Intertrust Technologies and leads global marketing for Intertrust's device identity solutions. His expertise in product marketing and product management stems from his experience as the founder of a cybersecurity company with products in the mobile application security space.
Let's learn about Clipboard settings on Windows, how to clear Clipboard data, and the related Group Policy settings. Windows Clipboard is a handy feature that lets users store data temporarily, and it enhances your experience while you are using your devices. This post shows how to enable the Clipboard and its Group Policy settings on a Windows 11 PC. Different methods of clearing clipboard data are available in Windows.

The Clipboard is an area that stores data temporarily. The Clipboard on a Windows PC allows users to copy and paste text, images, and more. You can also copy data from one PC to another with the cloud-based Clipboard. You can copy text or images from anywhere into the Windows Clipboard; copied items can be pinned so they are quick to find and are kept when you clear the clipboard history. Windows Clipboard is the easiest and simplest way to copy text and images on your PC.

What are the Features of Windows Clipboard on Windows PC?

Windows Clipboard offers many features and lets users copy content without help from other applications. The main features of Windows Clipboard are:

1. Easy copying of text and images
2. Built-in emojis, GIFs, kaomoji, and symbols
3. Sharing content across cloud-connected devices
4. Easy clearing of clipboard history
5. Pinning of frequently used text or images

How to Use Windows Clipboard on Windows 11 PC

You can copy content from anywhere into the Windows Clipboard, and it can hold multiple items at once. You can reach the Clipboard settings as follows:

- Select Settings from the Start menu
- Select the System tab on the Settings page
- Select the Clipboard option under System

Copied content is added to the Windows Clipboard automatically, so you can view it later in the clipboard history. Cut content is added to the Clipboard as well.

Clipboard history lets you view previously copied items on a Windows 11 PC. You can save multiple items to your Clipboard and view them at any time until they are cleared from the history. Clipboard history is disabled on Windows 11 by default; you can enable it in the Clipboard settings by turning the toggle on.

Windows Clipboard can also be opened with a keyboard shortcut:

- Press Windows key + V on your Windows 11 PC to open the Windows Clipboard

The Home Screen of Windows Clipboard

Windows Clipboard is a built-in feature of Windows 11. You can cut, copy, and paste content with it, and open it at any time by pressing Windows + V. The table below describes the options on the Windows Clipboard home screen.

| Home screen option | Used to |
| --- | --- |
| Most recently used | Shows the recently used items |
| Emoji | Shows different emojis |
| GIF | Offers different types of GIFs |
| Kaomoji | Provides special emoticons such as classic ASCII emoticons |
| Symbols | Shows general punctuation, currency symbols, etc. |
| Clipboard history | Shows the clipboard history |

How to Share Clipboard Data Across Devices

Windows Clipboard also allows you to share your Clipboard data across devices.
You can paste text on your other devices, which helps save time. The steps below show how to share clipboard data across devices:

- Go to Settings > System > Clipboard
- Select the Share across devices option in the Clipboard settings
- Click the Get started option in the window that appears
- Click the Get started button to sync across devices

How to Clear Clipboard Data in Windows 11 PC

Windows Clipboard allows users to clear the clipboard history easily, which makes room for new items. There are two methods to clear the clipboard history:

- Clearing history through Windows Settings
- Clearing history through the keyboard shortcut

1. Clearing History through Windows Settings

You can clear the Windows Clipboard history through the Settings app, which removes all of the clipboard history with a single click:

- Go to Settings > System > Clipboard
- Select the Clear clipboard data option
- Click the Clear button

2. Clearing History through the Keyboard Shortcut

Windows Clipboard can also be accessed through a keyboard shortcut, which is the quickest way to reach it. From there you can clear single or multiple items in the clipboard history:

- Clearing a single item in the clipboard history
- Clearing multiple items in the clipboard history

Clearing a Single Item in Clipboard History Through the Keyboard Shortcut

You can clear one item at a time from the clipboard history:

- Press Windows + V to open Windows Clipboard
- Click the three dots (See more) option on the item
- Click the Delete option

Clearing Multiple Items in Clipboard History Through the Keyboard Shortcut

Windows Clipboard also lets you clear all items in the clipboard history with a single click, which saves time. You can pin items that you want to keep available all the time:

- Press Windows + V to open Windows Clipboard
- Select the Clear all option to clear the clipboard history

Privacy Statement on Windows Clipboard

Microsoft's privacy statement covers users' data on Windows Clipboard. Microsoft processes users' data for various purposes and collects it through your interactions with Microsoft and its products. The list below shows how Microsoft uses the data it collects:

- To improve and develop Microsoft products
- To personalize Microsoft products
- To make recommendations
- To advertise and market to users through promotional communications and targeted advertising
- To provide Microsoft products, including updating, securing, and troubleshooting them

Related Links in Windows Clipboard

The Related links section in Windows Clipboard points to further Clipboard features, including Get help with clipboard and Seamlessly transfer content between your devices.
Get Help With Clipboard In Windows 11 PC

The Get help with clipboard option in Windows 11 helps users understand how the Clipboard works. Users can copy text or images from anywhere, and the content is automatically added to the Windows Clipboard. The Clipboard also lets users pin items they want to use all the time, and the Get help option assists with problems that come up while using it. You can select the Get help with clipboard option from the Related links section.

Seamlessly Transfer Content Between Your Devices

This option lets users copy and paste content to synced devices. You can copy content and paste it into a secure folder, but Windows Clipboard does not allow copying content out of a secure folder on the device. You can select Seamlessly transfer content between your devices from the Related links section.

Group Policy Settings for Windows Clipboard in Windows 11

You can find the Group Policy settings for Windows Clipboard in Windows 11 as follows. Press the Windows + R keyboard shortcut, type GPEDIT.MSC in the Run box, and click OK. You can also open the Run command from the Start menu:

- Right-click the Start menu
- Select Run from the context menu
- Type GPEDIT.MSC in the Run box
- Select Computer Configuration in the Group Policy Editor
- Navigate to Computer Configuration > Administrative Templates > System > OS Policies

The OS Policies node includes several settings for managing Windows Clipboard on a Windows 11 PC. The table below shows the settings available in the Group Policy Editor.

| Setting | State | Comment |
| --- | --- | --- |
| Allow Clipboard History | Not configured | No |
| Allow Clipboard synchronization across devices | Not configured | No |
| Enable Activity Feed | Not configured | No |
| Allow publishing of User Activities | Not configured | No |
| Allow upload of User Activities | Not configured | No |

Allow Clipboard History in Group Policy Editor

Group Policy Editor controls whether clipboard history is allowed on a Windows 11 PC. By default, the Allow Clipboard History option is Not configured. You can Enable or Disable the Allow Clipboard History option from its settings window.

- This policy setting determines whether the history of Clipboard contents can be stored in memory.
- If you Enable this policy setting, the history of Clipboard contents is allowed to be stored.
- If you Disable this policy setting, the history of Clipboard contents is not allowed to be stored.
- The policy change takes effect immediately.

Allow Clipboard Synchronization Across Devices

Group Policy Editor also controls Clipboard synchronization across devices on a Windows 11 PC. By default, the Allow Clipboard synchronization across devices option is Not configured. You can Enable it from its settings window. (A short script for checking the registry values behind these settings follows these notes.)

- This policy setting determines whether Clipboard contents can be synchronized across devices.
- If you Enable this policy setting, Clipboard contents are allowed to be synchronized across devices logged in under the same Microsoft account or Azure AD account.
- If you Disable this policy setting, Clipboard contents cannot be shared with other devices.
- The policy change takes effect immediately.
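For administrators who prefer to verify these settings programmatically, the short Python sketch below reads the registry values that the clipboard setting and policies are commonly documented to use (EnableClipboardHistory under the current user hive, AllowClipboardHistory and AllowCrossDeviceClipboard under the policy hive). Treat the paths and value names as assumptions to verify on your own Windows 11 build, not as guaranteed behavior.

```python
# Sketch: read clipboard-related registry values on Windows 11.
# The paths and value names below are the commonly documented locations;
# verify them on your own build before relying on this.
import winreg

def read_dword(hive, path, name):
    """Return the DWORD value if present, otherwise None."""
    try:
        with winreg.OpenKey(hive, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

user_setting = read_dword(winreg.HKEY_CURRENT_USER,
                          r"Software\Microsoft\Clipboard", "EnableClipboardHistory")
policy_history = read_dword(winreg.HKEY_LOCAL_MACHINE,
                            r"SOFTWARE\Policies\Microsoft\Windows\System",
                            "AllowClipboardHistory")
policy_sync = read_dword(winreg.HKEY_LOCAL_MACHINE,
                         r"SOFTWARE\Policies\Microsoft\Windows\System",
                         "AllowCrossDeviceClipboard")

print("User clipboard history setting:", user_setting)
print("Policy - Allow Clipboard History:", policy_history)
print("Policy - Allow sync across devices:", policy_sync)
```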
You can also Disable Allow Clipboard synchronization across devices on a Windows 11 PC. Select the Allow Clipboard synchronization across devices option under OS Policies in the Group Policy Editor and click the Disabled button in its settings window.

We hope this information on Clipboard settings on Windows, clearing Clipboard data, and the related Group Policy settings is helpful.

Gopika S Nair is a computer enthusiast. She loves writing on Windows 11 and related technologies. She is here to share quick tips and tricks with Windows 11 or Windows 10 users.
Engineers at the University of Pennsylvania unveiled a new technology developed to enable future 6G networks. The innovation is a single, adjustable filter for mobile devices that prevents interference when using the FR3 band. The filter is about the size of a quarter and propagates a magnetic spin wave. To prevent signal interference between different bands in cellular communication, an average smartphone may contain around 100 filters; this single filter is designed to be tunable instead. "Being tunable is going to be really important because at these higher frequencies you may not always have a dedicated block of spectrum just for commercial use," said Troy Olsson, Associate Professor in Electrical and Systems Engineering (ESE) at Penn Engineering, in a blog post. The engineers will present the technology at the International Microwave Symposium next month.
NASA and USGS referred several suspicious events in 2007 and 2008 to the Pentagon, which investigates interference with satellite operations.

China may have been flaunting its scientific capabilities by meddling with U.S. Earth observation satellites in past years, according to space and computer security experts. Two unusual incidents involving signals targeting a U.S. Geological Survey satellite in 2007 and 2008 were referred to the Defense Department for investigation, USGS officials said Monday. NASA also experienced two "suspicious events" with a Terra observational satellite in 2008, officials at the space agency confirmed.

An annual report from the U.S.-China Economic and Security Review Commission slated for release Nov. 16 is expected to characterize the events as successful interferences that may be linked to the Chinese government.

"I would say they were demonstrating the science and technology to be able to see what they could gain from it," said Charles Vick, a senior analyst at GlobalSecurity.org who has been briefed on other government reports about China's cyber skills. "To a degree one would think that [getting caught] was part of the mentality. It's a warning. We could do this and a few other things."

A draft of the congressionally established commission's report stated that, with access to a satellite's controls, "opportunities may also exist to reconnoiter or compromise other terrestrial or space-based networks used by the satellite."

Retired Air Force Maj. Gen. Dale W. Meyerrose, the first chief information officer for the Office of the Director of National Intelligence, said the incidents may have been accidents, but even so, they are serious in that whoever was responsible could one day turn against the United States. Also, China likely was paying attention to the exploit and learned from it, said Meyerrose, now a vice president at government contractor Harris Corp. who directs the firm's cybersecurity and information technology divisions.

China is scheduled to launch an unmanned spacecraft Tuesday, according to the country's government-controlled English-language newspaper China Daily.

Agencies confirm incidents

The Landsat-7 spacecraft encountered "anomalous radio frequency events," USGS spokesman Jon Campbell confirmed. The satellite provides the public with free imagery of the earth's surface for research purposes, including global change studies. "USGS provided information about these events and cooperated fully with the Department of Defense, which has responsibility for the investigation of the source of the signals," he said.

NASA spokesman Trent J. Perrotto said that after the Terra spacecraft incidents, the space agency also notified the Pentagon, which, he said, is responsible for investigating any attempted interference with satellite operations. Terra collects climate and environmental data for scientific investigations.

The commission said the interferences could pose a threat if exerted against satellites involved with more sensitive missions. "Access to a satellite's controls could allow an attacker to damage or destroy the satellite," the draft stated. "The attacker could also deny or degrade as well as forge or otherwise manipulate the satellite's transmission."

The 2007 Landsat-7 incident came to light only following a similar episode in 2008, according to the commission's draft report. With the Terra satellite, the responsible party completed the requisite steps to command the spacecraft but did not issue commands.
The commission's analysis of the incidents was first reported by Bloomberg. USGS and NASA officials said the suspicious episodes did not result in an outside party taking command of the satellites, manipulating data, or extracting information from their equipment. Campbell added, "the analytical aspects of cybersecurity in space -- determining the precise location, source and possible motive behind these signals -- is not our mission," referring questions about detection of the aberrances to Defense officials.

One vulnerability that reportedly may have opened the door to outsiders was a public Internet connection at the satellites' ground station in Norway.

Lt. Col. April Cunningham, a Pentagon spokeswoman, said, "we are monitoring China's development of counterspace capabilities, and improving our space situational awareness and ability to operate in a degraded environment. However, our concern here is not focused on only one country." She did not respond to a request for comment on the investigation. Perrotto said he could not discuss additional details regarding the attempted intrusions.

Both agencies said their satellite operations and associated systems are safe and secure. NASA has since created a working group to initiate an agencywide space protection program, Perrotto said.

George Smith, a senior fellow at GlobalSecurity.org, said he would be surprised if the Chinese government was behind such sloppy execution, speculating that this may have been practice for a more aggressive attack. "It would seem unusual to me that they would fiddle with satellites -- which gets up the United States' antennae -- and then get caught with it," he said. "That doesn't rule out that this was a nation state doing a test run."

Brendan Curry, vice president of Washington operations at the Space Foundation, an advocacy group, also suspected China may have viewed the satellites as a fairly innocuous environment for experimenting with extraterrestrial hacking. As to why the government is making these sensitive events public now, Smith pointed to the federal government's push for additional cyber defense funding.

This is only the latest in a string of cyber intrusions widely blamed on China. McAfee investigators reported this year that during a targeted five-year operation, one specific entity penetrated the computers of more than 70 global organizations, including six federal agencies, 13 defense contractors and two computer security firms. The researchers stopped short of attributing the infiltrations to China, but federal officials have traced similar incidents to the nation.

A 2010 Defense report stated that many computer systems, "including those owned by the U.S. government, continued to be the target of intrusions that appear to have originated within the [People's Republic of China]. These intrusions focused on exfiltrating information, some of which could be of strategic or military utility."

As hackers target U.S. computers with increasing intensity and frequency, the White House on Friday took the unusual step of asking Congress to pass stalled cybersecurity legislation. At first the Obama administration was the slow actor, taking a year to tell Congress which pending measures the president would enact. Now, with pressure to pass other bills, including a Dec. 23 deadline for deficit reduction legislation, the House and Senate are unlikely to agree on comprehensive reforms this year, experts say.
Obama cyber czar Howard Schmidt on Friday tried to light a fire, writing on the White House blog, "Unfortunately, time is not on our side. Since the White House delivered the administration's proposal to Congress, a number of new security breaches have been reported. We need congressional leaders to move forward with a cross-committee and bipartisan approach."

Sticking points remain over the degree of power the Homeland Security Department ought to have to regulate protections for critical infrastructure companies, including energy and financial services firms. House Republicans are pushing for voluntary incentives, such as tax credits -- not regulations -- to encourage compliance with recommended safeguards.

Schmidt also shared some new information, saying that a couple of weeks ago, administration officials "had a very encouraging meeting with a bipartisan group of Senators that ended with agreement to work together to enact cybersecurity legislation as soon as possible. The time is ripe to make proposal into law, and give the government and private sector the extra tools needed to fight those who would harm us." The White House post made no mention of talks with House members.
Incident Response Process: The 6 Steps and How to Test They Work

What Is Incident Response?

Incident response is a process that enables organizations to respond to security breaches in a standardized and timely manner. The incident response process helps an organization identify attacks, limit the scope of damage, and eradicate the root cause. It serves as the first line of defense against security incidents and helps establish best practices to prevent long-term breaches.

A security incident may indicate various violations of policies and laws or unauthorized behavior related to the organization's data and IT assets. Security tools flag many events as possibly violating these policies. These events are investigated and triaged into security incidents that require a rapid or immediate response. Incidents that are not effectively controlled can escalate into a data breach.

An effective incident response process ensures that organizations are prepared to respond to various incidents. The goal is to reduce losses, restore business processes and services, and quickly mitigate exploited vulnerabilities.

Incident Response Process Approaches

A key difference between various incident response processes is the type of trigger that initiates a response. Some triggers involve the tangible effects of an attack, while other triggers provide an early warning based on indirect indications of compromise. The first kind usually responds to zero-day threats, with users already noticing performance issues. The second kind usually preempts issues before they impact end users; however, these triggers are more prone to false positives. These triggering approaches determine the main types of incident response processes:

- Front-loaded prevention—collect threat intelligence data and indicators of compromise to prevent attacks early. This approach helps address threats before they cause damage, but its higher rate of false positives can increase costs. However, the high number of false-positive responses is often an acceptable price to protect critical assets.
- Back-loaded recovery—collect data on visible threats and incidents, which may already be successful. This approach minimizes false positives but cannot stop attacks early on. It is unsuitable for critical infrastructure or high-stakes applications.
- Hybrid incident response—combine front-loaded and back-loaded data processing to enable early detection and urgent response. This approach offers a more comprehensive response strategy, but it usually demands a larger investment in time and resources. A hybrid approach should emphasize prevention, with the back-loaded response reserved for non-critical components.

The Incident Response Process

Incident response plans detail how organizations should respond to cyberattacks. There are six steps to address when developing an incident response plan. This process is inspired by the popular incident response framework developed by the SANS Institute.

1. Preparation

In the case of a cyber attack, the incident response team needs to be fully prepared. Organizations need step-by-step guidance to define how incident response teams will handle incidents, including internal and external communications and incident documentation.

2. Identification

Identification is the detection of malicious activity. This can be based on security and monitoring tools, publicly available threat information, or insider information.
An important part of identification is to collect and analyze as much data as possible about the malicious activity. Incident response teams must distinguish between benign activity and true malicious behavior. This requires a major effort in reviewing security alerts and determining whether alerts are "false positives" — not real security incidents — or "true positives," which indicate malicious activity.

3. Containment

Containment is an attempt to stop the threat from spreading in the environment and doing more damage. There are two types of containment:

- Short-term containment—immediate action to prevent the threat from spreading. For example, quarantining an application or isolating a system from the network.
- Long-term containment—restores systems to production in a clean state before the threat was introduced. This process includes identifying the point of intrusion, assessing the attack surface, and removing any remaining backdoor access.

4. Eradication

At this stage, the incident response team neutralizes any remaining attacks. As part of this step, the team determines the root cause of the incident to understand how to prevent similar attacks.

5. Recovery

At this stage, the incident response team returns systems to normal operation. Compromised accounts are given new, more secure passwords or replaced with a more secure access method. Vulnerabilities are remediated, functionality is tested, and normal operations resume.

6. Lessons Learned

There are lessons to learn from any cybersecurity incident, both at the process level and because threats are constantly changing and evolving. Learning from experience and pinpointing what went wrong is an important step in improving your ongoing incident response plan. It is a good practice to hold a post-mortem meeting with the entire team to gather feedback on what worked and what didn't, and to raise suggestions for process improvement.

Putting Your Incident Response Processes to the Test

Incident response testing determines whether an in-house or outsourced incident response process is effective and identifies critical gaps. These gaps could be in the form of ineffective integrations between technology tools, misconfigured security controls, process issues, or miscommunication between team members. Any of these issues could be disastrous if a real attack occurred, so it is critical to discover them early. There are three common ways to test an incident response system:

- Paper-based testing—a theoretical exercise for testing "what if" scenarios. Paper-based testing has limited effectiveness, but it can still reveal obvious gaps and missing processes.
- Tabletop exercises—a scheduled activity in which all key stakeholders of the company and the incident response provider are seated around a table to respond to a hypothetical security incident. The activity should be planned in advance, including moves by the external attackers. If you have a large team, you can divide it into a "blue team" that represents incident responders and a "red team" that represents the attackers.
- Simulated attacks—you can perform a realistic, fake attack on your network by contracting with external penetration testers or by using security testers from your own security team. Attacks might be either pre-coordinated with your internal team and incident response provider, or blind attacks without prior notice. A simulated attack is very useful in showing what went wrong and identifying gaps in your internal security systems, processes, or integration with an external incident response provider.
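Whichever test format you choose, it helps to be concrete about what triaging alerts into incidents means. Below is a minimal, illustrative Python sketch of the identification step: alerts are scored against a couple of invented indicators, and only those above a threshold are escalated as incidents. The indicators, scores, and threshold are assumptions for the example, not a recommended detection ruleset.

```python
# Illustrative triage of security alerts into incidents; all rules are invented.
SUSPICIOUS_PROCESSES = {"mimikatz.exe", "psexec.exe"}
ESCALATION_THRESHOLD = 5

def score(alert):
    """Crude scoring: real triage would use threat intel, asset value, and context."""
    s = 0
    if alert.get("process") in SUSPICIOUS_PROCESSES:
        s += 4
    if alert.get("failed_logins", 0) > 10:
        s += 3
    if alert.get("off_hours"):
        s += 2
    return s

alerts = [
    {"host": "hr-laptop-07", "process": "mimikatz.exe", "off_hours": True},
    {"host": "web-01", "failed_logins": 3},
]

for a in alerts:
    verdict = "INCIDENT - escalate" if score(a) >= ESCALATION_THRESHOLD else "log as benign/false positive"
    print(a["host"], "->", verdict)
```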
BlueVoyant contains, remediates, investigates, and provides litigation support for your cyber crisis. We identify the breach's root cause while simultaneously eliminating unauthorized access and minimizing business interruption.
The global biofuel market is projected to reach USD 225.9 billion by 2028 from an estimated USD 167.4 billion in 2023, at a CAGR of 6.2% during the forecast period. The biofuel market refers to the worldwide commerce and use of fuels made from organic materials, such as plants, algae, or animal waste. Because they can be replenished far more quickly than fossil fuels like coal, oil, and natural gas, these fuels are regarded as renewable.

The biofuel market is poised for substantial growth in the coming years, driven by several key factors, which are outlined below. Collectively, these factors indicate a promising outlook for the market as the world transitions towards a more sustainable and low-carbon energy future.

Increasing Renewable Energy Targets

In an effort to lower greenhouse gas emissions and advance energy security, numerous nations are putting laws into place and establishing objectives for renewable energy. Biofuels, as a sustainable energy source, are anticipated to play an essential role in fulfilling these goals, which will lead to higher demand for both biofuel production and consumption.

Rising Environmental Awareness

The shift to cleaner and more sustainable energy sources is being fueled by growing worries about air pollution and climate change. Because they emit fewer greenhouse gases than fossil fuels, biofuels are increasingly considered a practical substitute for heating and transportation fuels.

Technology and Feedstock Developments

The efficiency and economics of biofuel production are being enhanced by continuous developments in production technologies, such as enzymatic hydrolysis, fermentation, and thermochemical conversion. These technological advances are anticipated to expand the biofuel industry's prospects and propel market growth. The sector is also expanding the range of feedstocks it uses, moving beyond conventional food crops to non-food feedstocks such as waste materials, algae, and agricultural residues. This diversification improves the robustness and sustainability of biofuel supply chains while lessening competition with food production.

Government Support and Incentives

Through programs like renewable fuel requirements, tax credits, and research grants, governments all over the world are supporting the development and consumption of biofuels. These encouraging policies are anticipated to boost funding for the biofuel industry and quicken market expansion.

Investment in Biofuel Infrastructure

To accommodate the increasing demand, more money is being spent on refueling stations, production facilities, and distribution networks. For the market to grow and be adopted, infrastructure for blending and distributing biofuels must be developed.

International Trade Opportunities

As nations look to diversify their energy sources and lessen their reliance on imported fossil fuels, the global biofuel industry presents trade and investment opportunities. Biofuel producers in regions rich in biomass resources are well positioned to take advantage of global market prospects.

Beyond these drivers, the use of biofuels for purposes other than transportation, such as heating, power generation, and aviation, is being investigated more and more. The range of biofuel applications is growing, and new markets are being opened by emerging technologies like electrofuels and synthetic biology.
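The headline figures are internally consistent: compounding the 2023 estimate at the stated growth rate over the five-year forecast window reproduces the 2028 projection to within rounding, as the quick check below shows (the arithmetic is ours, not part of the report).

```python
# Quick sanity check of the forecast: USD 167.4 bn growing at ~6.2% CAGR for 5 years.
base_2023 = 167.4          # USD billion
cagr = 0.062
years = 2028 - 2023

projection_2028 = base_2023 * (1 + cagr) ** years
print(round(projection_2028, 1))   # ~226.1, close to the reported 225.9
```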
Biofuel Market by Fuel Type (Ethanol, Biodiesel, Renewable Diesel, and Biojets), Generation (First Generation, Second Generation, Third Generation), End-use, Application (Transportation, Aviation) and Region - Global Forecast to 2028
Data is the oil of our time—the new electricity. It gets collected, moved, refined. The data pipeline encompasses how data travels from point A to point B; from collection to refining; from storage to analysis. It covers the entire data-moving process: where the data is collected, such as on an edge device; where and how it is moved, such as through data streams or batch processing; and where the data is moved to, such as a data lake or an application.

The data pipeline should seamlessly get data to where it is going and allow the flow of business to run smoothly. If the pipeline gets held up, quarterly reports can be missed, KPIs left uninformed, user behaviour not processed, ad revenue lost, and so on. Good pipelines can be the lifeblood of an organization. It used to be that trustworthy team members were the endpoints that carried this information from one point to another. In today's world, there are reliable software systems that move the data around. A good pipeline will get your data from its source to its destination timely and securely.

Data processing: streamed or batched?

Data can be processed in a few ways, but streaming and batching are the most common. Streamed data gets moved from A to B in near real time. It is a form of reactive programming, and the data stream gets triggered upon a specific user event. When a user posts on Twitter, the tweet is part of a data stream that gets submitted to the user profile and moved immediately to a sort of "global access" viewing area so all users can see the tweet. When Twitter runs a fact-check on President Trump, it is processing the tweet as a piece of data through a stream, combined with a microservice, to offer the analysis.

Batch processing is good for processing high volumes of data. Its endpoints can wait a day, a week, a month for the information, so the data can get moved at scheduled times. Examples of data that gets batched might be end-of-quarter reports or marketing data—data that isn't needed immediately.

Data in the pipeline doesn't have to be transformed. But if transformations do occur, they'll be part of the data pipeline. Data transformations can be of many, many kinds. A transformation might convert Word documents and PDF files submitted by a user into raw text documents for uniform storage in a data lake. Transformed data could be something as simple as changing the data type from an integer value to a string value. It could be something more complex, where picture data gets classified as Emotional Indicator for marketing, Striking Visuals for the video content team, and Contains a Plant for the image classification team. The picture can get mixed and mangled, chopped and distorted, partitioned so it arrives at each party the way they wish to receive it.

Whether the data is streamed or batched, and how it is transformed, depends on where that data is heading. The medium is the message, and the destination is the medium in which that data is presented. Whether the data is used on a personal device to view the stats of a ball game, processed for facial recognition, compiled for a quarterly report, or prepared to train a machine learning model, data is passed around and arrives at a destination, presented in a format appealing to a reader. And that appealing format is often helped by data visualization techniques.
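To make the transformation idea concrete, here is a small, hypothetical Python sketch of one pipeline stage: it normalizes incoming records (casting an integer field to a string, standardizing the text field) and tags each record for the team that will consume it. The field names, tags, and routing are invented for illustration.

```python
# Hypothetical transformation stage in a pipeline; fields and tags are illustrative.
def transform(record):
    """Normalize one raw record and tag it for downstream consumers."""
    return {
        "user_id": str(record["user_id"]),            # int -> string for the warehouse
        "text": record["body"].strip().lower(),       # uniform raw-text storage
        "tags": ["marketing" if record.get("has_image") else "analytics"],
    }

raw_events = [
    {"user_id": 42, "body": "  Great game tonight! ", "has_image": True},
    {"user_id": 7,  "body": "Quarterly numbers posted"},
]

# Batch style: process everything at once. A streaming pipeline would instead
# call transform() per event as it arrives.
processed = [transform(e) for e in raw_events]
for row in processed:
    print(row)
```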
Data pipeline use cases

Not every business needs one, and not every application requires a pipeline. Data pipelines are feature-specific. The kinds of features that may require a data pipeline are ones that:

- Store large amounts of data.
- Acquire data from multiple sources.
- Store data in the cloud.
- Require quick access for analysis or reporting.

Each major cloud provider has its own tools to help build a data pipeline.

Data versioning is an important part of the data pipeline. A CI/CD DevOps workflow needs to roll back its versions occasionally, such as when a new one fails and the old one proves to work. The same concept applies to an organization's data. Other KPIs of the data pipeline are:

- Versioning. Keep a version history of the data.
- Latency. The length of time it takes to pass data from point A to point B.
- Scalability. The pipeline's ability to handle small or large amounts of data flow.
- Querying. The ability to query the data from sources for analysis.
- Monitoring. Check details throughout the data pipeline, from event trigger to transformation to the final output.
- Testing. Test that the pipeline works.
Such tasks as calling a taxi or making an appointment with a doctor are just routine for the majority of us. But for people with disabilities, they can become a real challenge. And this is where a technology like Artificial Intelligence can (and should) enter the game.

SwissCognitive Guest Blogger: Artem Pochechuev, Head of Data Science at Sigli – "How AI-Powered Solutions Can Change the Lives of People with Disabilities"

We'd like to demonstrate that AI can be much more than just another tool that enhances and streamlines business processes or helps companies reach a new level of productivity. It can also greatly change the lives of many people and make everyday tasks as simple as possible. And from many perspectives, the latter can be viewed as even more revolutionary than the former.

Why does it matter?

You may ask why we believe this topic deserves so much attention, and whether our view rests only on emotion. The answer is grounded in recent statistical data. The world desperately needs reliable solutions that will help a large part of the population socialize and live independently (or at least minimize their dependence on assistance from other people). To fully appreciate this, it is worth looking at the following figures.

- Around 15% of the population, or an estimated 1 billion people, live with disabilities.
- According to data published by the World Health Organization, as of 2022, at least 2.2 billion people have vision impairment of varying severity.
- As for hearing impairment, at the current moment over 1.5 billion people all over the world are affected by hearing loss in at least one ear. Nearly 13% of adults have hearing difficulty even when using a hearing aid.
- Nearly 20 percent of the world's population has dyslexia, which is the most common of all neuro-cognitive disorders.
- Nearly 1 billion people have a mental disorder.

These figures are striking. Very often we can't even imagine how many people live with diseases and impairments that impose restrictions and limitations even on the simplest everyday activities. With the development and adoption of AI-powered solutions, every person will have the possibility to live in a world where his or her needs are well understood and taken into account. We can't miss this chance and simply ignore the possibilities that artificial intelligence provides.

Solutions that can fully change the game

To better understand the importance of such solutions, you need to think about the inconveniences that people with impairments face every day. Let's start with the solutions that are already available to a wide audience, for example, virtual assistants like Siri and Alexa. Without any doubt, people who do not have any health problems can also benefit from using them. But in their case, it is a question of comfort. For people with disabilities, it can become a must.

Many of us ask Siri to Google something for us just because we are busy (or lazy). People with visual impairments can do it because they do not want to ask their relatives to find and read something to them. The same principle works even with standard phone functions like calls and messages. Speech recognition, text-to-speech, and speech-to-text features are real game changers.
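As a small illustration of how accessible such building blocks have become, the sketch below uses pyttsx3, one of several offline text-to-speech libraries for Python, to read a short message aloud. The library choice and the message are assumptions for the example; a production assistive app would add speech recognition, error handling, and language settings.

```python
# Minimal text-to-speech sketch using pyttsx3 (pip install pyttsx3).
# Library choice is an assumption for illustration; any TTS engine would do.
import pyttsx3

engine = pyttsx3.init()            # picks the platform's default speech driver
engine.setProperty("rate", 150)    # slightly slower speech for clarity

# A message an assistive app might voice for a user with low vision.
engine.say("The milk is on the second shelf, on the left.")
engine.runAndWait()
```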
Thanks to virtual assistants, people with partial or complete vision loss can use smartphones (not only button cell phones), because they can now ask AI-powered tools to dial a number or send a message. AI-powered interaction with smartphones is only one example of how artificial intelligence changes the way people manage and operate devices. The integration of AI into smart home systems demonstrates the power of this technology just as clearly. Smart speakers can become the key elements of such systems. They can carry out many tasks based on voice commands (switching on an oven or turning down music), provide recommendations (for example, pointing out that there is enough natural light but all the lamps in the room are still on), and create scenarios based on the user's preferences (temperature, lighting, music, etc.). From one perspective, these features may seem redundant. But consider them from the perspective of people with limited mobility. It may be difficult for them to go from room to room to check whether lamps or devices are consuming energy unnecessarily during the day. AI can do it for them.
Moreover, AI-powered solutions can be enriched with image-recognition functionality. When can it be useful at home? When we open a refrigerator, we rarely know exactly where products are placed and have to look for them. For people with low vision, this task can cause real difficulty. With a mobile app that has access to the camera and image-recognition functionality, it becomes incomparably easier: the user opens the app, points the camera at the shelves, and the app announces where different products are placed. Similar solutions can also help to read "best before" dates. Some startups are working on even more advanced devices – AI-powered doorbells with smart cameras. Such devices can "look at" visitors standing at the door, recognize them based on uploaded pictures, and notify users, who can then decide whether to let the visitors in. Such products can greatly enhance the security of people with vision loss, especially those who live alone.
You may now ask whether artificial intelligence can help people only at home. Based on the examples above, the question is a fair one, and the answer is no. AI can also greatly increase people's mobility and support their integration into society. Already today there are navigation apps built for users with vision impairments that can fully or partially replace guide dogs. These applications are powered not only by GPS (like standard navigation systems) but also by AI. Such apps can create routes based on current traffic, weather, and other conditions, and voice detailed instructions for people who cannot read them or use maps on their own. These navigation systems need to be much more precise than traditional apps of this type: they must take into account many external factors in real time, including possible threats and barriers, and give users highly accurate instructions. With apps tailored to the needs of people who have mutism (muteness) or hearing disorders and cannot talk, users can better communicate and share their thoughts with others.
Users can type their ideas or choose one of the ready-made scripts uploaded to the app, and it will transform the text into speech and voice their ideas for them. Moreover, there are medical cases in which hearing aids cannot help. Here, a solution that transforms speech into text becomes a supportive tool for communication. As a result, people, regardless of their disorders, can feel that they are full participants in any discussion or dialogue.
It is important to understand that these examples are only a small part of the possible solutions that can be built for people with disabilities. AI-powered products help them believe in themselves and in their ability to become part of society without restrictions caused by their conditions. Although there is a lot of talk about artificial intelligence today and it may seem that everyone is already aware of what such solutions can do, there are still many gaps, and much work remains to make the potential of AI clear to society. With the introduction of ChatGPT and all the hype around it, many people have come to the mistaken view that such language models are the main use case of AI. But chatbots that can answer your questions, create posts for your Instagram, or compose a plan for an English lesson are only one category of applications that can be powered by artificial intelligence in everyday life. In reality, AI offers far more opportunities. That is exactly what we are going to show in our series of articles devoted to AI-powered solutions for people with disabilities. Stay tuned if you want to know how artificial intelligence can help millions of people cope easily with tasks that currently pose a real challenge for them.
About the Author: In his current position, Artem Pochechuev leads a team of talented engineers and oversees the development and implementation of data-driven solutions for Sigli's customers. He is passionate about using the latest technologies and techniques in data science to deliver innovative solutions that drive business value. Outside of work, Artem enjoys cooking, ice-skating, playing piano, and spending time with his family.
<urn:uuid:a6383635-e860-43c2-8de1-2533023ec69b>
CC-MAIN-2024-38
https://swisscognitive.ch/2023/09/12/how-ai-powered-solutions-can-change-the-lives-of-people-with-disabilities/
2024-09-13T21:15:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00627.warc.gz
en
0.964704
1,901
2.890625
3
What Is Web Application Penetration Testing?
Web application penetration testing is a security testing method for finding vulnerabilities in web applications. This process simulates cyber attacks under controlled conditions to identify security weaknesses. It involves a comprehensive assessment of the front-end and back-end components of an application, including databases, source code, and APIs. Penetration testing is an in-depth, manual effort. It requires specialized knowledge of cybersecurity, web application architecture, and threat modeling. The objective is to identify vulnerabilities and understand their impact and the threat they pose to the application's overall security posture.
The Importance of Web Application Penetration Testing
Web application penetration testing is necessary due to the increasing complexity and prevalence of web applications in business operations. These applications often process sensitive data, making them attractive targets for cybercriminals. Penetration testing helps in uncovering potential security flaws that could lead to data breaches, financial loss, and damage to reputation. Penetration testing provides insights into security weaknesses and offers actionable recommendations for mitigation, thereby strengthening the application's defenses against future attacks. Additionally, many industry regulations and standards, such as PCI DSS, explicitly require penetration testing as part of their compliance criteria.
Tips from the Expert
Dima Potekhin, CTO and Co-Founder of CyCognito, is an expert in mass-scale data analysis and security. He is an autodidact who has been coding since the age of nine and holds four patents that include processes for large content delivery networks (CDNs) and internet-scale infrastructure. In my experience, here are tips that can help you enhance your web application penetration testing practices:
- Incorporate threat intelligence into your testing: Use real-time threat intelligence to simulate the latest attack techniques. This allows you to stay ahead of emerging threats and test your application against the most current vulnerabilities.
- Test for HTTP/2-specific vulnerabilities: As more applications adopt HTTP/2, ensure your penetration tests include scenarios that exploit the unique vulnerabilities of this protocol, such as request smuggling or amplification attacks.
- Automate the initial reconnaissance phase: While manual testing is crucial, automating the initial reconnaissance and information gathering can help in identifying low-hanging fruit quickly, allowing testers to focus on more complex vulnerabilities.
- Test the effectiveness of your WAF and security controls: Actively attempt to bypass your Web Application Firewall (WAF) and other security controls during testing. This helps in understanding the robustness of these defenses against sophisticated attacks.
- Implement continuous penetration testing: Adopt a continuous penetration testing approach, where automated tests run regularly to identify new vulnerabilities as the application evolves. This complements periodic manual testing and ensures ongoing security.
These tips should give you an edge in conducting thorough and effective web application penetration tests, addressing both common and advanced threats.
Web Vulnerability Scans vs. Web Application Penetration Testing
Web vulnerability scans and web application penetration testing serve different purposes in a cybersecurity strategy.
Web vulnerability scanning is an automated process that scans a web application for known vulnerabilities listed in databases like the Common Vulnerabilities and Exposures (CVE). It's quick, cost-effective, and suitable for regular security assessments. Penetration testing is a manual, often time-consuming process conducted by skilled professionals. It goes beyond identifying known vulnerabilities to uncovering complex security issues that automated tools might miss. Penetration testing focuses on the exploitation of vulnerabilities and the potential impact, providing a more comprehensive understanding of the application's security. What Are the Types of Web Penetration Testing? Penetration tests can be performed externally or internally. External Penetration Testing External penetration testing targets an application's external-facing components, such as websites and web applications accessible from the Internet. It simulates attacks that external adversaries might perform to identify vulnerabilities that could be exploited from outside the organization. The goal is to evaluate the security of the web application's perimeter and prevent breaches originating from external sources. This type of testing often involves techniques like port scanning, brute force attacks, and targeting web application vulnerabilities. Internal Penetration Testing Internal penetration testing focuses on threats originating from within the organization. It assesses the security posture by simulating an attack from an insider or an attacker who has gained access to the internal network. This type of testing is crucial for identifying vulnerabilities that could lead to privilege escalation, lateral movement, or data breaches. By mimicking the actions of a malicious insider or compromised employee account, internal penetration testing provides insights into an application's resilience against internal threats. It also helps in identifying and mitigating risks associated with insider threats and ensuring that internal defenses are effectively configured. Related content: Read our guide to web application security. 7 Steps of a Successful Web Application Penetration Test Here are some of the processes involved in pen testing web applications. 1. Planning and Reconnaissance Planning defines the scope and objectives of the test, including identifying the target application's critical components and determining the rules of engagement. Reconnaissance, or information gathering, involves collecting as much data as possible about the target application. This can include identifying technologies used, mapping the application, and gathering public information that could aid in the test. This step is crucial for understanding the target application's environment and preparing for the subsequent phases of the penetration test. Effective planning and thorough reconnaissance lay the groundwork for a successful penetration test by identifying potential attack vectors and areas of focus. 2. Scanning and Enumeration Scanning and enumeration involve actively interacting with the target application to discover open ports, services, and vulnerabilities. Tools such as port scanners, vulnerability scanners, and web application scanners are typically used in this phase to automate some of the process. Enumeration takes the process further by extracting more detailed information like service versions and configurations. This step is critical for identifying the attack surface of the web application. 
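To make this phase more concrete, below is a minimal sketch of the kind of TCP connect probe a tester might script before turning to full-featured scanners such as Nmap. The target hostname, port list, and timeout are illustrative assumptions only, and a probe like this should be run exclusively against systems that are explicitly in scope and authorized for testing.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortScanSketch {
    public static void main(String[] args) {
        String target = "staging.example.internal";           // hypothetical in-scope host
        int[] ports = {21, 22, 80, 443, 3306, 8080, 8443};    // common service ports to probe

        for (int port : ports) {
            try (Socket socket = new Socket()) {
                // Attempt a TCP connection with a short timeout; success suggests the port is open.
                socket.connect(new InetSocketAddress(target, port), 500);
                System.out.println("Port " + port + " appears open");
            } catch (IOException e) {
                // Connection refused, filtered, or timed out - treat as closed or unreachable.
            }
        }
    }
}
```

In a real engagement this would be superseded by dedicated scanning tools, but even a simple probe like this shows how the application's attack surface begins to take shape during enumeration.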
The information obtained during scanning and enumeration assists in prioritizing potential vulnerabilities and planning the exploitation phase.
3. Analysis of Security Weaknesses
Vulnerability analysis entails reviewing the findings from the scanning and enumeration phase to identify exploitable weaknesses and vulnerabilities. This involves analyzing scan results, verifying weaknesses, and assessing their severity based on potential impact and exploitability. False positives—a frequent occurrence in automated scans—are identified and discarded. The focus here is on understanding the vulnerabilities in the context of the target application and its environment. This phase determines which weaknesses pose a real threat to the application and warrant further examination in the exploitation phase.
4. Exploitation
This phase is where identified vulnerabilities are actively exploited to assess the impact of potential attacks. Exploitation verifies if identified vulnerabilities can be leveraged to gain unauthorized access, escalate privileges, or retrieve sensitive information. Techniques might include SQL injection, cross-site scripting, and exploiting configuration errors. This step is typically the most labor intensive and requires the greatest degree of security expertise. It demonstrates the real-world implications of vulnerabilities. Successful exploitation helps to understand the potential damage and informs the development of mitigation strategies and security enhancements.
5. Post-Exploitation
This phase involves activities carried out after gaining access to the system. This can include data exfiltration, persistence establishment, and exploring the network for further vulnerabilities. The objective is to determine the depth of access that can be achieved and identify additional resources or data that could be compromised. The insights gained during this phase help in understanding the severity of a possible breach and in enhancing incident response and mitigation strategies. It also sheds light on how attackers could pivot within the network.
6. Analysis and Reporting
The analysis and reporting phase involves compiling the findings, insights, and recommendations from the penetration test into a comprehensive report. This report details the vulnerabilities discovered, exploitation attempts made, and the potential impact of exploited vulnerabilities. It also provides actionable recommendations for remediation and improving the application's security. A thorough report serves as a roadmap for remediation efforts, helping stakeholders understand the risks and prioritize security improvements. It's also a critical tool for documenting the penetration test findings and guiding future security strategies.
7. Remediation and Re-Testing
Remediation involves addressing the identified vulnerabilities based on their priority. This could involve patching software, changing configurations, or enhancing security protocols. After remediation efforts have been implemented, re-testing is conducted to verify that the vulnerabilities have been effectively resolved and no new issues have been introduced. This final step ensures that remediation measures have been successful and that the application's security posture has been improved. It's critical for validating the effectiveness of security improvements and ensuring ongoing protection against cyber threats.
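As an illustration of how an exploitation finding can be re-checked during re-testing, the sketch below replays a few classic error-based SQL injection payloads against a hypothetical in-scope endpoint and looks for database error signatures in the responses. The URL, parameter, payloads, and error strings are assumptions for illustration; a real re-test would reuse the exact payloads from the original finding and, as always, run only with explicit authorization.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class SqlInjectionRetestSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and parameter taken from the original report.
        String base = "https://staging.example.internal/products?id=";
        List<String> payloads = List.of("1'", "1\"", "1 OR 1=1--");
        // Strings that commonly appear when a database error leaks into the response.
        List<String> errorSignatures = List.of("SQL syntax", "SQLSTATE", "ORA-", "Unclosed quotation mark");

        HttpClient client = HttpClient.newHttpClient();
        for (String payload : payloads) {
            String url = base + URLEncoder.encode(payload, StandardCharsets.UTF_8);
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            boolean suspicious = errorSignatures.stream().anyMatch(response.body()::contains);
            System.out.printf("payload=%-12s status=%d suspicious=%b%n",
                    payload, response.statusCode(), suspicious);
        }
    }
}
```

If the remediation was effective, none of the payloads should surface an error signature or otherwise behave differently from a benign value.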
Web Application Security with CyCognito
CyCognito identifies web application security risks through scalable, continuous, and comprehensive active testing that ensures a fortified security posture for all external assets. The CyCognito platform helps secure web applications by:
- Using payload-based active tests to provide complete visibility into any vulnerability, weakness, or risk in your attack surface.
- Going beyond traditional passive scanning methods and targeting vulnerabilities invisible to traditional port scanners.
- Employing dynamic application security testing (DAST) to effectively identify critical web application issues, including those listed in the OWASP Top 10 and web security testing guides.
- Eliminating gaps in testing coverage, uncovering risks, and reducing complexity and costs.
- Offering comprehensive visibility into any risks present in the attack surface, extending beyond the limitations of software-version based detection tools.
- Continuously testing all exposed assets and ensuring that security vulnerabilities are discovered quickly across the entire attack surface.
- Assessing complex issues like exposed web applications, default logins, vulnerable shared libraries, exposed sensitive data, and misconfigured cloud environments that can't be evaluated by passive scanning.
CyCognito makes managing web application security simple by identifying and testing these assets automatically, continuously, and at scale using CyCognito's enterprise-grade testing infrastructure.
Learn more about CyCognito Active Security Testing
<urn:uuid:8b7aa8bd-e026-4b9a-9df6-29483f6616eb>
CC-MAIN-2024-38
https://www.cycognito.com/learn/application-security/web-application-penetration-testing.php
2024-09-13T21:50:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00627.warc.gz
en
0.915805
2,045
2.828125
3
DIGITAL INTERACTIVE SERVICES
How to elevate the demand planning process for business success?
Demand planning is an essential part of supply chain planning. It is a method of forecasting demand for a service or product so that organisations can anticipate future needs and be ready to fulfil them. Demand planning involves the creation of a demand plan that is driven by elements such as marketing, inventory and distribution points. The demand plan drives the materials requirements planning (MRP) and production planning, which are the next steps in supply chain planning. For complete success, the demand planning process must be continuous and involve everyone associated with the product or service. It can become an organisation's competitive advantage if deployed with the appropriate objectives, processes, metrics, talent and technology.
The various steps that ideally define a demand planning process include:
- Generation of a baseline system forecast using statistical models or machine learning (ML) algorithms. Future demand is predicted by analysing historical data.
- Understanding demand drivers and using micro and macro inputs to gain insights into customer behaviour.
- Gathering inputs from all the stakeholders of the product or service, including customers where applicable, on product quality, prices, business deals and margins.
- Generation of a demand plan by using all the data collected from different sources.
- Obtaining and incorporating inputs from organisational leaders and then publicising the demand plan across the organisation for consensus from concerned departments.
What ails the supply chain landscape?
Perhaps the primary issue plaguing the modern supply chain is the inability of organisations to anticipate demand in real time and then adjust the supply chain accordingly. This can lead, and has led, to delays and out-of-stock scenarios. Unforeseen bottlenecks and delivery backlogs reduce warehouse capacity, strain labour pools and create other logistical issues. The global supply chain has undeniably been somewhat optimised, but there is clearly a lot of room for improvement.
Ways to boost demand planning
Organisations must focus on improving their demand planning in order to boost agility and to be able to respond to market changes with ease.
- Build a process based on open communication: A well-defined process ensures that all decisions are based on real data, not guesswork. It also helps to keep all the stakeholders apprised of the status. In many organisations, demand planning is the responsibility of analysts and planners working in a silo. This is a sure path to failure since the plan is backed by narrow views, negligible feedback and, overall, insufficient data. For success, organisations must ensure that planning teams comprise representatives from all departments and that there is regular communication between all the teams. A culture of openness and transparency encourages team members to share ideas, insights and opinions.
- Analyse historical data: Data from past sales and market demand need to be collected and analysed to gain critical insights into existing and future trends in demand. Customer behaviour can be better understood by using predictive analytics tools on this data. Industry shifts that can affect supply chain operations can be better anticipated too.
- Ensure data quality is high: Accurate demand planning requires analysis of high-quality data, so organisations must invest in tools and technologies that enhance data quality.
They must implement sturdy data management processes to ensure the data is clean and not duplicated and also establish and practise data governance policies at all stages of the process. - Implement advanced analytics and AI-powered tools: These tools can quickly work with large amounts of data and help organisations understand demand patterns so that future forecasts can be more accurate. Team members should be provided with the right training, if needed, to use advanced tools. Data-driven insights can show organisations how to adjust their production and marketing strategies. - Track market changes: Tracking changes in the market is critical for staying ahead. Competitors’ prices and strategies, including any new products or services being introduced to the industry, must be monitored. Customer preferences and any other external factor must be monitored to stay informed and be ready to meet demand. - Add flexibility: Agility and adaptability are key qualities of demand planning. Contingency plans must be developed to be ready for potential disruptions. A responsive supply chain should be able to adjust quickly to changed production and inventory levels. - Test scenarios: Unforeseen problems, and strategies to solve them, can be identified by testing different scenarios. This technique can also help identify potential risks or hurdles associated with certain approaches. All in all, it helps organisations stay prepared. Why is demand planning a necessity? - Higher accuracy: Demand planning enables organisations to work with real, accurate and current data, which in turn helps make more informed decisions about the complete supply chain. - Lower risks: With accurate information about demands and requirements, organisations run a lower risk of both underproduction and overproduction. The natural outcome of this is lower costs and higher customer satisfaction. - Higher efficiency: By leveraging advanced analytic tools, organisations can gain insights quickly and make changes as needed in response to market conditions. - Scalability: The ability to scale up or down because of the use of automated processes creates an agile environment without having to modify staff or equipment. - Higher visibility: Demand planning allows organisations to ensure that the right products are stocked at the right time since there is greater visibility into inventory positioning across all locations and channels involved with the product or service. The benefits of demand planning are many but so far, it has been a rather under-leveraged business process. Doing it right, however, is essential for achieving business goals and maximising shareholder value. Efficient demand planning enables better business outcomes, enhanced working capabilities and efficient resource deployment. * For organizations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. Now more than ever, it is crucial to deliver and exceed on organizational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like a living organism, will be imperative for business excellence going forward. A comprehensive, yet modular suite of services is doing exactly that. 
Equipping organizations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organizations that are innovating collaboratively for the future.
<urn:uuid:e7d0e925-ff9c-49fa-98cc-8860430abcee>
CC-MAIN-2024-38
https://www.infosysbpm.com/blogs/digital-business-services/how-to-elevate-the-demand-planning-process-for-business-success.html
2024-09-13T21:31:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00627.warc.gz
en
0.929089
1,288
2.796875
3
Despite growing geopolitical tensions and technological competition, collaboration between AI researchers in the US and China has grown over the past decade. But data from Stanford University's latest AI Index Report shows that in 2021, the number of AI papers co-authored by researchers in the US and China fell for the first time since at least 2010. This is likely the result of the US' controversial 'China Initiative', designed to protect scientific research from espionage, which was scrapped last month.
US and China AI collaboration dips
As geopolitical tensions have mounted between the US and China in the last decade, technology has become a key focus of competition between the two great powers. But until recently, this had not dissuaded AI researchers from collaborating with their international peers, says Michael Sellitto, deputy director of the Stanford Institute for Human-centered Artificial Intelligence (HAI). "China and the US are two major sources of AI talent and naturally, there will be many Chinese researchers whom US researchers would want to work with, and vice versa as researchers would want to work with the best in the fields," he explains. "Moreover, major US tech companies like Microsoft have research teams in China, and Chinese firms like Baidu have research teams in the US as well."
In 2021, however, the total number of AI research papers co-authored by China and US-based researchers declined by 6% compared to the previous year. Sellitto attributes this to the effects of state-level tensions and the US Department of Justice's controversial China Initiative. Launched in 2018 to crack down on intellectual property theft and espionage, the initiative saw a number of scientists prosecuted for failing to disclose ties to Chinese peers. Last month, the DoJ scrapped the China Initiative, amid complaints that it had "created a climate of fear among Asian Americans". These factors seem to have "cast a pall over collaborations involving Chinese scholars and institutions for some researchers, particularly those of Chinese descent," says Sellitto.
Scientific collaboration can be "an opportunity to increase mutual understanding and decrease tensions," says Sellitto. "For example, the United States and Soviet Union promoted academic and cultural exchanges during the Cold War." Attempts to dissuade such collaboration are unlikely to prevent AI research from crossing between the countries, he adds. "Most academic AI research is intended to be published openly; even those working in the industry demand the right to publish," Sellitto says. "So, in large part, knowledge produced would be made available to Chinese researchers, even if there were no US-China collaborations involved in a particular project."
China's growing AI research output
Collaboration with the US may be in decline, but China's AI researchers have been working hard, the AI Index Report reveals. Between 2019 and 2021, the number of AI-related patent applications in China increased three-fold. In the last 12 years, Chinese academics have filed a total of 87,343 AI patent applications, more than four times the figure in the US. Only a fraction of these applications have been successful: just 2% of AI-related patent applications submitted by China-based researchers have been granted as of 2021, compared to more than 50% of those submitted by US researchers. A similar trend can be seen in AI conference papers.
Up to the end of 2021, almost a third of the world’s AI conference papers had been published in China, compared to the 17% for the US and 19% in Europe. But the ranking of citations is reversed, with US AI conference research papers making up a third of all citations, while Chinese papers make up just 15% of citations. China's prodigious output of conference papers reflects the state's concerted efforts to drive AI research forward, says Daniel Zhang, policy research manager at Stanford Institute for HAI. “For many years, the government incentivised researchers to publish more broadly in a variety of conference venues and journals, so this showed up as a rise in the absolute number of papers being published by Chinese researchers,” he says. There is some indication that the quality of China's research output is improving, he says, with a slight increase in citations in recent years. However, he adds, this may reflect changing citation behaviours by Chinese researchers. “For instance, this could predominantly be Chinese researchers citing other Chinese researchers, or it could be more shared across other countries."
<urn:uuid:1b332f3f-ce80-4350-8711-5292d4056096>
CC-MAIN-2024-38
https://www.techmonitor.ai/policy/geopolitics/us-and-china-tensions-put-ai-collaboration-into-reverse
2024-09-13T20:36:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00627.warc.gz
en
0.965607
902
2.671875
3
There are many iconic elements of the educational environment. Desks, books, whiteboards, and more all have an important role in higher education – but what about printers? Thanks to advancements in education technology, there’s been a major increase in the use of printers in higher education throughout recent years. More than ever, academic institutions are relying on these devices for improved print quality and quantity. Whether it’s creating materials to support coursework and the learning environment, or using prints to promote campus events and raise awareness, there are plenty of great uses for high-quality printers in higher education. HOW PRINTERS IN ACADEMIA SUPPORT CAMPUS LIBRARIES Is there any environment on campus that encapsulates the learning environment more than the library? Those who love books will immediately feel at home, staring at the sprawling shelves of countless great and rare works. Printers in higher education play a crucial role in the efficient management of campus libraries, contributing to a seamless academic environment. These devices serve as essential tools for students, faculty, and staff, facilitating the dissemination of knowledge and supporting various library functions. Furthermore, printers assist in the organization of library resources by: - Generating labels - Creating catalog cards - Printing book lists - Maintaining rental records This contributes to the overall efficiency of library operations, making it easier for librarians to maintain an organized and accessible collection. Printers enable users to reproduce physical copies of academic materials, allowing for easy access to resources such as books, articles, and research papers. This is especially beneficial for students who prefer traditional study methods or need hard copies for reference during classes. HELPING PROFESSORS PRODUCE AND DISTRIBUTE MATERIALS Additionally, printers aid in the creation of instructional materials and handouts, supporting educators in delivering course content. Professors can easily produce class materials, lecture notes, and assignments, streamlining the distribution process and ensuring that students have access to relevant information. Printers in academia not only serve individual academic needs but also play a vital role in the effective management of campus libraries, fostering an environment conducive to learning and research. A CATALYST FOR THE EXPANSION OF AN INSTITUTION’S KNOWLEDGE Printers are pivotal in supporting the expansion of institutional knowledge, particularly in universities where faculty publications are integral to academic growth. Many universities mandate professors to contribute to scholarly publications, and robust print hardware facilitates the effortless creation of these works. Professors can swiftly produce research papers, journals, and educational materials, fostering a culture of knowledge dissemination. The accessibility of printed materials enhances the university library’s repository, ensuring that the latest academic contributions are readily available to students and researchers, thereby amplifying the institution’s intellectual footprint. HOW PRINTERS IN HIGHER EDUCATION SUPPORT CAMPUS ACTIVITIES Printers in higher education are indispensable tools that contribute significantly to the promotion of student activities, awareness, and engagement on campus. These devices play a crucial role in supporting the vibrant student life by enabling the creation and dissemination of promotional materials. 
Student organizations often rely on printers to produce flyers, posters, and brochures that publicize events, club activities, and campus initiatives. This tangible, printed material helps capture attention and spread information effectively throughout the campus community. Whether advertising club meetings, cultural events, or fundraisers, printers empower students to showcase their activities and foster a sense of community involvement. Moreover, printers facilitate the production of newsletters and magazines that highlight student achievements, campus news, and academic insights. These publications not only serve as a platform for students to share their accomplishments but also contribute to building a cohesive campus culture. In addition to promoting extracurricular activities, printers aid in raising awareness about important issues, campaigns, and social initiatives. Student-led awareness campaigns often utilize printed materials to convey messages, share statistics, and encourage participation in various causes. In essence, printers in academia act as catalysts for student engagement by providing a tangible and effective means of communication. They empower students to share their ideas, organize events, and raise awareness, fostering a dynamic and informed campus community. A TECHNOLOGICAL INVESTMENT FOR ENHANCED LEARNING Printers in higher education signify a strategic investment in technology, playing a pivotal role in embracing digital transformation and elevating the overall learning experience. These devices represent a bridge between the digital and physical realms, facilitating the seamless integration of technology into academic workflows. By supporting the creation of hard copies of digital materials, printers acknowledge the diverse learning preferences of students and faculty. Furthermore, printers contribute to the evolution of educational practices by enabling the quick dissemination of updated materials, reducing reliance on traditional, time-consuming distribution methods. As institutions increasingly adopt digital resources, printers serve as an essential element in the transitional phase, ensuring a smooth and inclusive learning environment. WE’RE A PRINT PARTNER FOR HIGHER EDUCATION ENVIRONMENTS In essence, printers in academia embody a commitment to leveraging technology for improved higher education outcomes and demonstrate a dedication to a holistic and adaptable approach to the modern learning landscape. This type of technology is massively impactful in higher education. It can help maintain libraries, support learning experiences, foster the growth of knowledge, and promote campus activities. Ready to invest in print hardware perfect for your place of learning? Here at Doing Better Business, we offer high-quality print hardware optimized for the needs of fast-paced learning environments. Contact us today to discover more about our selection of devices and learn how they could help your learning environment.
<urn:uuid:43661df8-ac20-459f-9f8a-f0d6da4ebf83>
CC-MAIN-2024-38
https://www.doingbetterbusiness.com/the-evolving-role-of-printers-in-higher-education/
2024-09-18T22:21:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00227.warc.gz
en
0.918405
1,160
2.828125
3
In a world that's gone digital, it's no surprise that so have our assets. Protecting digital assets is crucial to maintaining an organization's operations and revenues, whether it's sensitive personally identifiable information (PII), business data, or intellectual property. Online threats rapidly evolve to employ complex counterfeiting and spoofing of digital assets and new means to break down the software gates they hide behind. This scenario calls for effective digital asset protection methods to secure the critical data that keeps businesses running.
It's essential to recognize the value of digital assets; they're not just virtual entities but significant income generators. In 2024, revenue from digital assets is anticipated to surge to $80,080 million USD, with expectations for continued growth. To safeguard these vital assets, organizations must arm themselves with a comprehensive suite of digital asset protection methods capable of combating a broad spectrum of threats. Let's explore the world of digital assets and discover how to better protect every organization's valuable digital treasure chest.
What are digital assets?
Digital assets are any form of electronic data that holds value to an individual or organization. Common types of digital assets include:
- Intellectual property: This category includes patents, trademarks, copyrights, software, music, films, and books. These assets hold significant value due to the exclusive rights granted to their owners.
- Digital properties: These assets are where organizations communicate and transact with their customers. They are closely related to intellectual property but are more defined. They include digital properties such as websites, mobile applications, social media profiles, email lists, and more.
- Personal data: Representing customers' sensitive information and personal data serves as an asset by generating insights and enabling personalized service. It may include health, financial, or account data. Privacy and security measures are necessary for risk mitigation and ensuring regulatory compliance with GDPR, HIPAA, and other rules when dealing with personal data assets.
- Digital media: Digital content such as images, videos, eBooks, and electronically accessible presentations serves as an asset by providing entertainment, education, and information dissemination. Digital media generates revenue through purchasing, renting, licensing, and advertising.
- Business data: Critical documents, client databases, and proprietary information are assets enabling productivity, innovation, and competitiveness. They also contribute to revenue generation and strategic decision-making in business operations.
- Blockchain-based assets: NFTs, cryptocurrencies such as Bitcoin and Ethereum, smart contracts, and DApps are becoming an essential part of culture and business activity today. They're the backbone of a decentralized financial system that's growing alongside the traditional one.
What is digital asset protection, and why is it important?
Digital asset protection encompasses the strategies and tools organizations use to secure their valuable digital holdings from cyber attacks. Securing your digital assets isn't just a wise choice—it's necessary in a threat landscape where it's common for malefactors to target digital assets. Here are a few examples of attack methods:
- Malware is malicious software deployed to disrupt or gain unauthorized access to computer systems. Malware can range from trojans and spyware to viruses and worms, each designed to infiltrate, damage, or take control of a target device.
It spreads through compromised downloads, malicious email attachments, or infected external drives. - Phishing uses deceptive emails or websites to steal sensitive information such as passwords, usernames, and payment details. Phishing attacks often involve tricking the recipient into believing the request comes from a trusted source through a fake website, compelling them to part with confidential information or download malware. - Ransomware is malware that locks and encrypts data and then demands payment for its release. This attack can incapacitate organizations by rendering critical data and systems inaccessible, often forcing victims to pay a hefty ransom to restore access. - DDoS (Distributed Denial of Service) attacks overload systems, servers, or networks with traffic to render them unusable. They typically involve a botnet (a compromised computer network) that floods the target with excessive requests, leading to service interruptions or total shutdowns. - Password Attacks attempt to crack or guess passwords to gain unauthorized system access. These attacks can use techniques like brute force, where attackers try every possible password combination, or credential stuffing, where stolen account credentials are used to gain unauthorized access to other accounts. Memcyco’s Digital Impersonation Protector solution protects against brute force and credential stuffing, regardless of whether this data was harvested through impersonating websites. Employing digital asset protection tools and strategies is crucial to thwarting these threats and mitigating risks by ensuring the security of your entire digital ecosystem. These protective measures help organizations prevent data breaches, ensure their competitive advantage, enhance customer trust, and reduce financial losses. Top 11 Digital Asset Protection Methods When considering how to protect your digital assets in 2024, organizations have many powerful protection methods to choose from: 1. Access Control Access control regulates who or what can view or use specific resources in a digital environment. It is a critical component of information security that helps protect digital assets—such as data, applications, services, and the systems that store or process information—by ensuring that only authorized users, systems, or processes can access or perform actions on them. To protect digital assets effectively, organizations must implement robust access control policies that include strong authentication methods, least privilege access (providing users with the minimum access level needed to perform their duties), and regular user access reviews. Integrating access control with other security measures, such as encryption and intrusion detection systems, creates a more comprehensive defense against cyber threats. 2. Antivirus and Antimalware Software Antivirus and antimalware software are essential defenses against malicious software threats like viruses, worms, Trojans, and ransomware. These security solutions safeguard computers, networks, and devices from compromise, by scanning, detecting, and removing malware infections. Regular updates to virus definitions and system scans are crucial for preventing malware from stealing data, damaging critical assets, or causing system outages. 3. 
Backup Solutions Backup solutions act as a digital insurance policy by creating duplicate copies of sensitive files and data, ensuring the resilience and continuity of an organization’s digital assets during incidents like hardware failures, data corruption, or ransomware attacks. When selecting backup solutions, it’s essential to adhere to the 3-2-1 backup strategy: - Maintain three versions of any crucial document—a primary and two duplicates. - Use two distinct forms of storage media to safeguard against various risks. - Ensure one copy is kept in a separate location away from your home or office. 4. Data Loss Prevention (DLP) Solutions DLP solutions protect sensitive information within an organization’s network and on its endpoints. By monitoring, scanning, and categorizing data—whether in transit or at rest—DLP systems help prevent unauthorized access to confidential data, its sharing, or its leakage. DLP solutions swiftly address data security incidents through content analysis, contextual understanding, and strict policy enforcement. 5. Digital Rights Management (DRM) DRM tools are indispensable for managing access to digital content and safeguarding intellectual property rights. These solutions prevent unauthorized copying, distribution, or piracy of digital content by employing encryption and watermarking. DRM enforces access controls and usage policies to make sure that only authorized users can access or use digital materials, protecting against revenue loss and preserving the value of digital assets. 6. Encryption Software A critical component for securing digital assets, this software encrypts sensitive data using cryptographic algorithms, turning it into an unreadable format to unauthorized users. It ensures the confidentiality of information during network transmission or data storage. Encrypted information remains secure without the proper decryption key, even in the event of data interception. Firewalls control and monitor network traffic according to predefined security rules, effectively acting as a barrier between internal and external networks. These robust barriers scrutinize data packets to permit or deny access based on security criteria, preventing unauthorized access and malicious traffic while allowing legitimate communication. By enforcing security protocols at the network’s edge, firewalls are fundamental in thwarting unauthorized entries, data breaches, and cyber attacks. 8. Healthy Password Management A recent Google study found that 54.8% of cloud compromise factors were attributed to either having a weak password or none at all. Promoting strong password practices is crucial for securing digital assets and user accounts. Encouraging complex passwords, avoiding reuse across multiple platforms, and regularly updating passwords can significantly reduce the risk of breaches. 9. Multi-factor Authentication (MFA) MFA significantly reduces the risk associated with stolen or compromised credentials by requiring additional forms of verification, making unauthorized account access much more challenging. Unfortunately, it’s not foolproof, as some advanced phishing attacks can bypass MFA. To bolster MFA, diversify authentication factors by incorporating something you know (passwords), something you have (security tokens), and something you are (biometrics). Opt for adaptive MFA to tailor authentication needs based on risk and favor authenticator apps over SMS to avoid interception. 
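To show what the authenticator-app factor mentioned above actually computes, here is a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238 that most authenticator apps implement. The shared secret and parameters are illustrative only; production systems would rely on a vetted MFA library or service rather than hand-rolled code.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.time.Instant;

public class TotpSketch {
    // Computes a 6-digit TOTP code (RFC 6238) for the given shared secret and time step.
    static String totp(byte[] secret, long timeStepSeconds, Instant now) throws Exception {
        long counter = now.getEpochSecond() / timeStepSeconds;
        byte[] message = ByteBuffer.allocate(8).putLong(counter).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(message);

        // Dynamic truncation (RFC 4226): pick 4 bytes at an offset given by the last nibble.
        int offset = hash[hash.length - 1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
                | ((hash[offset + 1] & 0xFF) << 16)
                | ((hash[offset + 2] & 0xFF) << 8)
                | (hash[offset + 3] & 0xFF);
        return String.format("%06d", binary % 1_000_000);
    }

    public static void main(String[] args) throws Exception {
        byte[] sharedSecret = "12345678901234567890".getBytes(); // demo secret only; never hard-code real secrets
        String expected = totp(sharedSecret, 30, Instant.now()); // what the server computes
        String submitted = expected;                             // in reality, typed in from the user's authenticator app
        System.out.println("Code accepted: " + submitted.equals(expected));
    }
}
```

Because both sides derive the code from the same secret and the current 30-second window, an intercepted code becomes useless moments later, which is part of what makes authenticator apps preferable to SMS delivery.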
Educating users on MFA security and phishing prevention, regularly updating MFA systems, and monitoring for suspicious activities are critical. Finally, ensure MFA is a requirement across all access points and periodically audit its effectiveness.
10. Watermarking
Watermarks have long been a method of asserting authenticity. They have evolved from artist signatures on paintings to digital markers that validate the genuineness of images, videos, and websites. In the digital age, watermarking technology ensures that digital assets are recognized as authentic, fostering trust among viewers and customers while combatting digital impersonation and theft. As an example, Memcyco uses a watermark to protect against website impersonation. A customizable, tamper-resistant digital watermark is placed on your website, signaling to employees that this is the real deal, just like physical watermarks used to authenticate banknotes.
11. Website Security
Website security is paramount for protecting digital assets when your business relies on websites to interact with customers, process transactions, or deliver services. Organizations employ security measures to safeguard against these threats, including regular website patches and updates to address known vulnerabilities, implementing secure communication protocols such as HTTPS, and deploying web application firewalls (WAFs) to monitor and filter incoming web traffic. Memcyco offers the only real-time website security solution that protects against malicious spoofed websites that defraud customers. Memcyco embeds an active 'nano defender' tracking sensor in your company's authentic site to provide maximum attack visibility in real-time and full protection for your company and its customers.
Protect Your Digital Assets with Memcyco
Digital assets are valuable targets for cybercriminals. To safeguard their integrity and availability against ever-evolving cyber risks, organizations must utilize robust digital asset protection tools and strategies to protect their business and customers. Memcyco is at the forefront of digital risk protection, leveraging cutting-edge AI technology to protect businesses from digital impersonation fraud. With Memcyco's innovative technology, companies can rest assured that their digital assets are shielded from the devastating consequences of website spoofing attacks, financial losses, data breaches, and ransomware. Try a free demo today to experience Memcyco's digital asset protection capabilities firsthand.
<urn:uuid:5c83b2d6-bafa-4d31-8597-05375c077889>
CC-MAIN-2024-38
https://www.memcyco.com/home/top-digital-asset-protection-methods/
2024-09-20T02:35:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00127.warc.gz
en
0.904291
2,335
2.796875
3
API Discovery is the first step to understanding and ultimately securing your entire API estate.
Java API acts as an intermediary in Java applications, connecting components within the programming stack. It enables developers to utilize pre-built Java components and facilitates integration with external resources, streamlining development. In addition, Java API enables task automation and machine-to-machine interactions to save time and improve efficiency.
Java API is an application programming interface (API) that functions within software built using the Java programming language. It's a deep technology that plays a critical but hard-to-see role in Java applications. This article explains how Java API works, who uses it, and why.
The first step in understanding Java API is to get a handle on the Java language. Developed in the early 1990s by James Gosling at Sun Microsystems (later acquired by Oracle), Java was intended to provide much-needed write-once, run-anywhere (WORA) functionality. In contrast to other programming languages at the time, which were coupled with the operating system (OS), Java enabled developers to write one Java application that could run on almost any hardware or operating system. This capability is possible due to the platform-independent Java Virtual Machine (JVM). When a user installs a JVM on his or her host OS, the JVM adapts to the host and makes it possible to run the Java software. Though some would disagree, Java is considered by many to be the de facto standard. Much is available for free under open-source licenses. However, Java is also the programming language/environment of choice for several industry-leading developer tools. Java has many different versions, spanning consumer PCs to mobile devices and enterprise systems.
Like all APIs, Java API acts as an intermediary between different elements of an information system or systems. However, Java API is different from most APIs in that its functionality is largely internal in nature. It sits in between the JVM and the Java program. In practical terms, the Java API is integrated into the Java Development Kit (JDK). It delivers a connection interface between different Java "classes," as pre-created Java code components are known, as well as between Java user interfaces and the underlying JVM. The Java API documents the function of each class or interface. Using the Java API, a Java developer can utilize the various pre-written Java components.
There are five types of Java APIs. The first four are the internal type described above. The fifth is more like a standard API that connects browser-based applications with external resources. Then, there is the Web Java API, which is accessed using HTTP with the goal of creating a connection between browser-based apps, as well as services such as storage. The Java REST API is a variant of this approach that uses the REST architectural style to create an external interface to a Java application. It establishes a client-server architecture for the Java application and a client that wants to invoke its functionality.
Developers are the main users of Java API, but the API is relevant to anyone who wants to access third-party services in Java. They may be internal developers, partner developers, or open developers who create APIs for open-source projects. Use cases include business-to-business (B2B) apps, business-to-consumer (B2C), app-to-app (A2A), and enterprise applications.
A wide variety of other stakeholders also use Java API, though perhaps without realizing it. For example, an enterprise architect might specify that an application have a certain kind of functionality and connectivity. He or she doesn’t care how developers make that happen, but they may be using Java API in the process. Java developers need Java APIs for a variety of reasons. In some cases, their use is unavoidable. The Java application simply requires Java API as a structural component that’s essential to its functionality. Other times, Java API enables features that realize business goals. For example, Java API enables a developer to provide an application user with multiple options on a single screen. This occurs on Facebook and LinkedIn, for example. Java API makes this possible. More broadly, Java API becomes useful in situations where developers want to reduce the workload of the human developer. Java API facilitates automation and machine-to-machine interactions and other types of integration that occur behind the scenes. These capabilities save developers time and make it possible to generate effective applications more quickly than is possible without Java API. Java API plays a critical role in the realization of Java applications. Often operating out of sight, Java API connects one layer of the Java programming stack to another, making the JVM perform its required functions so the Java program can do what it needs to do. Java developers use Java API across a range of use cases, many of which involve integration with other Java applications or external resources. To find the right Java API for your project, start by defining your project requirements, including functionality and performance criteria. Explore the official Java documentation, which provides comprehensive information about available APIs and their usage. Additionally, consider community recommendations and feedback from developers who have experience with various Java APIs. Assessing factors such as reliability, scalability, and security features is crucial. Once you’ve narrowed down your options, conduct thorough testing to ensure the selected Java API meets your project’s needs and standards. Integrating a Java API into your project involves several steps. First, add the API as a dependency in your project’s build tool, such as Maven or Gradle, by specifying the API’s coordinates in the project configuration file. Once added, the build tool fetches the API and its dependencies from the repository. Next, import the API’s classes and methods into your project’s code and utilize them as needed. It’s essential to ensure compatibility and versioning alignment between the API and your project. Additionally, consider using specialized API security testing tools to mitigate potential vulnerabilities. When utilizing Java APIs, it’s essential to prioritize security considerations. Ensure that you’re using up-to-date and well-maintained APIs to mitigate potential vulnerabilities. Handle sensitive data securely, employing encryption and secure communication protocols. Adhere to best practices for authentication and authorization to prevent unauthorized access to your APIs. NoName Security offers comprehensive API security testing solutions to help identify and address security risks in Java APIs. Request a demo to explore how Noname Security can enhance the security posture of your Java API implementations, safeguarding your applications against potential threats and vulnerabilities. 
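As a minimal sketch of what "importing and utilizing" a Java API looks like in practice, the example below calls a REST endpoint using only classes that ship with the JDK's built-in API (java.net.http), so no Maven or Gradle dependency is required. The endpoint URL is a hypothetical placeholder; a third-party Java API would be used in the same way once its coordinates were added to the build file.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JavaApiClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical REST endpoint exposed by a Java service.
        URI ordersEndpoint = URI.create("https://api.example.com/v1/orders/42");

        // Pre-built classes from the Java API: no HTTP handling needs to be written by hand.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(ordersEndpoint)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Status: " + response.statusCode());
        System.out.println("Body: " + response.body());
    }
}
```

The same pattern applies whether the classes come from the core Java API or from a library pulled in as a dependency: the developer imports the classes, composes them, and lets the pre-written components do the heavy lifting.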
Follow some of these best practices, including API security best practices, to ensure smooth integration and maintain security when using Java APIs in your projects. Begin by thoroughly understanding the API’s documentation and functionality before integration. Ensure compatibility with your project’s existing components by verifying version compatibility. Prioritize comprehensive testing to identify and address any potential issues or bugs. Safeguard your applications against potential vulnerabilities by conducting security testing throughout the software development process, evaluating APIs for misconfiguration, monitoring traffic, and authenticating users. Adhering to these practices promotes efficient development and enhances the overall security posture of your Java API implementations. Experience the speed, scale, and security that only Noname can provide. You’ll never look at APIs the same way again.
As artificial intelligence (AI) continues to advance, the demand for more powerful and flexible computing resources grows. One of the most promising advancements in this space is the implementation of composable GPUs, which allow for the dynamic allocation of GPU resources to best match the requirements of different AI models, ranging from small to massive, and support a variety of tasks, including training and inference workloads. In this write-up, we will explore how composable GPUs cater to the dynamic needs of 8 billion (8B), 70 billion (70B), and 400 billion (400B) parameter models, unlocking new levels of GPU efficiency, scalability, management, and performance optimization. Understanding Composable GPUs Composable GPUs are part of a greater technology called composable infrastructure, where computing, storage, and networking resources are disaggregated and then dynamically allocated based on workload demands. For GPUs, this means that instead of having fixed GPU resources dedicated to specific tasks, they are pooled and assigned to servers as needed. This flexibility is particularly valuable in AI and machine learning, where the computational and memory requirements can vary significantly between models. Matching GPU Resources to Model Size AI models come in various sizes, with the number of parameters being a key indicator of their complexity and memory requirements. Composable GPUs can help optimize the AI infrastructure for different model sizes being deployed: 8B Parameter Models Small models, such as those with 8 billion parameters, are typically used for tasks that require less computational power and memory. These models can run efficiently on a single high-end GPU or a small cluster of GPUs. The key benefit of using composable GPUs for these models is the ability to allocate just enough resources to meet their needs without over-provisioning. For instance, an 8B model might only require 40-80 GB of GPU memory. With composable GPUs, a system can allocate the exact amount of GPU memory needed, optimizing resource utilization and reducing power and costs. 70B Parameter Models Medium-sized models with 70 billion parameters require significantly more memory and computational power. These models might need between 350-700 GB of GPU memory. In a traditional setup, this would mean using multiple GPUs in parallel, each contributing a portion of the required memory. Composable GPUs shine in this scenario by enabling seamless scaling. The system can dynamically pool the memory from multiple GPUs to create a single unified memory space that matches the model's requirements. This ensures that the 70B model has sufficient resources without the complexity of managing multiple discrete low-GPU-density servers. 400B Parameter Models Large-scale models with 400 billion parameters represent the cutting edge of AI research and development. These models require massive amounts of memory, often exceeding 2-4 TB. Such models traditionally run on large GPU clusters, which can be challenging to manage and optimize. With composable GPUs, the system can aggregate the memory and computational power of dozens of GPUs to a single server, creating a massive, unified GPU pool. This approach simplifies the deployment and scaling of these enormous models, reducing CapEx and OpEx and ensuring that they have the necessary memory footprint and computational power for optimal performance. 
Benefits of Composable GPUs - Resource Efficiency: By dynamically allocating GPU resources based on the specific needs of each model, composable GPUs minimize waste and maximize utilization. This leads to lower power consumption, cost savings, and better performance. - Scalability: Composable GPUs make it easy to scale resources up or down, accommodating models of any size. This flexibility is crucial for AI research, where model sizes and requirements can change rapidly. - Simplified Management: Managing a composable GPU infrastructure is more straightforward than dealing with fixed discrete GPU setups. The dynamic allocation of resources reduces the complexity associated with scaling and resource provisioning. - Performance Optimization: With the ability to match resources precisely to the model's needs, composable GPUs ensure optimal performance, reducing the risk of bottlenecks and underutilization. GPU Memory vs. Model Size The memory a model's weights occupy can be estimated as: [number of parameters] x [precision in bits] / 8 (to convert bits to bytes) x 1.25 (to add roughly 25% for overhead). For example, an 8B model at FP16 would take 8B x 16/8 x 1.25 = 20 GB including overhead; a worked version of this calculation appears in the sketch below. (The original post also includes charts of GPU count and GPU cost versus model size.) The rise of composable GPUs represents a significant leap forward in AI computing. By providing the flexibility to allocate GPU resources dynamically, this approach caters to the diverse needs of AI models, from small 8B parameter models to massive 400B parameter models. As AI continues to evolve, the ability to efficiently manage and scale GPU resources will be crucial in unlocking the full potential of AI applications. Composable GPUs are poised to play a key role in this exciting future.
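As a worked version of the memory formula above, here is a small, hedged sketch (author's illustration, not from the original post) that applies the weights-only estimate to the 8B, 70B, and 400B sizes. Real training and inference deployments add optimizer state, activations, and KV caches on top of the weights, which is why the memory ranges quoted earlier are larger than these figures.

```java
public class GpuMemoryEstimate {

    // Weights-only estimate from the article's formula:
    // params * (precision bits / 8) bytes, plus ~25% overhead.
    static double estimateGb(double paramsInBillions, int precisionBits) {
        double bytesPerParam = precisionBits / 8.0;
        return paramsInBillions * bytesPerParam * 1.25; // billions of params * bytes ~ GB
    }

    public static void main(String[] args) {
        double[] modelSizes = {8, 70, 400};
        for (double b : modelSizes) {
            System.out.printf("%.0fB params @ FP16: ~%.0f GB of GPU memory%n",
                    b, estimateGb(b, 16));
        }
        // Prints roughly 20 GB, 175 GB, and 1000 GB respectively.
    }
}
```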
Malware, short for malicious software, is intrusive software designed and created by cybercriminals to steal data and to damage or destroy computers and computer systems. The common terms you see and hear often, such as worms, Trojan viruses, spyware, adware and ransomware, are examples of different types of malware. Over the last year, malware has disrupted many organizations. A malware attack on small and mid-sized businesses is a scary thought. Private data is stolen and locked away with the intent to leak the data in the most damaging way if a ransom settlement cannot be met. With the help of Webroot, we have compiled a list of the worst malware of 2021 that AgileBlue is monitoring for. - Russian-based private Ransomware-as-a-Service (RaaS) operation offered to other cybercriminals - Known for some of the largest supply chain attacks - Longstanding Ransomware-as-a-Service (RaaS) group also known as Ryuk - Known to leak or auction off data if victims don’t pay the ransom - A persistent botnet with cryptomining payload - Known to remove competing malware to ensure they are the only infection - Infects through email, brute force, and exploits - A banking credential theft Trojan virus that has evolved into a modular malware enterprise - Proliferated through spam emails and using fake Adobe Flash Player updates - Specializes in stealing bank credentials via a system that utilizes macros from MS Word - Hides malicious coding within seemingly harmless data - Cobalt Strike - A white-hat penetration testing tool that has been corrupted - Allows an attacker to deploy an agent named ‘Beacon’ on a victim’s machine It’s important to keep your organization safe from all types of malware. The average ransom demand for small to mid-sized businesses is around $50,000. Cybercriminals are only getting smarter and more sophisticated. You may ask yourself, how do these attacks keep happening? Phishing is the number one cause of successful attacks, which highlights the importance of cyber education within any organization. The key to staying safe is layering your approach to cybersecurity while implementing a cyber strategy. Additionally, you should lock down your Remote Desktop Protocol (RDP). If you haven’t done so already, your organization should have a remediation plan in place. Lastly and most importantly, you should partner with a reputable cybersecurity organization. AgileBlue is a 24/7 Security Operations Center as a Service (SOCaaS) platform that is proven to detect and monitor for all malware. Ready to protect your company with AgileBlue? Request a Demo.
Virtual private networks (VPN) are popular solutions for protecting the identity of users and business data online. At the heart of a VPN sits the VPN gateway. In this article we cover what a VPN gateway is, what it does, and what are their benefits over hardware VPN concentrators. Table of contents What is a VPN gateway? A VPN gateway is a network device that creates secure connections between users, online applications, networks, repositories, and other systems. It forms the central node of a virtual private network (VPN) and facilitates secure data transfer over the internet, allowing authorized users to securely communicate with systems without fear of exposing sensitive information. The secure connections that a VPN gateway creates consist of an encrypted tunnel formed between the sender and receiver. This allows them to communicate over public and unsecured networks with a high level of security. A VPN gateway nowadays is a virtual device accessible in the cloud, but legacy VPN gateways would often be hardware (e.g. a router configured to handle the VPN connections). A dedicated device that provides VPN connections is called a VPN concentrator. How does a VPN gateway work? The main task of a VPN gateway is creating secure tunnels between users, networks, or systems over the internet. The way the tunnel is established and secured depends on the selected VPN protocol, such as OpenVPN, IPsec, or IKEv2. The choice of the protocol determines the speed of the connection and encryption strength, so naturally different protocols excel at different tasks. For example, secure access to local systems for remote users would often be encrypted via the IKEv2 protocol, while site-to-site connections connecting two branches would rely on the IPsec protocol. However, modern protocols, like OpenVPN or Wireguard are equally suited for all VPN use cases. VPN providers sometimes use their proprietary VPN protocols, some of which are variations on open-source protocols. VPN gateways do more than establish tunneled connections. Another task of VPN gateways is authenticating users. When a user tries to access the private network, they must authenticate themselves. This authentication can be done simply via a trusted certificate installed on the user’s device, or, in a more sophisticated way, by entering a username and password in the client app, often reinforced with two-factor authentication (2FA) for better security. Another important function of VPN gateways is providing an IP address. Especially a static IP address that permanently identifies the VPN gateway is an important part of company security and remote access, as it is used for IP whitelisting, securing remote access to resources, or publishing online services. Last but not least, VPN gateways can also handle access control, which consists of assigning access rights to users. This can be a powerful security tool of limiting access to applications and thus significantly reducing the risk of cyber threats and their impact. Who is a VPN gateway for? A VPN gateway is the go-to solution for securing remote access among small and medium enterprises (SME). These businesses face the challenges of limited IT resources (e.g. trained networking and security experts) and smaller budgets. These constraints preclude them from deploying and managing complex security solutions. However, a cloud VPN gateway provides a simple, cost-effective, and highly scalable means of securing remote access to local and SaaS resources, making it an excellent fit for SMEs. 
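To illustrate the IP whitelisting use of a gateway's static IP address described above, here is a minimal, hedged sketch (author's illustration, not from the article): a protected service accepts requests only when they arrive from the gateway's known egress address. The addresses are documentation-range placeholders, and in a real deployment this check would typically live in a firewall rule or reverse proxy rather than application code.

```java
import java.util.Set;

public class GatewayIpAllowlist {
    // Hypothetical static egress IP of the company's cloud VPN gateway.
    private static final Set<String> ALLOWED_SOURCE_IPS = Set.of("203.0.113.10");

    static boolean isAllowed(String sourceIp) {
        // Deny by default; only traffic routed through the gateway gets in.
        return ALLOWED_SOURCE_IPS.contains(sourceIp);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("203.0.113.10")); // true  - arrives via the VPN gateway
        System.out.println(isAllowed("198.51.100.7")); // false - direct, unwhitelisted access
    }
}
```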
Benefits of using a cloud VPN gateway Being software-defined, cloud VPN gateways are highly flexible and accessible solutions that provide several benefits for SMEs: Ease of deployment and management Cloud VPN gateways are easy to deploy and manage, even for businesses with limited IT resources. They don’t require any additional hardware, and all their management is done via a web-based user interface. This makes it easy for businesses to quickly set up and configure secure remote access and additional tasks. Cloud VPN gateways are highly scalable; again, thanks to their zero-hardware architecture. Additional capacity is purchased as a service, instead of deploying and managing an additional VPN concentrator, as would be the case in legacy hardware VPNs. This allows SMEs to easily accommodate changes in the number of staff and systems. Similarly to scalability, cloud VPN gateways come at a lower and much more flexible cost than hardware VPN concentrators. They require no upfront cost or maintenance costs as such, just a regular service fee. In addition, cloud VPNs are usually offered as pay-as-you-go services, which makes it very easy for businesses to scale their VPN service up or down depending on their immediate needs. Cloud VPN gateways can be deployed anywhere in the world, providing optimal latency and global reach for remote users. Compared to their hardware counterparts, a cloud VPN gateway provides a superior user experience regardless of the user’s location. How do you deploy a cloud VPN gateway? VPN gateways are deployed as part of cloud services (such as MS Entra ID) or as part of dedicated VPN services, like GoodAccess. Configuring your own VPN gateway is a labor-intensive process that requires knowledge of networking. The upside of that is you get to tweak the gateway precisely to your needs, however, you have to know what you are doing. On the other hand, deploying a GoodAccess VPN gateway takes no effort at all. You simply create an account, enter the name of your team, and pick the gateway nearest to you. The technicalities of configuring have already been taken care of, so you get your VPN gateway as part of a ready-to-go service. You can choose a gateway anywhere in the world, but it’s recommended to choose the one geographically closest for better latency. Wrapping up on VPN gateways VPN gateways provide worldwide secure connectivity and remote access to business systems via encrypted tunnels. Unlike hardware VPN concentrators, VPN gateways offer the benefits of increased scalability and optimized costs thanks to their software-defined architecture. VPN gateways often come as part of VPN-as-a-service solutions, like GoodAccess, where they provide additional functionalities, like 2FA, DNS filtering, or identity-based access controls.
Many organisations can benefit from rolling out internet of things (IoT) applications. Manufacturers managing customer equipment, from fitness-trackers to aeroplanes; transport and utility companies monitoring for performance metrics and faults; healthcare companies checking on patient wellbeing via medical devices. Doing so may seem daunting – the complexity of implementing, connecting and managing both the myriad devices and the various technologies concerned is a lot to deal with. Fortunately, the IoT opportunity does not require getting bogged down with configuring servers and networks and developing applications from scratch. There are a growing number of purpose-designed, cloud-based platform-as-a-service (PaaS) offerings to support IoT-specific deployments, which bring together the technology required. IoT applications need three basic components. First, the things themselves – these may be pre-existing devices newly brought online; specially deployed probes and sensors; or software installed on user devices, especially smartphones. Second are gateways that manage groups of devices, perhaps all the sensors in each geographic area, or managing a single larger object, such as a building or vehicle, with multiple things. Gateways play a role in local network management, security and data filtering. Third is a central IT platform – a shared service or hub that the gateways, or sometimes things themselves, feed all their data back to, providing a central management point. In a 2016 Quocirca research report, entitled The many guises of the IoT, the central IT platform was seen as the most important place to build intelligence into IoT applications. It is the hub that the internet of things platforms provide. The platforms aim to simplify deploying, managing and operating IoT devices and gateways. They simplify the collection and processing of vast streams of data, giving human operators a view of the big picture – the whole network of things relating to a given application. Decisions can be made about tuning applications based on data analysis, sending instructions back to gateways and/or things. As with any shared service, a platform used by multiple organisations for IoT deployments creates an ecosystem with data from perhaps millions of devices that is far larger than that of any one of the user organisations. All can indirectly benefit from being part of the ecosystem, gaining insights derived from each other’s data, and some constituents of the platform can provide IoT-specific machine learning to constantly improve the way the IoT networks under their control are managed. The same terms are heard repeatedly when reviewing the various platform offerings. They all aim to do the same basic thing – simplify the building of intelligent IoT applications, using out-of-the-box capabilities. These may include ensuring deployed devices are compliant with industry requirements, the gathering of audit data, that network connections to devices and data transmission are secure, the provision of dashboards for viewing and managing IoT applications and devices, data processing, storage and analytics to provide timely insight and carry out actions as necessary. The platforms usually include low code development capabilities – click-based tools for configuring devices, gateway and applications with minimal coding (akin to the fourth-generation languages/4GLs of the 1990s). 
Sometimes, the capability is embedded in the platform, or support for third-party tools is provided, including state-of-the-art concepts, such as containerisation and orchestration, the subject of a previous Computer Weekly buyer’s guide by Quocirca: What are containers and microservices? A wealth of platform offerings The fast growth in demand for IoT deployments has led to a range of suppliers from different backgrounds putting together IoT platform offerings. Some have chosen to specialise in a specific area of IoT, such as smart cities or homes. Included within this mix of technology providers are industrial conglomerates, business application providers, security suppliers, mobile service providers, network suppliers and the major IT platform service providers. The IoT platform market is already maturing, with startups being acquired by the industry giants. The US industrial giant General Electric (GE) set up a computing division, GE Digital, in 2011 to provide IT services across the range of sectors it serves, including automotive, manufacturing, healthcare, transport and utilities – all high on the list of those that might expect to benefit from the IoT. The highest profile offering from GE Digital is its IoT platform Predix. Based on Cloud Foundry, Predix is designed to handle industrial data, supporting heterogeneous data acquisition and analytics. Hitachi launched its Lumada IoT platform in 2016, which has now been spun out as a separate entity, Hitachi Vantara. The platform is used by Great Western Railway in the UK, and Hitachi has designs on expansion in various markets from the management of domestic appliances through to power stations. Business application providers are extending their reach to the IoT as a way of bringing more of their customers’ data under their management with the aim of creating more value from it. Two that now have specific IoT platforms are SAP and Salesforce. The SAP Cloud Platform Internet of Things provides the ability to onboard, configure and manage many kinds of remote devices, providing decision-making capabilities and tools to optimise processes. It is based on the SAP Hana database. Pre-configured regulatory, environmental and safety compliance is provided for common IoT devices and processes, along with guidelines for energy efficiency and data security. Salesforce IoT Explorer Edition was announced in October, extending its existing IoT capability. It includes low-code tools to quickly and easily launch IoT applications. Salesforce’s proposition is to use its CRM expertise to proactively apply customer context to IoT applications, such as by triggering actions with real-time rules to enable proactive sales, servicing or marketing processes. For example, a car dealership can proactively trigger actions when connected cars reach a certain mileage threshold or when vehicle diagnostics report failures. Gemalto is a security specialist with established expertise in securely connecting with devices. In 2011, it acquired SensorLogic, a cloud-based machine-to-machine (M2M) PaaS offering used for asset tracking, telematics and equipment monitoring. 
The platform is being adapted from its M2M origins, where single objects connected with servers, to supporting the high volumes of devices that come with IoT deployments. InterDigital’s Chordant subsidiary has an IoT platform focused on smart cities, with standards and application programming interfaces (APIs) that enable the consolidation of data from the diverse sources found across a city, which can then be exposed to city authorities, other businesses and consumers. Mobile service providers see an opportunity in IoT, to encourage use of their networks to link devices with gateways and hubs. For example, Orange recently announced its IoT Live Booster Programme, a joint initiative with Deutsche Telekom. It is a mix of open source and commercial software to provide an ecosystem for IoT developers. It is aimed at the smart home market based on the oneM2M standard. In 2016, Cisco acquired a company called Jasper, which provided an IoT platform for launching, managing and monetising IoT applications. Jasper partners with mobile operators, providing IoT implementations for a range of industries. Stream Technologies’ IoT-X is a connectivity management platform for IoT deployments, providing high-quality wireless connectivity across cellular, satellite and low-power wide area networks (WANs) to give customers resilient connectivity. The big cloud platform providers all see the IoT as an opportunity. Amazon Web Services (AWS) IoT includes a device registry, device gateway and support for the MQTT protocol (a low-power alternative to the HTTP web protocol that extends the life of device batteries); a minimal publish example follows below. Another feature is Device Shadows, a virtual “shadow” of a device that can be synchronised intermittently with the physical device. This further reduces network traffic and allows devices to be updated virtually, even when offline. Microsoft’s Azure IoT Suite collects and analyses streaming IoT device data in real time and provides integration with existing customer infrastructure. Automatic alerts trigger actions, for example predictive maintenance. It includes a pre-configured connected factory offering for monitoring industrial equipment. Google’s Android Things, formerly known as Brillo, is an IoT platform with device developer kits aimed at prototyping, building and supporting devices based on the Android operating system. Support for Android APIs, Android developer tools and other Google services is provided along with the Android Things Console to access, manage and configure IoT devices. IBM takes an artificial intelligence angle with Watson Internet of Things, described as the hub of all things IBM IoT – anything IBM has a Watson angle these days. Delivered off IBM’s Bluemix cloud platform, Watson IoT manages and controls connected devices, providing access to live and historical data. A context mapping service enables analysis of moving object trajectories, for example to manage transport networks. Hewlett Packard Enterprise’s (HPE) Universal IoT Platform supports industrial IoT deployments across a range of sectors, including federated device management, data acquisition and data exposure. Simultaneous management of heterogeneous IoT gateways and devices is supported. The platform can be deployed on-premise as well as in a range of cloud environments. HPE cites use cases including connected cars, smart cities and smart metering. 
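To give a feel for the MQTT protocol mentioned above, here is a minimal, hedged sketch of a device publishing one telemetry reading, assuming the open-source Eclipse Paho Java client is on the classpath. The broker address, client id and topic are placeholders, and real IoT platforms add authentication and TLS on top of this.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SensorPublisherSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder broker endpoint and client id.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "sensor-42");
        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        client.connect(options);

        MqttMessage reading = new MqttMessage("{\"temperature\": 21.5}".getBytes());
        reading.setQos(1); // at-least-once delivery
        client.publish("building/floor1/temperature", reading);

        client.disconnect();
    }
}
```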
If the investment and message of IT industry leaders and some of the lesser known providers mentioned above is anything to go by, there is plenty going on with the IoT. Cloud IoT platforms make it easy for organisations to dip a toe in the water and see what can be achieved without making major upfront investments. The main problem will be selecting the right platform from the wide choice available. Bob Tarzey is an analyst at Quocirca.
To date, blockchain technology has mostly been associated with financial applications such as Bitcoin and other cryptocurrency trading. Soon, NASA may be using it to curb cyber attacks on airport control stations – provided the research pans out. Cybersecurity Insiders has learned that the US space agency is looking to blockchain technology to stop spoofing and denial-of-service attacks on American air traffic control systems. Ronald Riesman, a senior NASA engineer, said in a press statement that blockchain technology could leverage an open-source enterprise industrial framework to help curb cyber attacks targeting aerospace stations. Research on the issue is already under way, and the Automatic Dependent Surveillance-Broadcast (ADS-B) system is expected to go live by 2020. This surveillance technology allows satellites to monitor the position of an aircraft and help it navigate, while keeping details such as flight plans and positions away from the eyes of cyber crooks. Blockchain technology would be applied here through an open-source framework that helps secure ADS-B against cyber vulnerabilities by allowing only authorized agents into the communication stream, shutting out snooping eyes such as tech-savvy criminals. Riesman recently put a proposal on the issue before NASA leadership that covers operational elements such as a certificate authority, smart contract support, and higher-bandwidth communication channels. The proposal builds on an open-source enterprise blockchain framework called Hyperledger Fabric, which helps solve most of the technical issues that threaten the adoption of ADS-B by military, corporate and other aircraft operators. Although the technology is not yet perfected, it can be considered a viable way to protect the future of the aviation sector. The hope is that such technology helps avoid future flight disasters like Malaysian Airlines Flight MH370, which disappeared on March 8th, 2014 and which some suspect was downed by Russian intelligence via a cyber attack.
Controlled Unclassified Information (CUI) refers to information that requires safeguarding or dissemination controls pursuant to and consistent with applicable laws, regulations, and government-wide policies. It plays a vital role in protecting sensitive data that, while not classified, still demands careful handling to prevent unauthorized access and potential security risks. Understanding and managing CUI is essential for organizations involved in national security and defense, as well as government contractors. The CUI Registry serves as a comprehensive resource, detailing the specific categories of CUI and the applicable safeguarding and dissemination requirements. It provides clear guidance on how to properly manage this information, ensuring compliance with relevant regulations and enhancing overall security protocols. By adhering to the guidelines set forth in the CUI Registry, organizations can effectively mitigate risks, maintain regulatory compliance, and contribute to the broader efforts of national security. What is the CUI Registry? The CUI Registry is an essential tool established by the National Archives and Records Administration (NARA) to standardize the way sensitive but unclassified information is handled across federal agencies and their contractors. The CUI Registry provides a centralized repository of all CUI categories, each accompanied by specific safeguarding and dissemination requirements. This comprehensive list ensures that organizations have clear and consistent guidelines for managing CUI. The primary purpose of the CUI Registry is to enhance national security by providing clear instructions on how to protect sensitive information that, while not classified, still requires strict handling controls. The registry offers detailed guidance on properly managing CUI, helping prevent unauthorized access, misuse, and potential security breaches. Moreover, the CUI Registry supports compliance with various laws, regulations, and government-wide policies. It aids organizations in identifying the appropriate safeguarding measures, banner markings, and dissemination protocols for each category of CUI. By adhering to the standards outlined in the CUI Registry, organizations can ensure that sensitive information is consistently and effectively protected, thereby reducing the risk of data breaches and enhancing overall security posture. Categories of CUI: Basic vs. Specified CUI is categorized into two primary types: Basic and Specified. Understanding the distinction between these categories is essential for ensuring proper handling and compliance. Basic CUI refers to information that requires protection as stipulated by laws, regulations, or government-wide policies but does not have specific handling requirements beyond those standards. For instance, general business information that needs safeguarding under the Freedom of Information Act (FOIA) falls under Basic CUI. Handling Basic CUI involves adhering to standard safeguarding practices without additional requirements. Specified CUI, on the other hand, includes information that demands more stringent protection measures due to specific statutory or regulatory requirements. An example of Specified CUI is information protected under the International Traffic in Arms Regulations (ITAR), which requires compliance with detailed control measures for international dissemination. Specified CUI often includes sensitive data such as export-controlled information, privacy information under HIPAA, or law enforcement sensitive data. 
The distinction between Basic and Specified CUI matters because it dictates the level of protection required. Properly identifying whether CUI is Basic or Specified ensures that organizations apply the correct safeguarding and dissemination controls, thereby maintaining compliance and protecting sensitive information from unauthorized access and potential security threats. Understanding these categories helps organizations navigate the complexities of CUI management effectively. The Role of Safeguarding and Dissemination Authorities Safeguarding and Dissemination Authorities are pivotal components in the management of CUI. These authorities provide the legal and regulatory frameworks that dictate how CUI must be protected and shared, ensuring that sensitive information is handled in accordance with established standards. Overview of Safeguarding and Dissemination Authorities Safeguarding and Dissemination Authorities are specific statutes, regulations, or government-wide policies that outline the requirements for protecting and distributing CUI. Each category of CUI listed in the CUI Registry is linked to these authorities, which detail the necessary controls and procedures to ensure compliance and security. These authorities ensure that organizations understand their obligations when handling CUI, providing a clear reference for the appropriate measures to take. Guidance on Handling CUI These authorities play a crucial role in guiding the handling of CUI by specifying the necessary safeguarding measures and dissemination protocols. For instance, the Safeguarding Authority may outline encryption requirements, physical security measures, or access controls that must be implemented to protect CUI from unauthorized access. Dissemination Authorities, on the other hand, provide guidelines on how CUI can be shared, with whom, and under what conditions. This could include restrictions on international sharing, requirements for secure communication channels, and rules for marking and labeling documents. By linking each category of CUI to its respective Safeguarding and Dissemination Authorities, the CUI Registry ensures that organizations have a clear roadmap for compliance. This linkage helps prevent the mishandling of sensitive information, which could lead to security breaches, legal penalties, and loss of trust. In practice, adhering to these authorities involves rigorous training, robust security policies, and continuous monitoring to ensure compliance. Organizations must stay informed about updates to these regulations and integrate them into their cybersecurity strategies. By doing so, they can effectively protect CUI, maintain regulatory compliance, and support the broader mission of national security. Understanding and following the guidance provided by Safeguarding and Dissemination Authorities is essential for any organization dealing with CUI, ensuring that sensitive information is protected and managed according to the highest standards. Understanding Banner Markings Banner markings are critical elements in the management of CUI. These markings, prominently displayed at the top of documents or data sets, indicate the level of protection required and the specific handling instructions mandated by the CUI Registry. Explanation of Banner Markings Banner markings provide a clear, visual cue about the classification and handling requirements of CUI. They include specific labels such as “CUI,” “CUI//SP-Category” for Specified CUI, and any necessary dissemination controls. 
These markings ensure that anyone handling the information is immediately aware of its sensitivity and the required safeguards. Importance of Banner Markings The importance of banner markings lies in their role in preventing unauthorized access and ensuring compliance with regulations. Proper banner markings facilitate the correct dissemination of CUI by clearly communicating the handling instructions to all personnel. This helps maintain the integrity and security of sensitive information, reduces the risk of data breaches, and ensures that all regulatory requirements are met. In essence, banner markings are a crucial tool in the effective management and protection of CUI, aiding in the prevention of mishandling and misuse. Safeguarding and Dissemination Authority Box The Safeguarding and Dissemination Authority Box is a vital component of the CUI Registry. It provides comprehensive details on the specific laws, regulations, and policies that govern the protection and dissemination of each category of CUI, ensuring that organizations understand their obligations and implement the necessary controls. What Information is Contained in the Authority Box Each Authority Box in the CUI Registry includes the following information: - Safeguarding Authority: This section identifies the statutes, regulations, or government-wide policies that mandate the protection measures for the specific CUI category. It provides links to the authoritative sources, ensuring that organizations can easily access and understand the requirements. - Dissemination Authority: This part outlines the guidelines for sharing the CUI. It specifies who can receive the information, under what conditions, and any restrictions on dissemination. Like the Safeguarding Authority, it includes links to the relevant legal or policy documents. - Sanctions Authority: This section lists the penalties for non-compliance with the safeguarding and dissemination requirements. It highlights the consequences of mishandling CUI, underscoring the importance of adhering to the prescribed measures. Using the Authority Box for Compliance and Operational Security Organizations can leverage the information in the Authority Box to ensure compliance and enhance operational security. By following the safeguarding and dissemination guidelines, organizations can: - Implement Appropriate Controls: Establish the necessary physical, technical, and administrative controls to protect CUI, as dictated by the Safeguarding Authority. - Ensure Proper Dissemination: Share CUI only with authorized parties and under the conditions specified in the Dissemination Authority, preventing unauthorized access. - Avoid Penalties: Understand and adhere to the Sanctions authority to avoid legal penalties and reputational damage from non-compliance. Incorporating the guidelines from the Authority Box into organizational policies and procedures helps maintain the integrity and security of CUI, ensuring that sensitive information is handled following the highest standards of regulatory compliance and operational security. Sanctions for Misuse of CUI The misuse of CUI can lead to significant penalties and sanctions, underscoring the critical need for strict adherence to safeguarding and dissemination guidelines. These sanctions are designed to enforce compliance and protect sensitive information from unauthorized access and breaches. Overview of Penalties and Sanctions Penalties for improper handling of CUI are outlined in the Sanctions Authority section of the CUI Registry. 
These penalties can include administrative actions, civil fines, and criminal charges depending on the severity of the misuse. For example, unauthorized disclosure of CUI can result in disciplinary actions for individuals, including termination of employment, loss of security clearance, and in severe cases, prosecution under federal law. Organizations found non-compliant may face substantial fines, loss of government contracts, and reputational damage. Real-World Implications of Non-Compliance Non-compliance with CUI handling requirements can have far-reaching consequences. For instance, a defense contractor failing to protect CUI might not only face legal and financial repercussions but also compromise national security. In another scenario, a healthcare provider improperly sharing CUI could violate HIPAA regulations, leading to hefty fines and loss of trust among patients. These real-world implications highlight the importance of rigorous CUI management. Organizations must implement robust policies, provide thorough training for employees, and continuously monitor compliance to mitigate the risks associated with CUI misuse. Adhering to the guidelines set forth by the CUI Registry ensures that sensitive information is protected, thereby safeguarding the organization from legal, financial, and reputational harm. Best Practices for Handling CUI Effective management of CUI is imperative for organizations handling sensitive data. Implementing best practices ensures compliance, protects against breaches, and maintains operational security. Practical Tips for Managing CUI - Regular Training and Awareness: Conduct ongoing training sessions for employees to ensure they understand the importance of CUI and the specific handling requirements. Regularly update training materials to reflect changes in regulations and policies. - Access Controls: Implement strict access controls to limit CUI access to authorized personnel only. Use multi-factor authentication (MFA) and role-based access controls (RBAC) to enhance security. - Data Encryption: Encrypt CUI both at rest and in transit. Utilize strong encryption standards to prevent unauthorized access and ensure data integrity. - Secure Communication Channels: Use secure communication methods, such as encrypted emails and secure file transfer protocols, to share CUI. Avoid using unsecured platforms that could expose sensitive information. - Audit and Monitoring: Regularly audit and monitor systems handling CUI to detect and respond to unauthorized access or potential breaches. Use automated tools to streamline this process and ensure comprehensive coverage. Integrating Best Practices with Existing Security Protocols - Policy Alignment: Align CUI handling practices with existing cybersecurity policies. Update organizational policies to incorporate CUI-specific requirements and ensure all employees are aware of these changes. - Technology Integration: Leverage existing security tools and technologies to manage CUI effectively. Integrate CUI management practices with your Security Information and Event Management (SIEM) systems, data loss prevention (DLP) tools, and other security infrastructure. - Continuous Improvement: Regularly review and update CUI management practices to adapt to evolving threats and regulatory changes. 
Engage in continuous improvement to enhance security posture and ensure ongoing compliance. By adopting these best practices, organizations can effectively manage CUI, reduce the risk of breaches, and maintain compliance with relevant regulations. Integrating these practices with existing security protocols creates a robust defense against unauthorized access and data loss, ensuring the protection of sensitive information. How MAD Security Can Help MAD Security is a trusted leader in managing CUI, offering unparalleled expertise and a comprehensive suite of services to ensure CUI compliance and security. With a deep understanding of the unique challenges faced by defense contractors and government agencies, MAD Security is equipped to provide tailored solutions that safeguard sensitive information. Expertise in Handling CUI MAD Security specializes in implementing best practices for CUI management, drawing on years of experience and a robust knowledge of relevant regulations, including DFARS, CMMC, and NIST standards. Our team of experts ensures that your organization meets all regulatory requirements while maintaining the highest data protection standards. - GRC Gap Assessments: Identify and address gaps in your governance, risk management, and compliance frameworks to align with CUI handling requirements. - Virtual Compliance Management (VCM): Continuous monitoring and management of your compliance status to ensure adherence to CUI regulations. - Policy Development: Provide policy templates and customization to develop tailored policies that ensure effective CUI management and regulatory compliance. - Security Operations Center (SOC) Services: 24/7 monitoring, detection, and response services to safeguard your environment. - Managed Detection and Response (MDR): Proactive threat detection and incident response services to protect CUI from cyber threats. - User Awareness Training: Educate employees on the importance of CUI and best practices for handling it securely, enhancing the company's overall security posture and effectively safeguarding sensitive information. - Remote Incident Response: Immediate support and remediation guidance in case of a security incident involving CUI. MAD Security’s comprehensive approach integrates advanced technology and proven methodologies to provide robust CUI management solutions. Partnering with MAD Security ensures your organization not only complies with regulatory mandates but also achieves a high level of operational security, protecting your valuable information assets from potential threats. Understanding the CUI Registry is crucial for ensuring the proper handling and protection of Controlled Unclassified Information. Adhering to the guidelines and regulations outlined in the registry helps prevent unauthorized access, mitigate risks, and maintain compliance with federal requirements. For expert assistance in managing CUI, turn to MAD Security. Our specialized services and experienced team are dedicated to safeguarding your sensitive information and ensuring your organization meets all regulatory standards. Contact MAD Security today to enhance your CUI management and secure your data against evolving threats. Frequently Asked Questions What is the purpose of the CUI Registry? The CUI Registry, established by the National Archives and Records Administration (NARA), standardizes the handling of Controlled Unclassified Information (CUI) across federal agencies and contractors. 
It provides a centralized repository of CUI categories and their safeguarding and dissemination requirements, ensuring organizations can manage CUI effectively and comply with relevant regulations. What are the differences between Basic and Specified CUI? Basic CUI requires protection under general laws, regulations, or government-wide policies without specific handling requirements. Examples include general business information protected under the Freedom of Information Act (FOIA). Specified CUI, like export-controlled information under ITAR or privacy information under HIPAA, demands more stringent protection measures due to specific statutory or regulatory requirements. How do Safeguarding and Dissemination Authorities guide the handling of CUI? Safeguarding and Dissemination Authorities provide the legal and regulatory frameworks for protecting and sharing CUI. These authorities specify the necessary controls and procedures, such as encryption, physical security measures, and access controls. They also outline dissemination protocols, including who can receive the information and under what conditions, ensuring compliance and security. Why are banner markings important in CUI management? Banner markings are critical for clearly indicating the classification and handling requirements of CUI. Displayed prominently on documents, these markings ensure that personnel are immediately aware of the sensitivity and required safeguards for the information. Proper banner markings help prevent unauthorized access, facilitate correct dissemination, and ensure compliance with regulations. How can MAD Security help organizations manage CUI effectively? MAD Security offers specialized expertise in CUI management, including services such as GRC Gap Assessments, Virtual Compliance Management (VCM), 24/7 SOC services, Managed Detection and Response (MDR), User Awareness Training, and Remote Incident Response. By partnering with MAD Security, organizations can ensure compliance with CUI regulations, protect sensitive information, and enhance their overall cybersecurity posture.
April 15, 2021 Originally published: December 18, 2016 Whether protecting sensitive information, intellectual property, or just safeguarding against attacks, every IT department must devote some resources to data security. Traditionally, this takes the form of a sophisticated firewall which operates as a gatekeeper to the environment. Any data that wants to enter or leave the network must pass through this screening process to receive clearance. If the firewall deems the file to be malicious or untrustworthy, it is stopped. Firewalls are an effective way to regulate traffic coming and going from the network, but they are hardly foolproof. There is a saying in data security: security systems need to be right 100% of the time, but a hacker only needs to be right once. Inevitably, a firewall is going to have holes in it, and inevitably, a hacker will find a way through those holes. What is left to stop them once they are inside? For some environments, the answer is nothing. One strategy for providing security to the interior of the network is microsegmentation, and it can be achieved with software defined networking (SDN) products like VMware NSX. What is Microsegmentation? Segmentation is the practice of dividing the network into different tiers and installing a physical firewall or router designed to allow or forbid access to specific segments. Common segmentation strategies include an application segment, a web segment, and a database segment. Segmentation is a useful strategy and leads to a more robust security system, but there is still room to improve. Microsegmentation gives predictable security across hybrid cloud platforms and data centers alike, by virtue of three key attributes: dynamic adaptation, granular security, and visibility. The Zero Trust Model and Microsegmentation Forrester Research developed a concept known as the “zero trust” model of data security. It states that security policies should not simply be applied to the environment as a whole or to large segment groupings but to everything. Every workload, every application, everything in the network must be protected. Without this strategy, a network is on some level “trusting” its network traffic to be innocent and benign. Microsegmentation is the process by which this “zero trust” model is achieved, and it drastically increases the number of segments in play in the network. Microsegmentation effectively makes each virtual machine (VM) on the hypervisor its own individual segment. Therefore, each and every virtual machine is protected by its own firewall. If a malicious file did manage to find a way through the environment firewall and onto a virtual machine, it can get no further without having to once more pass through a firewall. Trying to create microsegmentation manually by dedicating specific physical firewalls and routers to virtual machines or bare-metal servers would be a time-consuming and expensive process. However, with software defined networking solutions like VMware NSX, the environment is virtualized. This enables a network administrator to establish microsegmentation by creating “security policies” tied to each VM (a conceptual sketch of such per-VM rules appears below). Escalation and Data Security Microsegmentation is a powerful strategy for protecting the network, but it is important to remember why security administrators developed it in the first place. There is an ongoing arms race between data security professionals and hackers, and their back and forth competition has led us here. 
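To make the per-VM "security policy" idea concrete, here is a conceptual, hedged sketch (author's illustration; it does not use the VMware NSX API) of zero-trust style rules in which traffic between virtual machines is denied unless explicitly allowed. The VM names, ports, and rules are invented for the example.

```java
import java.util.List;

public class MicrosegmentationSketch {
    // One allow rule: which source VM may talk to which destination VM, on which port.
    record Rule(String sourceVm, String destVm, int port) {}

    // Default deny: anything not listed here is blocked.
    static final List<Rule> ALLOW_RULES = List.of(
            new Rule("web-vm", "app-vm", 8443),  // web tier may call the app tier
            new Rule("app-vm", "db-vm", 5432)    // app tier may query the database
    );

    static boolean isAllowed(String source, String dest, int port) {
        return ALLOW_RULES.stream().anyMatch(r ->
                r.sourceVm().equals(source) && r.destVm().equals(dest) && r.port() == port);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("web-vm", "app-vm", 8443)); // true
        System.out.println(isAllowed("web-vm", "db-vm", 5432));  // false - blocked by default
    }
}
```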
Microsegmentation is effective now and will one day become as commonplace as the standard firewall but it will never be truly enough. It is only a matter of time before malicious agents find reliable work arounds. For this reason, network administrators must always be fortifying their network security with the latest solutions, and that is not likely to ever change. Microsegmentation Post COVID-19 The sudden deployment of a remote workforce put a never before seen level of stress on IT resources: personnel and data/compute power. In this landscape, it is critical that those in charge of network security remain vigilant as remote-work starts to influence “the new normal”. The capability to quickly and easily segment is a key control as our work environments become more and more agile and dispersed, and workers begin to expect offices to adopt a remote working policy. If your organization does not currently practice this, now is the best time to implement this strategy. Mindsight, a Chicago IT services provider, is an extension of your team. Our culture is built on transparency and trust, and our team is made up of extraordinary people – the kinds of people you would hire. We have one of the largest expert-level engineering teams delivering the full spectrum of IT services and solutions, from cloud to infrastructure, collaboration to contact center. Our highly-certified engineers and process-oriented excellence have certainly been key to our success. But what really sets us apart is our straightforward and honest approach to every conversation, whether it is for an emerging business or global enterprise. Our customers rely on our thought leadership, responsiveness, and dedication to solving their toughest technology challenges.
What is business process management? Business Process Management (BPM) optimizes operations and cultivates a culture of constant improvement. Despite its benefits, some businesses still struggle with BPM, often confusing it with BPMN (Business Process Model and Notation). Understand the differences between BPM and BPMN, and discover how Nintex Process Manager simplifies this critical business practice. Business process management definition BPM is the common acronym for Business Process Management, which describes the method of capturing, understanding, and improving the way your business provides products and services to your customers. A healthy BPM approach engages teams in the pursuit of operational excellence and fosters a culture of continuous improvement. While this is not an uncommon acronym or business practice, many enterprise-sized organizations are still not leveraging the increased visibility and control that BPM practices can deliver to differentiate them from their competitors. The State of Business Process Management recently reported that 65% of organizations agreed that BPM had improved their efficiency, versatility, and customer satisfaction. And Gartner’s Leaders toolkit: Business Process Management states that 80% of business respondents feel process management delivers more value than ERP, CRM or SCM in achieving a competitive advantage. Given this, it’s hard to understand why so many are overlooking something so essential to remaining competitive in these difficult times. BPM vs BPMN BPM is often confused with BPMN (Business Process Model and Notation), which is a graphical method of representing business processes within a business process diagram and is historically known as the ‘de facto’ format for documenting business processes. BPMN was developed as an international ‘language’ for describing business processes; it provides a complex and complete set of symbols and codes to define and describe procedures in a business environment. The resulting flow charts are intended to visually depict the details of business activities and their flow in an organization, making traditional text-based procedures a thing of the past. Unfortunately, although this language is readily available, a significant number of BPMN clients struggle to decipher the process steps, understand the complex language, and embed it. This approach also relies heavily on having enough highly technical, qualified process experts, and many companies that rely on BPMN experience process breakdowns, non-conformance, and troubling degrees of wasted time or resources. Unlike BPMN, best-practice BPM tools like Nintex Process Manager are easy to understand and accessible, encourage employee engagement and feedback, and allow you to hand processes back to the business so they are embedded and become part of business as usual (BAU), supporting process excellence ambitions and revolutionizing your process management. What is the BPM lifecycle? The BPM lifecycle standardizes the process of implementing and managing business processes inside an organization and is made up of five cyclical stages – design, model, implement, monitor, and optimize – that support continuous improvement and process excellence. 
Whether you are onboarding a new employee, applying for a new credit card, or responding to a customer complaint, these frequently repeated activities, completed in the same precise way each time, are all ripe for process mapping, management, improvement, and even automation, and they are key to an organization's successful ongoing operation.
- The design phase: The first step in the lifecycle is "design." In this phase you start by capturing a thorough understanding of how the process is currently performed. You'll need to interview the people who perform, support, or are impacted by the process, review any pre-existing documents, get clear on unwritten business rules, and observe the process in action. Some helpful questions to ensure you have a detailed understanding of the entire end-to-end process are:
- Do you know the starting point of the process?
- What are all the steps, and in what order do they happen?
- What is the end result of the process?
- Who is responsible for each task, and when does it transition to a new task owner?
- Process dependencies: does it currently integrate with other systems?
- How long does it currently take to complete?
- Who performs each task (a service, a system, or a person)?
- What current documentation supports the process (or what key data points would be included if documentation were to be developed)?
- The model phase: The second step in the lifecycle is "model." Its purpose is to provide a visual representation of the process's current phases: to improve things you must first understand how things are now (as-is) and then plan for how you want them to be in the future (to-be) once changed. It's recommended that the 'to-be' model is socialized far and wide, first for feedback and then for approval. This collaborative approach will flush out any discrepancies and increase buy-in for the new process and its longer-term adoption.
- The implementation phase: The implementation stage is where you test the new model to see if it works in real-life scenarios. This allows you to ensure that everything is working well, that any concerns have been resolved, and that any opportunities can be included before rolling the final process out to a much larger audience. This approach increases confidence and the chances of a stress-free adoption.
- The monitor phase: The fourth step of the BPM lifecycle, "monitor," involves making sure the new business processes are followed and measured in a repeatable manner to determine your return on investment over time. You might choose to develop KPIs that measure success (or the lack of it), such as time or cost to serve, dollar savings, preventable bottlenecks, delays, or potential mistakes.
- The optimize phase: The optimize phase uses the insights captured in phase 4, "monitor," and makes further process tweaks that will make your process even more efficient. Good monitoring systems will enable you to achieve optimization and process excellence: eliminating wasted labor, improving output quality, ensuring process compliance, and shortening speed-to-market.

Unlock the Full Potential of Your Processes: Identify, optimize, and drive efficiency across your organization with Process Manager. Learn more today.

What are the benefits of BPM?

Visibility matters when it comes to your processes. Best-practice BPM tools, like Nintex Process Manager, allow you to identify, optimize, and drive efficiency across your entire enterprise and can:
- Quickly turn complex maps and documents into consistent, compliant, and easy-to-understand process maps.
- Embed process ownership, collaboration, and accountability by handing processes back to the business.
- Optimize processes through intuitive feedback tools used by engaged employees, from the office, at home, or on the road.
- Increase organizational control and ease auditing with mandatory approvals, escalations, and notifications.
- Safeguard operations against staff turnover and knowledge loss, and improve new employee onboarding.
- Increase operational speed and efficiency by mapping, evaluating, identifying, and managing opportunities to improve your processes.
- Create total process visibility with personalized dashboards and live state changes.
- Get real-time process health summaries with team engagement stats and automatic tracking of every existing process, no matter its current state.

What are examples of BPM?

Discovering, mapping, managing, and monitoring your processes with BPM can be applied to any repeatable set of business steps. Two examples of processes that have been improved by BPM are the recruiting, onboarding, and provisioning process and the help desk support process. (A minimal sketch of what such a captured process can look like as structured data appears at the end of this section.)

Goal: Turbocharge recruiting, onboarding, and provisioning

Finding the perfect candidate can be time-consuming and costly, and onboarding and provisioning equipment results in piles of paperwork and lost time. Get new hires productive without delay by mapping your process and using automation to provide new starter forms, approvals, equipment, accounts, software licenses, and more. Fast-tracking onboarding across departments will help your new employee settle in fast and get to work.
- Help employees get to work faster by mapping, automating, and optimizing the onboarding processes so that your new talent can get set up quickly and easily. Empower them from day one and watch them hit the ground running.
- Increase employee retention. The average attrition rate for first-year employees is 13 percent, and much of this is attributed to ineffective onboarding. With seamless onboarding, employees stay longer, cutting hiring costs through lower staff turnover.
- Build trust and alignment. With effective onboarding, your new hires will be well informed about organizational initiatives and goals and eager to perform.

Goal: Improve your help desk support process with digital forms and workflows

It's easy to be more efficient, save time, and make better use of talent when you automate common help desk tasks. By mapping your help desk processes to identify automation areas, you can eliminate repetitious tasks, including assigning tickets for completion, tracking response metrics, and even accessing the support portal itself.
- Accelerate response time by solving problems faster so you can get employees back to work without delay.
- Track service levels and get real-time data with insights into the number of incidents, time to resolution, and user satisfaction.
- Respond from anywhere by providing remote-accessible process applications to handle support requests.

Watch & Learn: How to create a process group overview in Nintex Process Manager

The Nintex Process Manager process group overview shows a high-level view of a complex end-to-end process, defined at the process group level and made up of a series of linked sub-processes. In this video, you will learn how to create a process group overview.

Discover how Nintex can revolutionize your business operations like it did for Coke Florida. Read the full case study now!

Customer stories

Companies around the globe are using Nintex on different kinds of projects.
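To make the idea of a captured process concrete, here is a minimal, tool-agnostic sketch of how the design-phase answers above (steps, owners, hand-offs, dependencies) might be recorded as structured data. The field names and the onboarding steps are illustrative assumptions, not Nintex Process Manager's actual data model.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    name: str              # what happens at this step
    owner: str             # a person, system, or service (illustrative roles)
    depends_on: list[str] = field(default_factory=list)  # upstream systems or steps

@dataclass
class ProcessMap:
    title: str
    steps: list[ProcessStep]

    def handoffs(self) -> list[tuple[str, str]]:
        """List the points where responsibility moves to a new task owner."""
        return [
            (a.name, b.name)
            for a, b in zip(self.steps, self.steps[1:])
            if a.owner != b.owner
        ]

# Hypothetical 'as-is' onboarding process captured during the design phase.
onboarding = ProcessMap(
    title="Employee onboarding and provisioning",
    steps=[
        ProcessStep("Collect new starter form", owner="HR"),
        ProcessStep("Approve equipment request", owner="Line manager"),
        ProcessStep("Create accounts and licenses", owner="IT", depends_on=["HR system"]),
        ProcessStep("Schedule first-day induction", owner="HR"),
    ],
)

print(onboarding.handoffs())  # each hand-off is a candidate for automation or delay
```

Each hand-off the script prints is exactly the kind of transition point the design-phase questions ask about, and a natural place to look for delays worth automating.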
Artificial intelligence (AI) is revolutionizing many industries, and the field of programming is no exception. Linus Torvalds, the creator of the Linux kernel and Git, offers insightful perspectives on how AI is transforming software development. This article delves into the profound changes AI brings to coding, from automating code writing to enhancing collaboration within the open-source community. By integrating natural language processing (NLP) in programming, AI has opened doors to new levels of abstraction, efficiency, and accessibility. However, this powerful technology comes with its own set of challenges and ethical considerations. Automating Code Writing and Review AI tools are increasingly capable of generating code from simple natural language descriptions. Developers can now write code using plain English instructions. This not only streamlines the coding process but also makes it accessible to individuals from diverse backgrounds. AI allows developers to focus on higher-level problem-solving rather than getting bogged down in syntax and syntax-related errors. AI’s role in code review processes is equally transformative. Automation can catch obvious bugs and subtle errors that human reviewers might miss. High-quality code is essential for robust and reliable software. Therefore, AI’s ability to enhance code quality makes it an invaluable asset in the programming toolkit. Additionally, AI tools excel at identifying coding patterns and suggesting improvements that humans might overlook due to familiarity or oversight. This not only helps in maintaining consistently high standards but also accelerates the development process, thereby enabling teams to meet deadlines more effectively. Moreover, AI-driven code writing and review extend the benefits of machine learning algorithms to enhance overall software robustness. These algorithms can predict potential issues by analyzing vast repositories of existing code and historical data. Incorporating such predictive analytics into the development cycle ensures that software is not only timely but also fortified against a wide array of potential vulnerabilities. This dual role of AI—streamlining routine tasks and bolstering security—makes it indispensable to modern-day software development. By enabling the translation of natural language to executable code, AI democratizes programming. More people, including those without formal coding education, can now engage in software development. This lowers the barrier to entry, promoting inclusivity and innovation. The shift from machine code to high-level languages and now to NLP is a natural evolution that broadens the scope of who can participate in coding. Furthermore, educational resources can be more effectively tailored to individual learning speeds and styles, leveraging AI-driven feedback and suggestions. Students and beginners benefit immensely from this personalized approach, accelerating their journey from novice to proficient coder. In addition, AI can identify gaps in knowledge and adapt learning materials to address these shortcomings, thus providing a more comprehensive educational experience. The democratization of programming also fosters a more diverse and dynamic development community. As more people with varying perspectives and problem-solving approaches join the field, the range of innovative solutions increases. This inclusivity not only drives technological advancement but also ensures that software products are more representative of the diverse user base they serve. 
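The AI-assisted review described above is typically driven by large learned models, but the way such a check plugs into a workflow can be illustrated with a deliberately simple, rule-based pass. The sketch below is not an AI model; it only shows the shape of an automated review step that flags suspicious patterns before a human (or a model) looks deeper. The rule set and function name are illustrative.

```python
import ast

def review_source(source: str) -> list[str]:
    """Flag a few common code smells; a stand-in for an AI-driven review pass."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Bare 'except:' clauses silently swallow every error.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare except hides failures")
        # Mutable default arguments are shared across calls.
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(f"line {node.lineno}: mutable default argument in {node.name}()")
    return findings

sample = """
def add_item(item, bucket=[]):
    try:
        bucket.append(item)
    except:
        pass
    return bucket
"""

for finding in review_source(sample):
    print(finding)
```

A learned reviewer would go well beyond fixed rules, but the integration point is the same: source code in, a list of findings out, surfaced before the change is merged.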
In essence, AI broadens the programming landscape, making it more inclusive, adaptive, and innovative. New Level of Abstraction Evolution of Programming Languages Artificial intelligence introduces a new level of abstraction in programming languages. Developers can describe what they want in plain English, and AI systems convert these descriptions into executable code. This development is groundbreaking, as it simplifies the process and bridges the gap between technical and non-technical stakeholders. With this new abstraction level, the traditional approach to learning programming languages may shift. The focus moves from syntax memorization to understanding core programming concepts and logic. This evolution could lead to more intuitive and creative software solutions, enabling a broader range of people to contribute their ideas. The simplification of coding languages paves the way for more interdisciplinary collaborations, where experts from various fields can easily prototype and test software solutions without deep programming expertise. Moreover, the evolution of programming languages under AI’s influence demands a reevaluation of educational curricula. Schools and universities may need to update their syllabi to focus more on logic, problem-solving, and understanding how AI tools work rather than traditional coding syntax. This educational shift would prepare future generations better for a tech landscape increasingly driven by AI and automation. As AI continues to advance, we might see programming become less about technical know-how and more about creativity and strategic thinking. Potential for AI-Specific Languages There’s speculation that AI might eventually develop its own programming languages, tailored to its capabilities. These AI-specific languages could streamline complex coding tasks, reducing the need for extensive human intervention. While traditional developers will still play a crucial role, they might find their responsibilities evolving toward overseeing AI systems rather than writing every line of code. Such a shift would require developers to adapt their skillsets. Understanding AI algorithms, tuning models, and ensuring that AI outputs align with human goals and ethical standards will become increasingly important. This evolution reinforces the need for continuous learning and adaptability in the development community. Moreover, these AI-specific languages could focus on optimizing performance and security, aspects that are paramount in applications ranging from financial technology to healthcare. In the grand scheme, AI-specific languages could serve as a unifying platform for various specialized AIs, enhancing interoperability while minimizing redundant development efforts. Such a shift would enable more efficient use of resources and foster greater innovation across industries. As AI systems mature, they could handle increasingly complex tasks autonomously, leaving human developers to concentrate on strategic planning and high-level oversight. Evolution of Developer Roles Overseeing AI Systems Despite AI’s automation capabilities, developers will not become obsolete. Instead, their roles will shift to overseeing AI systems and ensuring these systems meet human-centric goals. This involves handling edge cases, troubleshooting unexpected behavior, and maintaining alignment with ethical standards. Developers will need to strike a balance between leveraging AI’s efficiency and preserving their critical oversight functions. 
Moreover, developers will play a key role in training and refining AI models. Their expertise in the domain will be vital for creating effective AI tools. This collaborative human-AI partnership can lead to even more remarkable advancements in programming. In essence, developers serve as the critical checkpoint ensuring that AI implementations adhere to human values, regulatory norms, and ethical guidelines. The oversight role also extends to data management and privacy concerns. As AI systems increasingly handle sensitive data, developers must ensure compliance with data protection regulations and ethical data usage. This responsibility adds another layer of complexity to the developer’s role, making it indispensable for safeguarding user trust and maintaining the integrity of AI systems. Thus, far from becoming obsolete, developers are poised to become even more integral to the tech ecosystem. Addressing AI’s Limitations AI-driven programming is not without its challenges. AI systems can make errors, sometimes referred to as “hallucinations,” where the AI produces incorrect or nonsensical outputs. These errors can have significant repercussions, especially in critical applications. Developing robust error detection and correction mechanisms is essential to mitigate such risks. Ensuring the reliability and safety of AI in coding involves continuous monitoring and improvement. Developers must remain vigilant and proactive in identifying and fixing issues. This process will not only improve AI systems but also build trust in their capabilities. The importance of fail-safes and rigorous testing protocols becomes paramount to ensure that AI-driven code meets the high standards required for mission-critical systems. Addressing these limitations also requires a multi-disciplinary approach. Collaborations between AI specialists, software engineers, and domain experts can produce more comprehensive solutions to potential errors. Standardizing best practices for error detection and resolution can further fortify AI’s role in programming. By proactively tackling these challenges, developers can harness AI’s full potential while minimizing risks, thus paving the way for more reliable and innovative software solutions. Impact on Open-Source Community The integration of AI tools in open-source projects can accelerate development processes and enhance code quality. AI can assist in managing large codebases, automating repetitive tasks, and providing intelligent insights. This facilitates faster iteration, allowing contributors to focus on innovative features and improvements. However, the rise of AI in open-source development also raises important questions about contributions and collaboration. AI-generated code needs to be as transparent and accountable as human-written code. Open-source communities will need to establish guidelines and best practices for integrating AI contributions. These guidelines will be crucial in maintaining the collaborative spirit of open-source while ensuring the reliability and integrity of contributed code. Furthermore, AI’s role in open-source can democratize participation by allowing contributors from diverse backgrounds to engage meaningfully. Automated tools can assist newcomers in understanding codebases and identifying areas where they can contribute, regardless of their initial skill level. This inclusivity can invigorate open-source projects with fresh perspectives and innovative solutions, driving the collective progress of the tech community. 
Balancing Open Data and Open Algorithms Artificial intelligence is dramatically reshaping numerous industries, and programming is no exception. Linus Torvalds, the visionary force behind the Linux kernel and Git, provides valuable insights into how AI is transforming the world of software development. This article explores the significant alterations AI introduces to coding, including the automation of code generation and the enhancement of collaborative efforts, particularly within the open-source community. With the incorporation of natural language processing (NLP), AI has elevated programming to new levels of abstraction, efficiency, and accessibility. However, this cutting-edge technology isn’t without its own unique set of challenges and ethical issues. These considerations include the potential loss of jobs, biases in code generated by AI, and the broader influence AI will have on the open-source community. By navigating these complexities, we can better understand the true impact of AI on the future of software development. The collaboration between human ingenuity and AI could potentially lead to even more groundbreaking innovations in the field.
Originally published by New Context.

When referring to the global network of computing resources that house and drive today's data and information interchanges, many use the term "cloud" or "cloud computing." From a holistic vantage point, this nomenclature has merit, similar to the expression "network" used when describing a LAN, WAN, or MAN. Yet, just as these networks typically contain many connected computers and other devices, most cloud deployments are a collection of multiple interconnected clouds.

In today's Information Age, the major threats that governments and enterprises (both local and global) face are digital in nature. As data is considered by many to be the most precious commodity (its use as a substitute for currency in tax payments has even been considered), its acquisition and exchange are critical activities that take place in the cloud, or clouds. Thus, when considering cloud security vulnerabilities, the analysis must include all cloud deployments that may impact or interact with your systems. Before devising a plan to defend against these threats, we first need to recognize and understand the various types of cloud deployment.

When you scan the skies on most days, the vista is usually dotted with multiple clouds. However, when you mention cloud computing, many conjure up a single structure with virtually infinite connections to networks that span the globe, similar to "the internet." Exactly. In fact, a cloud is the part of the internet that connects a user or an enterprise with the computing resources that store and process their data. And "the cloud" generically refers to the complex constellation of data resources on the internet to which we are all connected.

When an organization decides to undergo a cloud migration, data security should be at the top of the list of essential considerations that must be addressed from the outset. This includes knowing the types of potential security threats for different types of cloud deployments, such as multicloud or hybrid cloud. Neither the cloud nor the services and vendors that comprise it are static or fixed. Even if you begin with a single cloud or service provider, your cloud network is likely to evolve due to client demands or changes in vendors or services.

Cloud environments are usually classified as either public or private. If your deployment consists of more than one network, regardless of whether they are public or private, then it is a multicloud. For enterprises, most cloud migrations or deployments are multicloud, which presents distinct challenges for securing data and managing access, along with its own set of potential security vulnerabilities. Multicloud deployments can be quite complex, as they typically involve different vendors and various tools. As such, a multicloud may require advanced tools and solutions to successfully address security vulnerabilities, which we will discuss after taking a look at hybrid cloud environments.

If your cloud deployment consists of an integration of both public and private networks, then it is a hybrid cloud. Hybrid clouds typically employ automation and offer more flexibility than a multicloud; however, both carry security threats. Hybrid clouds share some cloud security vulnerabilities with multiclouds, and both architectures require a series of deliberate steps to minimize them.
Although they are different types of virtual network, multiclouds and hybrid clouds share the same security objective: preventing unauthorized access to data or to any part of the system. The measures that support that objective are not all-inclusive, but they do provide essential elements that should be part of any planned cloud migration. Whether you are in the midst of a digital transformation or have finally concluded that the success of your business necessitates one, cloud migration requires that you carefully consider the security vulnerabilities your cloud deployment may face.

Probably the most significant decision concerning your cloud security is who will handle your digital transformation consulting. This partner will work closely with you to ensure that your migration successfully addresses the cloud security vulnerabilities of the network configuration that best suits your needs. Therefore, you should ensure that your choice has the requisite computing and tooling expertise and experience, as well as a security strategy that will safeguard your data and meet compliance requirements regardless of your cloud deployment.
The philanthropic ventures of Oprah Winfrey

From a childhood in poverty to becoming the first Black woman billionaire, Oprah has become a historic figure, touching the lives of many across the globe. Through her philanthropic ventures, Oprah made it her mission "to lead, to educate, to uplift, to inspire and to empower women and children throughout the world", which she has pursued by donating approximately US$72mn to worthy organisations.

In a bid to encourage people to make a difference around the world, Oprah launched Oprah's Angel Network in 1998. Her vision was to inspire individuals by creating new opportunities that enable underserved women and children to rise to their potential. All funds went straight to the charity programmes, and Oprah herself covered all administrative costs. The organisation had raised a whopping US$80mn by 2010, which went towards various causes, such as supporting women's shelters, before it stopped taking donations and eventually dissolved.

"Through her foundation, Oprah Winfrey has taken her ability to convene and highlight issues close to her heart and translate that into action," says Caroline Underwood, CEO of Philanthropy Company. "She has used the power of her personality, celebrity and reach to tackle issues affecting millions."

As a fierce advocate for girls and women, it's no wonder that Oprah has donated to the Time's Up campaign, which aims to create a society free of gender-based discrimination. Another venture that Oprah supports is N Street Village, a non-profit providing housing and services for homeless and low-income women. But Oprah's philanthropic efforts don't stop in the US; the Oprah Winfrey Leadership Academy for Girls provides just one example. Since founding the academy in 2007, Oprah is said to have spent over US$140mn on the school, providing a private education for underprivileged South African girls in grades eight to 12.
- April 22, 2021
- Posted by: Aanchal Iyer
- Category: Uncategorized

What is AI?

From self-driving cars to Siri, Artificial Intelligence (AI) is developing rapidly. While science fiction frequently portrays AI as robots with human-like characteristics, AI can comprise anything from IBM's Watson to Google's search algorithms to autonomous weapons. AI today is properly known as weak AI (or narrow AI), in that it is designed to perform a narrow task (for example, only facial recognition, internet searches, or driving a car). However, many researchers have a long-term goal of creating general AI (AGI or strong AI). While narrow AI outdoes humans at specific tasks, such as solving equations or playing chess, AGI, once developed, would outperform humans at every cognitive task.

Benefits and Safety with Regards to AI

In the long run, the most important question is what will happen if the quest for strong AI is realized and it leads to the creation of an AI system that surpasses human intellect. Such a superintelligent system could help us reduce disease, war, and poverty and improve healthcare services, transportation, and education in smart cities. Other applications that benefit from AI systems in the public sector include energy, the food supply chain, and environmental management. However, such a system could also be dangerous unless we learn to align our goals with those of a superintelligent AI. Some experts doubt that such a strong AI will ever be created, while others believe that, once developed, it could unintentionally cause great harm.

How Can AI Be Dangerous?

A superintelligent AI is unlikely to have emotions such as hate or love and will not turn compassionate or nasty. However, experts believe that the following two scenarios are possible:
- The AI is programmed to do something devastating: Autonomous weapons are AI systems that are programmed to destroy, and if such weapons fall into the wrong hands, they could cause great harm. Moreover, an AI arms race could lead to an AI war that is equally dangerous.
- The AI is programmed to achieve something beneficial, but it adopts a damaging method to achieve its objective: This can happen if we do not fully align our goals with those of the AI, which is quite difficult. For example, if you ask a self-driving car to take you to the airport as fast as possible, it may actually do that, but not before you are chased by the police for exceeding the speed limit and causing damage to people and public property along the way. The car has still done what you asked it to do.

AI safety and ethics are paramount and should be made a priority in the design and implementation of AI systems. AI ethics exists to avoid the societal and individual dangers caused by the abuse, misuse, poor design, or unintended negative consequences of AI systems, and to help guarantee that the creation and implementation of AI is safe and responsible.

With rapid advancements in computing power and access to huge amounts of big data, AI and Machine Learning (ML) systems will continue to evolve and improve. As always, with great power comes great responsibility. Despite the benefits that AI brings to the world, poorly designed or misused AI systems could cause irreparable damage to society as a whole. The development of AI systems should always be responsible and directed towards sustainable public benefit.
The world of technology is expanding and evolving at a phenomenal rate. Gadgets, devices and software we didn't even think were possible about ten years ago are emerging and altering life as we know it. Two of the biggest tech developments earning the media's attention at the moment are augmented reality (AR) and virtual reality (VR).

Once merely the stuff of science fiction films, AR and VR have propelled onto the tech scene and are taking the world by storm. However, while people try to get a grasp on how each technology works, the question on most lips is: what is the difference between the two? Although often mentioned in the same breath, AR and VR have significant differences.

AR alters our view of the real world by overlaying virtual images and 3D graphics over what we are actually seeing in real life. With Microsoft's HoloLens system, one of the most popular pieces of AR tech, users can view the room or environment they are in via the built-in camera in the headset. But the device also enables users to see computer-generated objects pop into the real world. Retailers can make products come to life and demonstrate how they work, or vehicle manufacturers can see a car prototype take physical form in front of them.

Where AR keeps users grounded in their current, real environment, virtual reality is a more immersive technology that throws users into a whole new virtual world, which isn't connected to the real world. These virtual realities may look and feel real, as in a flight simulator, or they may be completely made up, like the fantasy world in a video game. In either case, they're entirely created by a computer. Loaded with sensors, VR headsets work to block out the current world and completely immerse the user in a simulated environment by affecting sight, hearing, smell and even touch.

How are both being used currently?

Both AR and VR have been receiving an increased amount of attention in recent months, with the likes of Snapchat looking into the development of augmented reality glasses, and Facebook investing $2 billion in the Oculus Rift VR device. Even some big-name car manufacturers are getting ahead of the game: Audi, Volvo and Toyota are just three of the major players bringing VR and AR technologies into the car-buying process, not to mention BMW's Mini, which has developed its own AR driving goggles to enhance the driving experience.

AR and VR are also expected to make waves across a number of other industries. As well as gaming and entertainment, they are set to transform sectors such as health, education and even the military. In fact, the science and technology sector has been using VR for a number of years; NASA has used VR technology to train astronauts for spacewalks since 1992.

In the healthcare sector, a doctor in Miami used Google's head-mounted VR device, Cardboard, to save the life of a baby who was born with half a heart and only one lung. He used Cardboard to see 3D images of the baby's heart so he could plan and figure out how to conduct the surgery. In terms of AR, AccuVein is a medical handheld scanner that can show the location of veins under a patient's skin, making it easier for doctors and nurses to insert injections and saving time and discomfort for the patient. AR and VR technologies also have the power to transform the property market.
Integrating VR into the house-buying process will mean that instead of physically going to view a property, potential buyers will be able to explore a house without having to travel to see it first. And AR devices will be useful when it comes to home renovations, enabling users to see how a piece of furniture will fit in a living room, for instance. When it comes to education, the immersive nature of VR technology also means that educators will be able to create more experiential learning experiences for children, a potentially game-changing development given that children often learn in different ways.

Which one will have more success in the business world?

While it's still relatively early days for both technologies, there is perhaps more of an enterprise opportunity for AR, as the devices are intended to be discreet and can be used from anywhere, even on the move. This is potentially where the issue lies for VR: it blocks out the real world and is restrictive in where it can be used, and so may not work as well in the workplace. That's not to say that we won't see VR making waves in the enterprise world. There are lots of examples where VR devices could prove extremely valuable, particularly when it comes to the healthcare sector, education and training. However, as things currently stand, healthcare is the one main sector that is getting on board with VR; adoption in other industries is happening at a much slower rate.

The key reason for this probably lies in the fact that AR's benefits and uses are clearer for businesses, and so, compared to VR, AR already seems to be getting a lot of traction in the business space. AR has the potential to completely revolutionise the way various industries operate and boasts a range of benefits that can aid employees in their work. By wearing an AR headset, workers can pull up useful information onto the display, which can improve their ability and knowledge in a particular job. Think of an engineer tasked with fixing a component on an aeroplane: he could use an AR device to look at the area he's repairing and bring up details about how to carry out the work, or check when that particular part was last maintained. Users can access all manner of important information right in front of their face, without having to use a separate computer or refer to a paper-based manual, making life and the world of work considerably easier and more efficient.

The possibilities are endless for both VR and AR. Of course, there is still a long way to go before such technology becomes mainstream, and developers are working hard to develop new devices and iron out possible issues to make them more accessible. It's an exciting time to see which direction AR and VR head in, as both have the potential to greatly impact the business world in the near future. The technologies look set to be the next major technology shift, following the development of the PC, the internet and the smartphone, and businesses need to decide how they want to play.

Sourced from Matt Hunt, CEO, Apadmi Enterprise
What are waterfall charts? Simply put – a waterfall chart is a diagram that is designed to represent cumulative values of a series of data in a sequential manner. This can be effective in various situations, including conducting a financial analysis. How waterfall charts can help a financial analyst If an analyst wants to study how profitable an organization is, he/she needs to build waterfall charts with the data. That way, he/she will be able to focus on the following, among other factors, at a glance. - The profits made by the organization on a yearly basis. - The investments made by the organization during specific time periods. - The time periods when the company made the most profits or sustained the worst losses. - The sources of revenue that proved to be most valuable for the organization. With all this data, the analyst can get down to proposing strategies for the organization to improve profits and manage investments better, reducing losses as much as possible. What to remember when building waterfall charts There are a few best practices that an analyst can follow, to ensure waterfall charts look better, and are easier to understand. Here are a couple of those: - Using contrasting color schemes to indicate the nature of the values is a good idea. For instance, the analyst can use warm colors (red, orange, brown, etc.) to indicate increases in values and cool ones (blue, green, magenta, etc.) to indicate decreases in values. - Mentioning the values of each segment over its corresponding column on the chart is nice, since it helps viewers know the value instantly, instead of looking at the axes and trying to gauge the values from there. Using Collabion Charts for SharePoint, it is possible for financial analysts to build beautiful Waterfall charts by connecting to different sources containing business data, such as SharePoint lists, SQL Server/Oracle databases, Microsoft Excel workbooks, CSV files, BDS, BDC, and many others through ODBC. More information on how to build these charts can be found here.
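To make the cumulative idea concrete, here is a minimal sketch of a waterfall chart built with Python and matplotlib rather than a SharePoint-connected tool; the figures and category labels are invented for illustration. Each bar starts where the running total of the previous bars ends, which is what gives the chart its waterfall shape, and it follows the two best practices above (contrasting colors and value labels on each segment).

```python
import matplotlib.pyplot as plt

# Illustrative period-by-period changes (positive = gain, negative = loss).
labels = ["Start", "Q1 sales", "Q2 sales", "Investments", "Q3 sales", "Write-offs"]
deltas = [120, 45, 30, -60, 55, -25]

# Each bar is drawn on top of the cumulative total of everything before it.
bottoms, running = [], 0
for d in deltas:
    bottoms.append(running)
    running += d

# Warm color for increases, cool color for decreases, per the tip above.
colors = ["steelblue" if d < 0 else "orange" for d in deltas]
fig, ax = plt.subplots()
bars = ax.bar(labels, deltas, bottom=bottoms, color=colors)

# Label each segment with its value so viewers don't have to read the axis.
for bar, d in zip(bars, deltas):
    ax.text(bar.get_x() + bar.get_width() / 2,
            bar.get_y() + bar.get_height() / 2,
            f"{d:+}", ha="center", va="center")

ax.set_ylabel("Value")
ax.set_title("Cumulative waterfall (illustrative data)")
plt.show()
```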
Companies need to restrict who accesses which types of data and what they can do with it in order to keep it secure. Identity and access management (IAM) manages identities and limits access to the systems, apps, and data inside an organization. In simple words, IAM is a cybersecurity framework that ensures the right people (and things) have access to the right resources at the right time. At the heart of IAM is Active Directory (AD), which serves as the central repository for managing user identities, permissions, and access levels. Through AD, organizations can maintain stringent control over access rights and the confidentiality of their digital assets.

What is Microsoft Active Directory?

Microsoft Active Directory (AD) is a core Windows Server technology. It works as a centralized directory service for managing user identities, devices, and access permissions across an entire Windows domain network. One of the primary functions of Active Directory is managing user identities: it stores information about users, including usernames, passwords, and contact details. Active Directory also plays a vital role in managing devices within a network by enabling administrators to register and organize them.

Why Identity and Access Management is Important

Nowadays, the bulk of security breaches originate from compromised credentials. This underscores the critical importance of access management for individuals and organizations in the digital world. Implementing effective access management strategies is paramount to ensuring that only authorized individuals have access to sensitive information or systems. Here's why access management is important:

First Line of Defense Against Attacks

Access management serves as the primary barrier against cyber-attacks. It gives you proper control over who can access your data, software, or systems, acting as the initial defense layer and preventing unauthorized users from infiltrating networks or compromising data.

Maintains Regulatory Compliance

IAM helps ensure adherence to regulatory requirements and industry standards. Through it, organizations can demonstrate compliance with regulations such as the General Data Protection Regulation (GDPR) or the Payment Card Industry Data Security Standard (PCI DSS). This mitigates legal risk and enhances trust among customers and stakeholders.

Protects Data Confidentiality

Access management safeguards the confidentiality of sensitive data. It prevents unauthorized individuals from viewing or manipulating confidential information, helping to maintain its integrity and confidentiality and reducing the likelihood of breaches.

Automated Security Features

Access management systems frequently incorporate automation, streamlining security processes and mitigating the risk of human error. Automation enhances operational efficiency by taking over routine tasks and ensuring consistent security protocols are in place.

Ease of Use for Users

With simplified access, users can easily transition between applications, allowing them to focus on the task at hand without being impeded by cumbersome security measures. You facilitate smoother workflows and enhance organizational efficiency without compromising the security of your data. They work better; you run safer.

How Active Directory Identity and Access Management Works

Active Directory Identity and Access Management functions as a central hub for controlling user access and permissions, ensuring secure access to digital resources.
This system employs protocols such as the Lightweight Directory Access Protocol (LDAP) and Kerberos for directory access and authentication, and Group Policy for enforcing security settings. IAM encompasses several key functionalities, each playing a vital role in safeguarding resources and enhancing the user experience. Here is how it works:

Authentication

Authentication is the cornerstone of IAM: it ensures that only legitimate users gain access to resources. The sign-in process verifies users' identities through methods like passwords, biometrics, or security tokens. Multi-factor authentication adds an extra layer of security by requiring users to provide additional proof of identity, such as a code sent to their mobile device, further mitigating the risk of unauthorized access.

Authorization and Restrictions

Once authenticated, users' access privileges are determined through authorization mechanisms. Role-Based Access Control assigns permissions based on predefined roles, and access can be further restricted based on factors like user location, device health, or sign-in risk level.

Single Sign-On (SSO)

Single Sign-On (SSO) allows you to access multiple applications with a single set of credentials, eliminating the need for multiple logins. It enhances productivity and reduces password fatigue while centralizing access management for administrators.

Azure AD Connect

Azure AD Connect bridges the gap between on-premises and cloud environments. This integration enables users to work with a unified identity across cloud and on-premises resources.

Monitoring and Reporting

IAM also relies on mechanisms that protect the integrity of user identities and access privileges, such as continuously monitoring user activities, authentication attempts, and access patterns to identify anomalous behavior. These approaches can detect suspicious activity in real time and respond proactively, automating the identification and mitigation of threats and reducing the burden on IT administrators. Effective IAM requires continuous monitoring and reporting to ensure compliance and detect potential security breaches: detailed logs and reports provide insight into access patterns, audit trails, and usage statistics, enabling administrators to identify anomalies, take proactive measures, and maintain regulatory compliance.

Multi-Factor Authentication for Active Directory

Gone are the days when a mere password was sufficient to secure access to critical systems and data. Multi-factor authentication (MFA) is a potent solution that requires users to present two or more verification factors before being granted access to digital resources. This extra layer of protection significantly mitigates the threat posed by unauthorized access attempts, making it an indispensable tool in the fight against cyber threats. The benefits of MFA are vast, but the most common are:
- Enhanced security;
- Multiple barriers that frustrate breach attempts;
- Protection against phishing; and
- Compliance with industry standards.

A minimal sketch of how a time-based second factor can be verified follows below.

Key Benefits of Active Directory IAM

Review Reports and Audit User Activity in Your System

As an admin, you are able to see what all these identities are up to. This helps you spot risks before they turn into real, damaging problems, or allows you to understand what went wrong if you are in the middle of a crisis.

Risk Calculation and Multi-Factor Authentication

Access management serves as the first wall against malicious actors.
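As promised above, here is a minimal sketch of verifying a time-based one-time password (TOTP), the kind of rotating six-digit code an authenticator app produces. It uses the pyotp library rather than Active Directory or Azure MFA themselves, and the secret handling is deliberately simplified, so treat it as a conceptual sketch only; the user name and issuer are invented.

```python
import pyotp

# In practice the per-user secret is generated once at enrollment and stored
# server-side (encrypted); here it is created inline purely for illustration.
user_secret = pyotp.random_base32()
totp = pyotp.TOTP(user_secret)

# Enrollment: the user scans this URI as a QR code into their authenticator app.
print(totp.provisioning_uri(name="alice@example.com", issuer_name="Example Corp"))

def second_factor_ok(submitted_code: str) -> bool:
    """Return True only if the submitted code matches the current time window."""
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

# The password check happens first; the TOTP check is the additional factor.
print(second_factor_ok(totp.now()))   # True: code read from the user's device
print(second_factor_ok("000000"))     # False (almost certainly): guessed code
```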
Authentication mechanisms like these help organizations mitigate the risk of unauthorized access and potential breaches, but not every sign-in carries the same risk. There may be situations where you can be reasonably confident that the user is who they say they are and allow them instant access; in the on-prem world, for instance, you may deem company desktops located within your building safe devices. You might also want your users to be able to access data and apps remotely. The power of the cloud is a worthy goal to pursue, but just because a login attempt from a personal laptop presents the right password doesn't necessarily mean it comes from the right end user. Perhaps the laptop was stolen, along with the notepad on which the user wrote down their password. Or perhaps a completely unknown device acquired the credentials through a data breach.

Use One Identity for Access

When a user has to sign in with one set of credentials for one service and another set for the next, problems follow. Passwords get reused across services, or, when you force different passwords, users may start storing them insecurely in order to remember them. With Active Directory, you can allow one identity to access any app. When each point of login is a potential security hole, bundling them all together and enforcing strong passwords reduces your risk.

Protect Access to Sensitive Data and Apps

Not every user requires access to every application or dataset. Following the principle of least privilege, access should be granted on an as-needed basis: users should have precisely the level of access necessary to perform their tasks effectively. Allowing more access than necessary poses an unacceptable risk. In the event of a compromised account or a user turning hostile, the potential for damage is significant. By limiting access, however, the extent of potential harm can be mitigated. (A minimal sketch of such a role-based, least-privilege check follows at the end of this section.)

Secure Your Active Directory with Expert Guidance (from us)

Your organization's security is paramount in an ever-evolving digital landscape. With Active Directory Identity and Access Management at the forefront, you can proactively safeguard your digital assets against emerging cyber threats. Ready to enhance your security posture and ensure regulatory compliance? Don't wait until it's too late. Contact us now to schedule a consultation and take the first step towards strengthening your digital defenses.
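As referenced above, here is a minimal, self-contained sketch of a role-based, least-privilege check. The roles, permissions, and user names are invented for illustration and are not Active Directory defaults; in a real deployment the user's group membership would come from AD (for example via LDAP or Microsoft Graph) rather than from a hard-coded dictionary.

```python
# Illustrative role-to-permission mapping: each role gets only what it needs.
ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "read_user_profile"},
    "hr":       {"read_user_profile", "update_user_profile"},
    "finance":  {"read_invoices", "approve_invoices"},
    "it_admin": {"reset_password", "read_user_profile", "manage_groups"},
}

# Hypothetical group membership that would normally be pulled from AD.
USER_ROLES = {
    "alice": {"helpdesk"},
    "bob":   {"finance"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles carries the permission."""
    roles = USER_ROLES.get(user, set())          # unknown users get no roles
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

print(is_allowed("alice", "reset_password"))    # True: within her helpdesk role
print(is_allowed("alice", "approve_invoices"))  # False: least privilege in action
print(is_allowed("mallory", "manage_groups"))   # False: unknown account, no access
```

The point of the sketch is the shape of the decision: access is derived from roles, unknown identities default to nothing, and widening a role's permissions is a deliberate, reviewable change rather than a per-user exception.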
Technology is innovating and revolutionizing the world at a rapid pace through the application of Machine Learning. Machine Learning (ML) and Artificial Intelligence (AI) might appear to be the same, but in reality ML is an application of AI that enables a system to learn automatically from data input. The functional capabilities of ML drive operational efficiency and automation across various industries.

Technological Innovation for Convenience

Managing work manually is tedious and comparatively unproductive, and this is where Artificial Intelligence has largely overtaken the age-old system of manual labor. With the world moving at such a fast pace, monitoring has become a constraint for most organizations; for this very reason, Artificial Intelligence and Machine Learning are used as tools of convenience rather than just pieces of technology. We have seen how accounting systems replaced ledger books, while processes have been set up to align machines with organizational requirements and balance everyone's demands. With the way Artificial Intelligence is advancing, however, this technology is quickly going to change the way processes function. Not only will trends on social media be affected, but marketing will also see a complete makeover through the use of Artificial Intelligence.

The Effect on Various Fields

When it comes to Artificial Intelligence, everybody wants a taste of it. From marketing experts and tech innovators to decision-makers in the education sector, Artificial Intelligence holds the capability to pave the path to a healthy future.

Artificial Intelligence has been designed to provide the utmost customer satisfaction. To derive maximum results from the nuances of AI, customer-centric processes will need to align their business metrics with the logic of this latest technology. As Big Data evolves, machine learning will continue to grow with it. Digital marketers are wrapping their heads around Artificial Intelligence to produce the most efficient results with minimal effort. AI algorithms will be used to predict trends and analyze customers, and these insights are aimed at helping marketers build patterns that drive organizational results. In the future, it seems that every basic customer need will be taken care of through sophisticated automation and robotic algorithms.

The healthcare industry is one of the most consequential industries in the world today; simply put, it has the greatest effect on society. Through the use of Artificial Intelligence and Machine Learning, doctors hope to be able to prevent the deadliest diseases, including cancer and other life-shortening conditions. Robot assistants, intelligent prostheses, and other technological advances are pushing the healthcare sector toward a constantly evolving future.

In the financial sector, it's vital to ensure that companies can secure their operations by reducing risk and increasing their profits. Through the use of Artificial Intelligence, companies can build elaborate predictive models that mitigate the risk of onboarding risky clients and processes, whether that means signing on high-risk clients, accepting risky payments, or underwriting hazardous loans.
No matter what a company's requirement might be, Artificial Intelligence is a one-stop shop when it comes to preventing fraudulent activity in day-to-day operations; this, in turn, leads to cost savings, profit enhancement, and risk reduction within every organizational vertical. (A toy sketch of the kind of risk-scoring model described above follows at the end of this article.)

We are steadily heading towards a future marked by the rise of robotics and automation, and this will not be restricted to the medical sector alone: intelligent drones, manufacturing facilities, and other industries will also benefit. AI assistants like Siri and Cortana have already seen the light of day, and this is just the beginning; more and more companies are going to take these capabilities to a new level. As military operations begin to seek advantages from mechanized drones, it won't be long before e-commerce companies like Amazon start to deliver their products by drone. The potential is endless, and so are the possibilities. In the end, it is all about using technology in the right manner to ensure the appropriate benefits are driven in the right direction.
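As referenced above, here is a toy sketch of the kind of client risk-scoring model the finance paragraph alludes to. It uses scikit-learn's logistic regression on a tiny, fabricated dataset (the features and labels are invented), so it only illustrates the workflow of fitting a model and ranking applicants by predicted risk, not a production credit model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated historical applicants: [debt-to-income ratio, late payments, years as customer]
X_train = np.array([
    [0.10, 0, 8], [0.45, 3, 1], [0.30, 1, 4], [0.70, 5, 0],
    [0.20, 0, 6], [0.55, 2, 2], [0.15, 0, 9], [0.65, 4, 1],
])
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = turned out to be a risky client

model = LogisticRegression()
model.fit(X_train, y_train)

# Score new applicants: probability of the 'risky' class, highest first.
new_applicants = np.array([[0.25, 0, 5], [0.60, 3, 1]])
risk_scores = model.predict_proba(new_applicants)[:, 1]
for features, score in sorted(zip(new_applicants.tolist(), risk_scores),
                              key=lambda pair: -pair[1]):
    print(f"applicant {features}: estimated risk {score:.2f}")
```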
Security Event vs Incident: Understanding the differences between a Security Event vs an Incident Security Event vs Incident: Understanding the difference between a security event and an incident is crucial when safeguarding your digital assets. In today’s interconnected world, where cyber threats are rampant, having a solid grasp of these terms can help you effectively respond to potential risks. A security event refers to any occurrence that has the potential to compromise the confidentiality, integrity, or availability of your data or systems. It could be something as simple as a failed login attempt or suspicious network activity. On the other hand, an incident represents a confirmed and actionable security breach that requires immediate attention and response. - Understanding the difference between a security event vs incident is crucial for effective risk management. While a security event refers to any occurrence that could potentially compromise the security of a system, an incident is a confirmed breach or violation that has caused actual harm or damage. - Contextual differences play a significant role in distinguishing between security events and incidents. Factors such as intent, severity, and impact on business operations help determine whether an event should be classified as an incident. - Conducting a thorough impact analysis is essential to assess the severity of a security event or incident. This analysis includes evaluating the potential harm, financial losses, reputational damage, and regulatory compliance implications. - Understanding the time frame insights is crucial in effectively managing security events and incidents. Prompt identification, containment, eradication, and recovery are key steps to minimize the impact and prevent further damage. - Real-life examples provide valuable insights into the consequences of security events and incidents. Learning from real-life examples helps organizations enhance their incident response strategies and strengthen their security posture. - Implementing effective response strategies is vital to mitigate the impact of security incidents. This includes establishing incident response teams, defining roles and responsibilities, and implementing incident response plans. On this page: - Security Event vs Incident: Understanding Basics - Contextual Differences - Security Event vs Incident: Impact Analysis - Security Event vs Incident: Time Frame Insights - Response Strategies - Security Event vs Incident: Risk Management Role - Security Event vs Incident: Strategy Overviews - Security Event vs Incident: Post-Incident Focus - Security Event vs Incident: Frequently Asked Questions Security Event vs Incident: Understanding Basics An incident is an unexpected and disruptive occurrence that deviates from the norm, posing potential threats to organizations. It can be a security breach, a system failure, or any other event that compromises the integrity of your systems or data. Incidents require immediate attention and swift resolution to mitigate the risks they pose. An incident disrupts the normal functioning of your organization’s operations, potentially leading to financial losses, reputational damage, and legal consequences. These incidents can come in various forms, such as unauthorized access attempts, malware infections, or data breaches. The impact of incidents can be severe if not addressed promptly and effectively. 
It is crucial to understand that incidents are not just inconveniences but serious security threats that demand your attention. Ignoring or delaying their resolution can result in further damage and escalation of the situation. Therefore, it is essential to have proper incident response protocols in place to detect, analyze, contain, eradicate, and recover from incidents efficiently. Security Event Defined A security event refers to changes within your organization’s systems or network architecture. Unlike incidents with negative implications, events can have positive and negative impacts on your organization. Events can be planned or unplanned occurrences that affect the overall security posture of your systems. Positive security events include routine system updates, software installations, or scheduled maintenance activities to enhance your infrastructure’s security. On the other hand, negative security events encompass unauthorized access attempts, system vulnerabilities being exploited, or suspicious network traffic patterns indicating a potential attack. While events may not always require immediate action like incidents, they should not be ignored. Certain events may indicate underlying vulnerabilities or weaknesses in your security measures. By paying attention to these events and taking appropriate actions when necessary, you can proactively address potential issues before they escalate into full-blown incidents. Security Event vs Incident: Key Differences The key differences between incidents and events lie in their nature, urgency, and impact. Incidents are unexpected occurrences that demand immediate attention due to their detrimental effects on your organization’s security and operations. They can disrupt your systems, compromise sensitive data, and jeopardize the trust of your customers or stakeholders. On the other hand, events are changes within your systems or network architecture that can have lasting impacts but may not necessarily require immediate action. While events can indicate potential vulnerabilities or threats, they do not always result in immediate harm or disruption. However, ignoring significant events can lead to incidents if left unaddressed. When it comes to security contexts, both incidents and events have an impact on your organization. Incidents, such as data breaches or cyberattacks, can expose vulnerabilities in your systems and highlight areas that need improvement. They can harm your organization’s security posture, potentially leading to financial losses, reputation damage, and legal consequences. On the other hand, events refer to any occurrence that could risk your organization’s security. However, they can be managed effectively with the right measures. Understanding these security contexts is crucial for effective risk management. By analyzing incidents, you gain valuable insights into the weaknesses in your security infrastructure and processes. This knowledge allows you to address those vulnerabilities and strengthen your overall security posture proactively. It enables you to identify potential threats before they escalate into major incidents. While incidents demand immediate attention and response, events require a different approach. With events, you can anticipate potential risks and implement preventive measures. By managing events effectively, you can minimize their impact on your organization’s security. The impacts of incidents and events on organizations can vary significantly. 
Incidents often result in tangible consequences that affect your organization’s operations and stability. They can lead to financial losses due to the costs of incident response, recovery efforts, legal proceedings, and regulatory fines. Moreover, incidents can damage your organization’s reputation among customers, partners, and stakeholders. In contrast, events may not always have such immediate or severe consequences. However, this does not diminish their importance. Managing events effectively is essential for maintaining organizational stability and preventing them from escalating into major incidents. By recognizing potential risks early on and implementing appropriate controls or countermeasures, you can mitigate the impact of events on your organization’s security. Establishing robust incident response and event management processes is crucial to ensure organizational stability amidst the varying impacts of incidents and events. This includes developing incident response plans, conducting regular security assessments, implementing preventive measures, and fostering a culture of security awareness among your employees. Security Event vs Incident: Impact Analysis When it comes to incidents, the consequences can be far-reaching and significantly impact various aspects of an organization. First and foremost, incidents can disrupt normal operations, causing delays, downtime, and even complete shutdowns. This disruption can result in financial losses and damage the organization’s reputation. Prompt and effective incident response is crucial in mitigating these negative consequences. By responding swiftly and efficiently to incidents, organizations can minimize the disruption caused and reduce the overall impact on their operations. This involves identifying and containing the incident, investigating its root cause, and implementing measures to prevent similar incidents from occurring in the future. In addition to operational disruptions, incidents can also have a detrimental effect on an organization’s reputation. News of security breaches or data leaks can spread quickly, damaging trust and confidence among customers, partners, and stakeholders. The loss of trust can lead to a decline in business opportunities and revenue. Organizations must prioritize transparency and communication to mitigate the negative consequences of incidents on reputation. By being open about the incident, acknowledging any mistakes or shortcomings, and taking steps to address the issue, organizations can demonstrate their commitment to resolving the situation and rebuilding trust. Stability is another critical area affected by incidents. A major incident can create instability within an organization’s internal systems and processes. It may disrupt employee workflows, compromise data integrity, or even compromise critical infrastructure. This instability not only hampers day-to-day operations but also poses risks to the organization’s overall security posture. Organizations must focus on effective risk mitigation to maintain stability in the face of incidents. This involves conducting thorough vulnerability assessments and implementing appropriate controls to minimize potential risks. Organizations can proactively address weaknesses in their systems and processes to enhance their resilience against future incidents. Security Event Consequences Events also have consequences for an organization’s security posture. 
While some events may have negative impacts, others can present opportunities for improvement and growth. It is crucial to evaluate the consequences of events to manage risks effectively. Negative consequences of events can include increased vulnerability to security threats, compromised data integrity, or even physical damage to infrastructure. These consequences can affect an organization’s operations and risk management strategies. It is essential to address these issues promptly and implement measures to prevent similar events from occurring in the future. On the positive side, events can catalyze change and improvement. They can highlight weaknesses in existing security measures and provide valuable insights into areas that require attention. By leveraging these insights, organizations can strengthen their security posture and overall resilience. Evaluating event consequences is vital for effective risk mitigation. Organizations must analyze the impact of events on their operations, identify any vulnerabilities exposed, and develop strategies to address them. This proactive approach enables organizations to minimize potential risks and improve their ability to respond effectively to future events. Security Event vs Incident: Time Frame Insights Incidents can vary in duration, depending on the severity and complexity of the situation. Swift resolution is crucial to minimize the impact of incidents on your organization. The duration of an incident can range from a few hours to several days. During an incident, it is important to respond promptly and efficiently. Immediate actions are taken to contain and mitigate the effects of the incident. This includes identifying the root cause, isolating affected systems, and implementing temporary fixes or workarounds. Once the initial response is underway, a thorough investigation takes place to understand the full extent of the incident and its impact on your organization’s operations. This investigation may involve forensic analysis, data recovery, and collaboration with internal teams or external experts. The duration of an incident also depends on factors such as communication channels, coordination among teams, availability of resources, and complexity of systems involved. Resolving incidents may require extensive troubleshooting, testing, and implementation of permanent solutions. Security Event Duration Events can have varying durations based on their planning and execution. Unlike incidents that are often unexpected disruptions, events are typically planned occurrences that serve specific purposes for your organization. Short-term events such as conferences, seminars, or product launches may last from a few hours to a few days. These events create awareness, promote products or services, or facilitate networking opportunities. On the other hand, long-term events like trade shows or exhibitions can span several days or even weeks. These events provide platforms for showcasing products or innovations to a wider audience. Evaluating the duration of events is essential as part of effective risk management. Understanding how long an event will last allows you to allocate resources better, plan for contingencies, and ensure smooth operations. Furthermore, considering an event’s potential short-term or long-term impacts allows you to assess its success in achieving desired outcomes. This evaluation can help you identify areas for improvement and make informed decisions for future events. 
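To make the time-frame discussion above concrete, here is a minimal sketch (not from the original article) of how a team might record incident timestamps and derive simple duration metrics such as time-to-contain and time-to-resolve; the field names and example values are illustrative assumptions, not a standard.

```python
from datetime import datetime, timedelta

# Hypothetical timeline for a single incident; the timestamps are made up.
incident = {
    "detected":  datetime(2024, 3, 1, 9, 15),
    "contained": datetime(2024, 3, 1, 13, 40),
    "resolved":  datetime(2024, 3, 3, 17, 0),
}

def duration(start: datetime, end: datetime) -> timedelta:
    """Elapsed time between two points in the incident timeline."""
    return end - start

time_to_contain = duration(incident["detected"], incident["contained"])
time_to_resolve = duration(incident["detected"], incident["resolved"])

print(f"Time to contain: {time_to_contain}")   # 4:25:00
print(f"Time to resolve: {time_to_resolve}")   # 2 days, 7:45:00
```

Tracking these durations across many incidents is one simple way to evaluate whether response times are improving.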
Security Event vs Incident: Real-life Examples In cybersecurity, incidents can take various forms and have serious consequences. Let’s look at some real-life examples to understand their impact on organizations and individuals. One common type of incident is a cybersecurity breach. This occurs when unauthorized individuals gain access to sensitive information or systems. For instance, in 2017, Equifax, one of the largest credit reporting agencies, experienced a massive data breach that exposed the personal information of approximately 147 million people. The breach had far-reaching consequences, leading to financial losses for individuals and damaging the company’s reputation. Another example is ransomware attacks, where malicious software encrypts files on a victim’s computer or network until a ransom is paid. In 2020, the University of California San Francisco fell victim to a ransomware attack. It paid $1.14 million to regain access to its encrypted data. Such attacks not only disrupt operations but also pose significant financial risks. Different types of incidents require tailored responses. When dealing with a cybersecurity breach, organizations must act swiftly to contain it, investigate its extent, and mitigate further damage. This involves isolating affected systems, patching vulnerabilities, and notifying affected individuals about potential risks. Learning from incidents is crucial for improving security measures. By analyzing the root causes and identifying weaknesses in their systems or processes, organizations can implement preventive measures to avoid similar incidents in the future. This includes conducting regular security audits, implementing strong authentication protocols, and training employees on cybersecurity best practices. Security Event Illustrated While incidents often bring negative connotations, events also play an important role in maintaining security posture. Let’s explore some examples of events that can impact security. One such event is software updates. Regularly updating software ensures that known vulnerabilities are patched and new features are implemented. For example, operating system updates often include security patches that address known vulnerabilities. Failing to apply these updates can leave systems exposed to potential attacks. Another important event is data backups. Regularly backing up data is crucial for protecting against data loss or ransomware attacks. For instance, if a company’s systems are infected with ransomware, having recent backups allows them to restore their data without paying the ransom. On the other hand, failing to perform regular backups can permanently lose valuable information. Effective event management is key to maximizing the benefits and minimizing the risks associated with events. This involves carefully planning and coordinating software updates, system maintenance, and data backups. Organizations should establish clear procedures and schedules for these events, ensuring they are conducted regularly without disrupting critical operations. When handling incidents, it is crucial to respond promptly and effectively. Responding promptly allows you to minimize the impact of the incident and prevent further damage. Addressing the incident quickly can mitigate potential risks and protect your organization’s assets. To handle incidents efficiently, organizations often rely on incident response teams. These teams are composed of experts who are well-versed in identifying, analyzing, and resolving security incidents. 
They coordinate the response efforts and ensure the incident is contained and resolved as swiftly as possible. The key steps involved in incident response include identification, containment, eradication, recovery, and lessons learned. First, you need to identify that an incident has occurred by monitoring your systems for any signs of compromise or unauthorized activity. Once identified, you must contain the incident to prevent it from spreading further. This involves isolating affected systems or networks. After containment, the next step is eradication. Here, you focus on removing the incident’s root cause and eliminating any malicious presence from your environment. Once eradicated, you can proceed with recovery, which involves restoring affected systems or data to their normal state. Lastly, analyzing the incident and learning from it is essential. This step is known as “lessons learned.” By analyzing what went wrong during the incident and identifying areas for improvement, you can enhance your organization’s security posture and prevent similar incidents. Managing Security Events Managing events requires careful planning, execution, and evaluation to ensure successful outcomes. Meticulous event management is critical in maintaining security, whether it’s a conference, seminar, or company gathering. Planning an event involves considering various factors such as venue selection, attendee registration process, and logistical arrangements. You must assess potential security risks and implement appropriate measures to mitigate them. This may include hiring security personnel, implementing access control measures, or conducting background checks on attendees. During the execution phase, it’s crucial to have a well-defined chain of command and clear communication channels. Event managers should be prepared to handle unexpected situations promptly and efficiently. Regularly monitoring the event for any security incidents or suspicious activities is essential to maintaining a secure environment. After the event concludes, evaluation is key to identifying areas for improvement. By gathering participant feedback and analyzing the event’s overall success, you can refine your event management strategies for future events. This includes assessing security protocols, identifying any vulnerabilities exposed during the event, and implementing necessary changes to enhance security measures. Security Event vs Incident: Risk Management Role Assessing Incident Risk Regarding risk management, incidents are crucial in identifying vulnerabilities and potential threats to an organization’s security. By assessing the risks associated with incidents, you can gain valuable insights into the weak points of your security infrastructure. This assessment allows you to understand the impact of incidents on your organization and its operations. Assessing incident risk involves evaluating the likelihood of an incident occurring and determining its potential consequences on your organization’s security. It requires a comprehensive understanding of your systems, processes, and assets and any potential threats that could exploit vulnerabilities. By conducting thorough assessments, you can identify areas where additional controls or measures are needed to mitigate risks. It is important to note that incident risk assessment should not be a one-time activity. As technology evolves and new threats emerge, your organization’s risk landscape will also change. 
Therefore, regular assessments are necessary to avoid potential risks and ensure ongoing protection. Managing Security Event Risk Managing event risk is another important aspect of risk management. Events refer to planned activities or organizational occurrences that pose risks or challenges. These events range from large-scale conferences or product launches to routine maintenance tasks or system updates. The process of managing event risk involves identifying, assessing, and mitigating the risks associated with these events. It begins with a thorough analysis of the potential risks involved in each event. Understanding the specific risks associated with an event, you can develop strategies to minimize or eliminate them. Event risk management aims to strike a balance between reaping the benefits of an event and ensuring that any associated risks are effectively managed. It involves implementing measures such as contingency plans, security protocols, and communication strategies to address potential issues before they escalate. Security Event vs Incident: Strategy Overviews When it comes to incident responses, there are several strategies that your teams and operations should follow. First and foremost, documentation plays a crucial role in effectively responding to incidents. Documenting all the incident details, including the time, date, and any relevant information, is important. This documentation is valuable for future analysis and helps identify patterns or trends. Another key strategy is analysis. Once an incident occurs, it is essential to analyze its root cause and impact. This analysis helps in understanding how the incident happened and what steps can be taken to prevent similar incidents from occurring in the future. By identifying vulnerabilities or weaknesses in your organization’s systems or processes, you can make necessary improvements to enhance security. Notifying stakeholders is also an important aspect of incident response. Your organization needs to promptly inform relevant stakeholders about the incident. This includes internal teams, management, clients, and other parties affected by the incident. You can ensure transparency and maintain trust with your stakeholders by keeping everyone informed. In addition to documentation, analysis, and stakeholder notification, containing incidents is another critical strategy. It involves taking immediate action to minimize the impact of the incident and prevent further damage. This may include isolating affected systems or networks, deactivating compromised accounts, or implementing temporary measures to mitigate risks. Swift and effective incident responses are vital because they help prevent recurrence. By addressing incidents promptly and thoroughly, you can identify vulnerabilities or gaps in your security measures and take appropriate actions to strengthen them. This proactive approach reduces the likelihood of similar incidents happening again in the future. Security Event Techniques Various techniques can be employed for successful outcomes when managing events within your organization’s security framework. The first step is planning. Meticulous planning ensures that all aspects of the event, including security considerations, are considered. This involves identifying potential risks and developing strategies to mitigate them. Another important technique is evaluation. After the event, it is crucial to evaluate its effectiveness and identify areas for improvement. 
This evaluation helps assess the success of security measures implemented during the event and provides valuable insights for future events. Event techniques also play a significant role in minimizing security risks. You can reduce the chances of unauthorized access or breaches during events by implementing appropriate security measures, such as access control systems, surveillance cameras, or encryption protocols. These techniques help safeguard sensitive information and protect your organization’s assets. Security Event vs Incident: Post-Incident Focus After experiencing a security incident, it is crucial to take immediate action to recover from its impact. Recovery steps involve a series of measures to restore normal operations and minimize any damage caused by the incident. The first step in the recovery process is to identify and isolate the affected systems or areas. By isolating the compromised components, you can prevent further spread of the incident and limit its impact. This may involve disconnecting affected devices from the network or temporarily shutting down certain services. Once isolation is complete, remediation measures come into play. These measures focus on fixing vulnerabilities or weaknesses that allowed the incident to occur in the first place. It may involve patching software, updating security configurations, or implementing additional security controls. In addition to remediation, data recovery is another critical aspect of post-incident recovery. Backups are essential for restoring lost or corrupted data. Regularly backing up your important files ensures that you have a copy available for restoration in case of an incident. While recovering from a security incident, it is essential to maintain continuous monitoring and evaluation. This involves closely observing your systems and networks for signs of lingering threats or potential vulnerabilities that could lead to future incidents. Ongoing monitoring allows you to detect and respond promptly if any suspicious activities are detected. Preventing security incidents should be a top priority for any organization. Here are some tips to help you strengthen your security posture and reduce the risk of incidents: - Conduct security assessments: Regular assessments help identify vulnerabilities and weaknesses in your systems and networks. By regularly assessing your infrastructure’s security, you can proactively address potential risks before they become incidents. - Invest in employee training: Your employees play a crucial role in maintaining security. Educate them about best practices, such as using strong passwords, recognizing phishing attempts, and being cautious when accessing sensitive information. Well-trained employees can act as an additional line of defense against security threats. - Implement strong access controls: Limiting access to sensitive data and systems is vital for preventing unauthorized access. Ensure that only authorized individuals have the necessary permissions to access critical resources. Regularly review and update user privileges to align with changing roles and responsibilities. - Keep your software and systems current: Regularly applying security patches and updates is crucial for addressing known vulnerabilities. Outdated software can be easy targets for attackers. Enable automatic updates whenever possible to protect you against the latest threats. 
- Establish a security incident response plan: Having a well-defined plan in place allows you to respond effectively in the event of an incident. The plan should outline roles and responsibilities, communication channels, and steps to follow during incidents. Security Event vs Incident: Frequently Asked Questions What are the basic differences between a security event and an incident? A security event refers to any occurrence that has the potential to compromise the confidentiality, integrity, or availability of information. On the other hand, a security incident is an actual violation or breach of security policies, resulting in unauthorized access or damage to systems or data. How do security events and incidents differ in terms of context? While a security event can be benign and not necessarily indicate a breach, a security incident always involves an actual violation of security measures. Contextually, events can serve as indicators that help identify potential threats. At the same time, incidents demand immediate attention and response to mitigate their impact. What Is an Alert? An alert is a notification of a cybersecurity event. (Or, sometimes, a series of events.) You can work with your security provider to determine which types of events you want to monitor with alerts. Depending on your Security Information and Event Management (SIEM) software and support, you can send alerts to any relevant parties who need to take action. What is the importance of impact analysis in distinguishing between events and incidents? Impact analysis helps assess the consequences of a security event or incident. It aids in determining whether an event qualifies as an incident based on its severity and potential harm. Organizations can prioritize their response efforts and allocate resources more effectively by evaluating the impact. How does the time frame play a role in differentiating between security events and incidents? Time frame insights are crucial when distinguishing between events and incidents. A security event may occur without immediate detection or response. In contrast, an incident requires prompt action due to its active nature. Analyzing the time frame helps establish appropriate response protocols for each situation. How should organizations approach response strategies for handling events versus incidents? Organizations must have predefined response strategies for both events and incidents. Events require monitoring, analysis, and investigation to determine their potential impact. On the other hand, incidents demand immediate response actions, such as containment, eradication, and recovery efforts. A well-defined incident response plan is crucial for effective incident management. What role does risk management play in distinguishing between events and incidents? Risk management is vital in differentiating events and incidents by assessing the potential risks associated with each occurrence. It helps organizations prioritize their focus based on the likelihood and potential impact of security events turning into incidents. Risk management enables proactive measures to prevent incidents and minimize their impact if they occur. What should organizations focus on after an incident occurs? After an incident occurs, organizations should shift their focus toward post-incident activities. 
These include performing a thorough investigation to identify the root cause, evaluating the effectiveness of response strategies, implementing necessary improvements, conducting lessons learned sessions, and updating security policies or controls accordingly. The aim is to enhance future incident prevention and response capabilities.
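As a closing illustration, here is one simplistic way to encode the event-versus-incident distinction described throughout this article in code; the field names and classification logic are invented for the example and would differ in any real triage process.

```python
from dataclasses import dataclass

@dataclass
class SecurityOccurrence:
    description: str
    confirmed_breach: bool   # was a security policy actually violated?
    caused_harm: bool        # did it damage data, systems, or operations?

def classify(occurrence: SecurityOccurrence) -> str:
    """Label an occurrence as an 'incident' or an 'event' per the article's definitions."""
    if occurrence.confirmed_breach or occurrence.caused_harm:
        return "incident"   # demands immediate response
    return "event"          # monitor, analyze, and investigate

print(classify(SecurityOccurrence("failed login attempt", False, False)))                 # event
print(classify(SecurityOccurrence("ransomware detected on a file server", True, True)))   # incident
```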
<urn:uuid:f002e7ba-e723-4840-b8d0-b44d2de90420>
CC-MAIN-2024-38
https://www.businesstechweekly.com/cybersecurity/risk-management/security-event-vs-incident-understanding-the-differences-between-a-security-event-vs-an-incident/
2024-09-13T03:58:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00127.warc.gz
en
0.940064
5,444
2.578125
3
SPF Record Check To Reinforce SPF Records And Prevent Domain Impersonation For Spam Emails and Phishing SPF record check can play a significant role in thwarting malicious attempts to impersonate. With many of the emails sent globally marked as ‘spam,’ protection from email spoofing and spam is no longer an addition to security practices but a necessity. SPF or Sender Policy Framework is a security measure that enables domains to authorize a selected list of servers to send emails on their behalf, thereby reducing the number of spam emails being sent under their domain name. And an SPF record check can ensure the record’s accuracy, thus making it foolproof. Table of Contents What Is SPF Record Check? SPF is a security provision where an SPF record containing a list of all authorized email servers authorized to send emails on an entity’s behalf (that is, using their domain name) is published in the Domain Name Service (DNS). The policy is set in the form of an SPF record that follows a specific format. An accuracy check known as SPF record check is necessary to ensure that the SPF record syntax is intact, free from errors and discrepancies, and has not missed listing any of the authorized servers. It thus makes the entire SPF policy perfect and more efficient. It effectively prevents anyone from sending spoof emails for spamming and phishing using the domain of an organization. Why Have An SPF Record Check? A lower SPF grade indicates an organization’s vulnerability to malicious actors using their domain name for spoofing attacks. It comes with its share of defamation and financial loss for the organization. In 2016, 54.8% of organizations in the U.S had an SPF grade of C or lower. It indicates a large quantity of spoofing emails that organizations are likely to receive because of the Sender Policy Framework’s poor execution by domains. Functions Of SPF Checkers SPF check operates as a diagnostic tool that validates the Sender Policy Framework. In simple terms, the SPF checker locates the SPF record for the required domain name and displays it (provided the domain has published its SPF Record). A series of diagnostic tests are then run for the record, reflecting the result and highlighting any errors that may be found. It will help to make the SPF record error-free and more efficient. All such actions of SPF checkers are performed to accomplish its two broad functions described below The adversaries know precisely how to enhance the credibility of their fake emails. They do it by impersonating renowned and established organizations. Although a highly overused trick, such phishing scams have proven to be significant time and again. Spoofing is when the adversaries replicate a domain and send out emails to people in its name, usually for phishing and spamming. The SPF checker assists the email receiver’s server to identify whether the received email is actually from the organization’s domain it claims. By verifying the authenticity of the mail server, the SPF record check promotes spoofing prevention. Keep Emails From Being Marked As Spam This function of SPF is a corollary to the one listed above. In preventing fake emails from being marked as legitimate, the SPF record check also ensures that the emails sent from a domain (the real and genuine ones) do not get marked as ‘Spam’ by the recipient’s server. Here becomes evident the importance of using SPF for an organization! 
In case an organization or domain doesn’t use SPF, then the chances that their emails will go into the spam folder of the targeted recipients become manifold! How To Create SPF Records? An SPF record is essential to protect an organization, its business, and its customers from malicious interventions. The processes involved in creating an SPF record are listed here. - Enlist IP Addresses used to send emails. - Identify the sending domains. - Create the SPF Record - Publish the SPF Record on DNS What Else Works With SPF? Though SPF is powerful, it realizes its full potential when it operates in unison with two other email authentication techniques – DKIM and DMARC. DKIM Record Check DomainKeys Identified Mail or DKIM is used to authenticate an email source with a digital signature. The DKIM record check allows a recipient to check whether an email was sent from the domain owner whose name reflects in the “From” section. Just like an SPF Record Checker, a DKIM Analyzer tool also tests a domain name and tries to locate it in the published DKIM records. DMARC Record Check DMARC or ‘Domain-based Message Authentication, Reporting, and Conformance’ is the umbrella mechanism forming the basis of authentication checks like SPF and DKIM. As per RFC 7489 published by the IETF RFC Editor, DMARC record check enables an organization to forward domain-level policies and preferences for reporting, message validation, and disposition and permits it to take extreme measures against emails that defy authentication checks like SPF or DKIM. SPF Record Check Google Google Workspace or GSuite also provides the facility of keeping spam emails at bay. The SPF record check security feature of Google may be accessed and enabled from its Admin Help Center, where a step-wise guide has been provided for the same. Spam emails are a hassle to the receivers and a financial drain for the impersonated organization. Since publishing a list of verified IP addresses serves as a security shield for both the organization and its beneficiaries, it is wise to use the SPF protocol with an appropriate SPF record check. Join the thousands of organizations that use DuoCircle Find out how affordable it is for your organization today and be pleasantly surprised.
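To illustrate what an SPF record check does in practice, here is a rough sketch that fetches a domain’s TXT records and looks for a published SPF policy. It assumes the third-party dnspython package is available; the domain and the example policy string are placeholders, not DuoCircle’s actual tooling.

```python
# Requires the third-party 'dnspython' package: pip install dnspython
import dns.resolver

def find_spf_record(domain):
    """Return the domain's SPF policy string, or None if no record is published."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        text = b"".join(record.strings).decode()  # TXT data may be split into chunks
        if text.startswith("v=spf1"):
            return text
    return None

# A policy such as "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com -all"
# authorizes one IP range plus another domain's senders and rejects everything else.
print(find_spf_record("example.com"))
```

A real SPF checker goes further, validating syntax, lookup limits, and the completeness of the authorized-sender list, but the lookup above is the starting point.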
<urn:uuid:3b7cdc31-4d7e-446c-868b-aa593f0df834>
CC-MAIN-2024-38
https://www.duocircle.com/content/spf-record-check
2024-09-13T06:02:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00127.warc.gz
en
0.929047
1,209
2.640625
3
(SciTechDaily) Generating quantum states normally requires a strong interaction between the systems involved, such as between several atoms or nanostructures. Until now, however, sufficiently strong interactions were limited to short distances. Typically, two systems had to be placed close to each other on the same chip at low temperatures or in the same vacuum chamber, where they interact via electrostatic or magnetostatic forces. Coupling them across larger distances, however, is required for many applications such as quantum networks or certain types of sensors. A team of physicists, led by Professor Philipp Treutlein from the Department of Physics at the University of Basel and the Swiss Nanoscience Institute (SNI), has now succeeded for the first time in creating strong coupling between two systems over a greater distance across a room temperature environment. In their experiment, the researchers used laser light to couple the vibrations of a 100 nanometer thin membrane to the motion of the spin of atoms over a distance of one meter. As a result, each vibration of the membrane sets the spin of the atoms in motion and vice versa. The experiment is based on a concept that the researchers developed together with the theoretical physicist Professor Klemens Hammerer from the University of Hanover. It involves sending a beam of laser light back and forth between the systems. “The light then behaves like a mechanical spring stretched between the atoms and the membrane, and transmits forces between the two,” explains Dr. Thomas Karg, who carried out the experiments as part of his doctoral thesis at the University of Basel
<urn:uuid:7e4c12c7-46ac-4c64-9caf-4d1cf4709c12>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/laser-loop-acts-as-a-mechanical-spring-to-couple-quantum-systems-over-a-distance/amp/
2024-09-13T05:51:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00127.warc.gz
en
0.9481
320
3.609375
4
(Phys.org) Quantum researchers at the University of Bristol have dramatically reduced the time to simulate an optical quantum computer, with a speedup of around one billion over previous approaches. Experimental work from the University of Science and Technology of China (USTC) was the first to claim quantum advantage using photons—particles of light, in a protocol called “Gaussian Boson Sampling” (GBS). Their paper claimed that the experiment, performed in 200 seconds, would take 600 million years to simulate on the world’s largest supercomputer. Taking up the challenge, a team at the University of Bristol’s Quantum Engineering Technology Labs (QET Labs), in collaboration with researchers at Imperial College London and Hewlett Packard Enterprise, have reduced this simulation time down to just a few months, a speedup factor of around one billion. Their paper “The boundary for quantum advantage in Gaussian boson sampling”, published today in the journal Science Advances, comes at a time when other experimental approaches claiming quantum advantage, such as from the quantum computing team at Google, are also leading to improved classical algorithms for simulating these experiments. The team’s methods do not exploit any errors in the experiment and so one next step for the research is to combine their new methods with techniques that exploit the imperfections of the real-world experiment. This would further speed up simulation time and build a greater understanding of which areas require improvements. Anthony Laing, co-Director of QET Labs and an author on the work, said: “As we develop more sophisticated quantum computing technologies, this type of work is vital. It helps us understand the bar we must get over before we can begin to solve problems in clean energy and healthcare that affect us all. The work is a great example of teamwork and collaboration among researchers in the UK Quantum Computing and Simulation Hub and Hewlett Packard Enterprise.”
<urn:uuid:a235e27b-b2df-45c1-b806-a23f3e0e2499>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/research-team-at-university-of-bristol-chase-down-advantage-in-quantum-race/amp/
2024-09-13T05:01:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00127.warc.gz
en
0.924304
395
3.640625
4
Data Protection: Safeguarding Your Data on Wi-Fi Like many modern technologies, it’s a wonder we ever got by without Wi-Fi. Before wireless connectivity, we were tethered to one spot with our computers, we couldn’t use our phones to access the Internet, and we certainly weren’t turning our lights on and off with an app. Business — and the rest of life — has gotten a lot more convenient thanks to Wi-Fi. Watch out for Wi-Fi hacking Unfortunately, it’s made things a lot more convenient for cybercriminals as well. There are a number of ways hackers might be able to harm your business if you don’t have the right protections in place. Common Wi-Fi exploitations include: - Piggybacking and wardriving: In both of these situations, someone nearby hops on your wireless network without you knowing. Piggybacking is done by someone who likely lives or works close by — wardriving is when someone drives around with a wireless device seeking out vulnerable wireless networks. - Evil twin: A hacker sets up a public Wi-Fi network that mimics the name of an actual network nearby. For instance, a hair salon might offer a public network called Salon-Name. A hacker could set up a wireless network called Salon-Name-Customers, hoping that some people would assume that’s the salon’s Wi-Fi so they can then access the users’ devices. - Brute force attack: A hacker tries to guess your WiFi password, which can be rather easy compared to guessing personal passwords because businesses want to make it convenient for customers to access their Wi-Fi. When hackers do gain access to your network, it becomes their playground from there. They can access and control devices from your phone to your smart thermostat to the router itself. They can even pursue illegal activities using your network — potentially making it look like someone in your business is the criminal. To avoid problems like these, you’ve got to start with your password. Implement Wi-Fi password best practices Whether you have a wireless network for employee use or customer use, it’s an absolute must for you to protect it with a strong password. Hackers make easy work of passwords for devices — and they can do it for your Wi-Fi network too. Creating a hard-to-crack password will protect your business. While employees and customers might complain that it’s cumbersome to enter a lengthy password, it’s essential to your wireless safety. Follow these guidelines for safer Wi-Fi network passwords: - Make it complex: Try a series of words, inserting symbols and capital letters to increase the time it would take to brute force the password — like Apple-purPle!jumP$. An 18-character password with a combination of numbers, uppercase letters, lowercase letters and symbols will take hackers 26 trillion years to guess. (79 billion years if they use ChatGPT.) - Change it often: Change the password at least twice per year, if not more. Start fresh each time — don’t change just one character! - Don’t broadcast it: All your hard work of creating strong passwords and diligently changing them will be lost if you post signs around your business with the login information available for anyone to see. Get help creating a protected Wi-Fi network If building a safe wireless network isn’t something you want on your to-do list, the Cenetric team can help. We have the expertise to make your wireless access productive and protected at the same time. Tell us about your needs and we’ll share how we can help.
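As an illustration of the “series of words with symbols and capitals” advice above, here is a small sketch using Python’s standard secrets module; the word list is a stand-in — a real generator would draw from a much larger dictionary.

```python
import secrets

# Tiny illustrative word list; use a large dictionary (thousands of words) in practice.
WORDS = ["apple", "purple", "jump", "river", "candle", "orbit", "maple", "quartz"]
SYMBOLS = "!-$#%"

def make_passphrase(num_words: int = 4) -> str:
    """Join randomly chosen words with symbols, capitalizing some, e.g. Apple-purPle!jumP$."""
    parts = []
    for _ in range(num_words):
        word = secrets.choice(WORDS)
        if secrets.randbelow(2):          # randomly capitalize about half the words
            word = word.capitalize()
        parts.append(word)
        parts.append(secrets.choice(SYMBOLS))
    return "".join(parts)

print(make_passphrase())   # e.g. Maple!orbit-Candle#river$
```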
<urn:uuid:58afa4f6-def7-41b5-acf1-3b992052a534>
CC-MAIN-2024-38
https://cenetric.com/data-protection-safeguarding-your-data-on-wi-fi/
2024-09-14T09:35:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.22/warc/CC-MAIN-20240914093425-20240914123425-00027.warc.gz
en
0.916205
776
2.578125
3
Ransomware and the Damage It Causes

Ransomware is malicious software that, when it attacks a system, encrypts all of its files and data. The system is effectively held hostage, and the user cannot access it unless a ransom is paid. The cyber criminals behind the attack then have the power to delete or corrupt all of your important data, files, and software if the demanded amount is not paid or is delayed. Even if the ransom is paid, the decryption keys they provide guarantee neither the complete safety of your data nor a full ransomware recovery.

Ransomware was developed to extort organizations, campaigns, and businesses, and the ransomware market thrives on such software and phishing attacks. These cybercriminals have become sophisticated in their techniques and have extended their reach to smartphones as well. Any online device can be attacked and will only be restored if the money is handed over. Companies are normally advised not to pay the ransom, but instead to invest in strong cybersecurity systems, tools, software, and ransomware recovery, since the cost of recovering systems and resuming work is far greater than the cost of a good, well-tested security plan. According to Bitdefender, ransomware payments hit a record of $2 billion in 2017, which gives a sense of how severe and widespread these attacks have become. These cyber criminals also run targeted campaigns: in one case they attacked files that video gamers cherished and saved on local or external drives, such as downloaded games, maps, and other important information.

Measures to Take After a Ransomware Attack

Note that putting proper security plans in place doesn’t guarantee 100% safety, but you can take extra precautions. These six steps can be taken after an attack:
- Identify the kind of attack – there are two kinds: a screen-locking attack and an encryption attack. Identify what you are dealing with and then see what ransomware recovery options you have at your disposal. Check how many files you can still access and save those immediately before the infection spreads.
- Disconnect immediately – since the attack spreads over the network, disconnect the affected systems as soon as it is detected so it cannot reach other systems via a shared network.
- Note details of the ransom demanded – take a picture of the ransomware and the message displayed by the attackers, and report it to the respective authorities.
- Activate your response plan – follow the policies you have set for a data breach, which may include notifying stakeholders along with other necessary measures.
- Research – study the ransomware recovery process and how the attack can be dealt with, whether any security software can handle such a breach and recover the data, and whether there is a way to decrypt the files without giving in to the demands. If all research fails, turn to the cybersecurity authorities.
- When all else fails, take the hit and recover from backups (see the sketch below) – make sure you have an antivirus installed before you recover from backups.
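The last step above — recovering from backups — assumes those backups haven’t been tampered with. As a hedged illustration (not part of the original article), here is a minimal sketch of verifying backup files against previously recorded SHA-256 hashes before restoring them; the file paths and manifest format are invented for the example.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large backups don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backups(manifest_path: Path) -> bool:
    """Compare each backup file against the hash recorded when the backup was made."""
    manifest = json.loads(manifest_path.read_text())  # {"files": {"path": "expected_hash", ...}}
    ok = True
    for name, expected in manifest["files"].items():
        if sha256_of(Path(name)) != expected:
            print(f"MISMATCH: {name} may be corrupted or tampered with")
            ok = False
    return ok

# verify_backups(Path("backup_manifest.json"))  # run before restoring anything
```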
<urn:uuid:9d11f3ac-b8f2-4347-95e0-6b26f48f3508>
CC-MAIN-2024-38
https://university.monstercloud.com/news/ransomware-infection/
2024-09-14T11:12:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.22/warc/CC-MAIN-20240914093425-20240914123425-00027.warc.gz
en
0.94355
627
2.75
3
Why are the characters in the story "Trifles" feeling upset? 1. What is the significance of the broken birdcage in relation to the characters' emotions? The text implies that Ruth and other characters feel upset due to emotional dissatisfaction stemming from their environments and relationships. The characters in the story "Trifles" are feeling upset due to emotional dissatisfaction stemming from the environment and relationships presented in the narrative. The broken birdcage symbolizes neglect and a lack of care, reflecting the overall atmosphere of unhappiness and discontent among the characters. In Susan Glaspell's play "Trifles," the characters, especially Ruth, express feelings of unhappiness and dissatisfaction. The broken birdcage mentioned represents more than just a physical object; it serves as a metaphor for the neglect and lack of care experienced by the characters in the story. This neglect is evident in the untended life of Minnie Foster, whose isolation and solitude depict a life devoid of warmth and companionship. The emotional weight of the broken birdcage and the untended life it represents contributes to the overall atmosphere of unhappiness in the narrative. Ruth's reaction upon returning home may be attributed to the realization of this emotional dissatisfaction in her own life and surroundings. The absence of care and understanding from her environment and relationships likely make her feel isolated and unhappy. Furthermore, the broken birdcage symbolizes the larger themes of confinement and suffocation experienced by the characters, particularly women, in the patriarchal society portrayed in the play. The lack of agency and autonomy, coupled with the sense of being trapped in oppressive conditions, adds to the characters' feelings of emotional distress and unhappiness. Overall, the characters in "Trifles" feel upset not only because of the broken birdcage itself but also because of what it represents in terms of neglect, lack of care, and emotional dissatisfaction in their lives. The impact of environment and relationships on individual well-being is a central theme in the play, highlighting the profound effect that external factors have on one's emotional state.
<urn:uuid:cca2e6a5-5ced-44e9-9e51-975b2ae20cd3>
CC-MAIN-2024-38
https://bsimm2.com/english/the-unhappy-residents-of-trifles.html
2024-09-15T13:54:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00827.warc.gz
en
0.933651
414
3.453125
3
Abu Dhabi’s Sustainable Aquaculture Initiative: A New Approach to Marine Conservation and Economic Growth Abu Dhabi is launching a sustainable aquaculture project using innovative techniques focused on traditional fish farming. The initiative uses floating sea cages and underwater cameras instead of AI-driven systems. The project aims to enhance food security, reduce pressure on natural fish stocks, and support economic growth. Key challenges include balancing production efficiency with environmental sustainability and addressing community concerns. The initiative could face environmental risks and potential resistance from local communities. Main AI News: Abu Dhabi is setting a new standard in sustainable aquaculture practices with a groundbreaking initiative that moves away from traditional reliance on artificial intelligence. This project focuses on revitalizing conventional fish farming methods with a commitment to environmental sustainability. Located off the southeastern coast of Delma Island in the Dhafra region, the initiative is dedicated to conducting scientific research on local fish species using innovative floating sea cages. The main objective is to develop environmentally sound procedures for cultivating marine life, aiming to reduce the strain on natural fish stocks, combat the effects of climate change, and enhance food security in response to the growing demand for seafood. What distinguishes this project from typical aquaculture ventures is its intentional decision to forgo advanced AI-driven data collection systems. Instead, the initiative utilizes underwater cameras to monitor fish behavior and optimize feeding strategies. A sophisticated data transmission, storage, and analysis platform further ensures streamlined and effective operations. Aligned with government priorities to promote the development of marine aquaculture, this project highlights the significant economic and environmental benefits of sustainable fish farming practices. The initiative exemplifies a forward-thinking approach to marine conservation and economic growth through strategic site selection guided by hydrodynamic modeling and comprehensive surveys. However, the project also faces challenges, particularly in balancing production efficiency with environmental sustainability. Concerns about floating sea cages and their potential impact on marine ecosystems and local fishing communities may arise. On the positive side, this project boosts sustainability by emphasizing traditional fish farming methods while integrating innovative techniques. It strengthens food security by addressing the rising demand for seafood in Abu Dhabi, easing the pressure on natural fish stocks. Furthermore, the development of marine aquaculture contributes to economic diversification and job creation in the region. Nonetheless, potential drawbacks include environmental risks, such as water pollution and ecosystem disruption, if fish farming activities are not managed effectively. Local fishing communities and environmental groups concerned about the project’s impact on their livelihoods and the surrounding environment may also resist. This initiative represents a significant shift in the aquaculture market, highlighting a growing trend toward sustainable practices prioritizing environmental stewardship and long-term resource management. 
For the market, this means increased emphasis on innovation in traditional sectors, potential growth in eco-friendly technologies, and new opportunities for economic diversification in regions looking to balance environmental and economic interests. However, success will depend on effectively managing environmental impacts and engaging with local communities to mitigate resistance.
<urn:uuid:df05081d-afe0-4a08-8591-5d30f7df4d3f>
CC-MAIN-2024-38
https://multiplatform.ai/abu-dhabis-sustainable-aquaculture-initiative-a-new-approach-to-marine-conservation-and-economic-growth/
2024-09-16T18:38:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00727.warc.gz
en
0.908312
604
2.578125
3
As today’s devices become increasingly interconnected, data security becomes a top priority. One critical component is a firewall, which filters incoming and outgoing network traffic. Firewalls may consist of hardware with router capabilities or software (like the built-in firewall in Windows); routers forward packets based on their destination IP addresses, while firewalls inspect each packet’s content.

What is a Router?

Routers transmit data packets between networks, enabling users to share files across the Internet. They also act as traffic cops, selecting the routes that data takes between computers. Without routers, businesses could not communicate or collaborate on projects effectively. Routers are networking devices that operate at the third layer of the OSI model. They allow network devices like computers, printers, scanners and hard drives to communicate with one another as well as connect to the internet, and they can act as firewalls and content filters that protect against security threats and prevent employees from downloading malicious software. Most routers include a web-based management program for performing administrative activities, including changing passwords, activating security protocols and setting port forwarding rules. Some models even come equipped with family-friendly features like content blocking or timers to limit internet use, which can be enabled on individual devices or globally across your network. In addition, wireless access points transmit WiFi signals throughout your network.

Features of Router

Routers are network devices used to transmit data between multiple networks. Using ports, they filter incoming information according to its type (e.g., HTTP traffic commonly uses port 80, while outgoing email over SMTP typically uses port 25) before routing packets to the appropriate applications. Routers can act as firewalls to safeguard against malicious attacks and data leakage, and often come equipped with parental controls that restrict internet access for children. However, it’s important to remember that routers only provide limited protection against outside threats. Many routers feature an internal wireless access point that converts Ethernet data to radio waves broadcast across your Wi-Fi network, making it possible for multiple devices to connect at once and improving overall productivity. Some routers even feature Quality of Service (QoS) options that prioritize certain types of data over others – much like carpool lanes give certain vehicles priority on a highway – improving multimedia performance for videoconferencing or online gaming.

Types of Router

There are various kinds of routers, from wired models to wireless ones and those featuring different ports, such as Fast Ethernet or gigabit, for high-speed network access. There are also dual-WAN options that offer redundancy in case one connection goes down. Routers operate at the network layer (Layer 3 of the OSI model) and identify data packets’ IP addresses by inspecting their headers and comparing this information against a routing table to determine which path best forwards them. Routers connect large networks, like WANs. Their main function is routing data across the Internet – such as emails, websites, or video files – from source to destination.
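The routing-table lookup described above can be illustrated with a toy longest-prefix-match sketch; the prefixes and next hops are invented, and real routers perform this lookup in specialized hardware rather than Python.

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next hop
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"):   "192.168.1.254",
    ipaddress.ip_network("10.20.0.0/16"): "192.168.1.253",
    ipaddress.ip_network("0.0.0.0/0"):    "192.168.1.1",   # default route
}

def next_hop(destination: str) -> str:
    """Pick the most specific (longest) prefix that contains the destination address."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.20.5.9"))   # 192.168.1.253 (the /16 wins over the /8 and the default)
print(next_hop("8.8.8.8"))     # 192.168.1.1   (only the default route matches)
```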
Routers also enable hardware devices like printers and smart security systems to connect to the internet; their maximum speeds may be stated in megabits per second, but keep in mind that actual speeds may be lower because the bandwidth is shared among all connected devices.

How Does a Router Work?

Routers serve a vital purpose: connecting computers and other devices to both the internet and each other. Many home routers also convert Ethernet signals into WiFi waves that can be broadcast over your home Wi-Fi network, and they often come equipped with security features that prevent outsiders from hijacking your Wi-Fi signals. To ensure packets reach their destinations, a router reads each packet’s header and consults its list of routes to determine the next stop, weighing factors such as destination location and travel time when choosing a path for each packet. A router is an essential piece of networking hardware in any home or small business with multiple computers or devices connected to the internet; without one, your devices wouldn’t be able to communicate with each other or access the web. Many routers also provide basic protection against outside threats like viruses, malware and spyware.

What is a Firewall?

Firewalls are an essential element of network security, offering critical defense against cyberattacks. Sitting at the entry points to private networks, they filter traffic based on pre-established security rules. Firewalls may also serve as DHCP relays, allowing devices to communicate across various network segments while still upholding security policies. When data arrives from the Internet and enters your network, a firewall inspects its header before deciding whether to forward or block it according to the rules you have set up. This process usually happens quickly enough that most people don’t even notice. Outgoing data poses a considerable security threat as well, including key loggers (which record passwords) and malicious macros (scripted commands that can crash applications). A firewall that filters outgoing traffic can help shield against these risks; however, you must keep its software current to protect against evolving cyberattacks.

Features of Firewall

Firewalls monitor data packets at the network level to block malware and suspicious activity and to quickly assess traffic for hidden infections. They may take either software or hardware form; software firewalls can be installed directly onto each machine, while hardware firewalls sit between your gateway and network – the latter may even come as cloud solutions that offer “firewall as a service”. Static packet-filtering firewalls operate at OSI layer 3 of the network stack and filter based on IP addresses, port numbers and packet protocols to prevent two networks from connecting directly without authorization. They cannot track ongoing connections; instead, they must evaluate each new packet on its own. Next-generation firewalls can monitor traffic at every level of the OSI model, including the application layer. They can recognize and block specific content and apply policies based on who is using an application or website, providing bandwidth control to prioritize certain apps and websites – particularly useful for VoIP phone systems, where voice and video calls suffer when traffic is not prioritized.

Types of Firewall

There are various kinds of firewalls to choose from.
Hardware firewalls sit between a network and the Internet, while software firewalls run on individual computers; firewalls may also be built into routers.

A packet-filtering firewall is the simplest form of firewall available. It checks information packets before they can reach the network, examining surface-level details like the destination IP address, protocol and port number to decide whether each packet may pass through to its intended target. If a packet is denied passage, it is dropped and never reaches its destination.

Proxy firewalls are another popular form of network protection, filtering traffic on behalf of the internal network and preventing hackers from connecting directly to internal computers or devices. Some proxy firewall appliances also act as DHCP servers that assign IP addresses to devices on the network – a handy feature for businesses sharing equipment like printers or wireless security cameras. More advanced protection may include network address translation or virtual private networks.

Difference Between Firewalls and Routers

Firewalls and routers both play key roles in network security, but their functions differ considerably. A router focuses on routing data packets between networks; a firewall prevents unauthorised access from the Internet into private systems. Put simply, a firewall filters traffic against security rules, while a basic router simply forwards packets without examining their contents.

Firewalls monitor both incoming and outgoing network traffic and may be hardware- or software-based. They work by inspecting each packet’s headers and contents against predefined rules before deciding whether to allow or block the connection. They can even be configured to block specific types of attacks, such as DDoS attacks or worms.

A firewall can be integrated into a router or purchased as a separate device; many consumer routers now come with built-in firewall protection. Low-end combined router/firewall devices may be adequate for homes and small businesses; however, dedicated firewalls offer deeper packet analysis capability.
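As an illustration of the packet-filtering model described above, here is a minimal Python sketch of a first-match rule engine with a default-deny policy. The rule set, addresses and field names are hypothetical; real firewalls also track connection state and operate much closer to the network interface.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

# Hypothetical rule set, evaluated top-down; the first matching rule wins.
RULES = [
    {"action": "allow", "protocol": "tcp", "dst_port": 80},    # HTTP
    {"action": "allow", "protocol": "tcp", "dst_port": 443},   # HTTPS
    {"action": "deny",  "protocol": "tcp", "dst_port": 23},    # Telnet
]
DEFAULT_ACTION = "deny"  # anything not explicitly allowed is blocked

def filter_packet(packet: Packet) -> str:
    for rule in RULES:
        if (rule["protocol"] == packet.protocol
                and rule["dst_port"] == packet.dst_port):
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet(Packet("203.0.113.5", "192.168.1.10", 443, "tcp")))  # allow
print(filter_packet(Packet("203.0.113.5", "192.168.1.10", 23, "tcp")))   # deny
```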
<urn:uuid:fa521818-18ee-48b1-84b3-4b647828b2d0>
CC-MAIN-2024-38
https://cybersguards.com/router-vs-firewall/
2024-09-18T01:54:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00627.warc.gz
en
0.945613
1,686
3.8125
4
In which three ways is an IPv6 header simpler than an IPv4 header? (Choose three.)

The correct answers are A, B, and C.

A) Unlike IPv4 headers, IPv6 headers have a fixed length: IPv6 headers are simpler than IPv4 headers because they have a fixed length of 40 bytes, while IPv4 headers vary in length from 20 to 60 bytes, depending on the options present. This fixed length simplifies packet processing and reduces the overhead of routing and forwarding packets.

B) IPv6 uses an extension header instead of the IPv4 Fragmentation field: IPv6 uses extension headers to carry information that was previously included in the IPv4 header. One example is fragmentation, which in IPv4 allowed packets to be split into smaller pieces for transmission over networks with smaller Maximum Transmission Units (MTUs). In IPv6, this function is provided by the Fragmentation Extension Header, which is only used when fragmentation is required.

C) IPv6 headers eliminate the IPv4 Checksum field: IPv6 headers eliminate the Checksum field that was present in IPv4 headers. In IPv4, the Checksum field was used to detect errors in the packet header. In IPv6, error detection is left to the upper-layer protocols, such as TCP and UDP, which have their own error-detection mechanisms. Eliminating the Checksum field simplifies packet processing and reduces overhead.

D) IPv6 headers use the Fragment Offset field in place of the IPv4 Fragmentation field: This statement is incorrect. In IPv6, the Fragmentation Extension Header takes the place of IPv4’s in-header fragmentation fields, and the Fragment Offset field is used in both IPv4 and IPv6 to indicate the position of a fragment within the original packet.

E) IPv6 headers use a smaller Option field size than IPv4 headers: This statement is incorrect. In fact, IPv6 headers have no Options field at all; the Options field in IPv4 headers was used for a variety of purposes, such as record route, timestamp, and security. In IPv6, this functionality is provided by extension headers.

F) IPv6 headers use a 4-bit TTL field, and IPv4 headers use an 8-bit TTL field: This statement is incorrect. In fact, IPv6 headers use an 8-bit Hop Limit field, which is similar to the TTL field in IPv4 headers. The Hop Limit field specifies the number of intermediate routers a packet can traverse before being discarded, to prevent packets from looping indefinitely in the network.
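A short Python sketch can make the fixed-versus-variable length point concrete. It assumes you already have the raw bytes of a packet (for example from a raw socket or a capture file); the field offsets follow the standard IPv4 and IPv6 header layouts.

```python
def ipv4_header_length(packet: bytes) -> int:
    # IPv4: the IHL field (low nibble of the first byte) gives the header
    # length in 32-bit words, so it varies from 20 to 60 bytes with options.
    ihl_words = packet[0] & 0x0F
    return ihl_words * 4

def ipv6_header_length(packet: bytes) -> int:
    # IPv6: the base header is always exactly 40 bytes; any options are
    # carried in separate extension headers chained after it.
    return 40

def ipv6_hop_limit(packet: bytes) -> int:
    # The 8-bit Hop Limit (byte 7 of the fixed IPv6 header) plays the same
    # role as IPv4's TTL field.
    return packet[7]
```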
<urn:uuid:910d6795-945c-48c5-8670-ba969f9b7cbe>
CC-MAIN-2024-38
https://www.exam-answer.com/ipv6-header-simpler-than-ipv4-header
2024-09-09T16:17:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00527.warc.gz
en
0.880109
558
3
3
Ethical Data Practices in the Age of AI and Advanced Analytics

In today's technology-driven world, businesses are leveraging vast amounts of data to gain valuable insights, make informed decisions, and enhance their overall performance. The advent of artificial intelligence (AI) and advanced analytics has further accelerated this process; however, with this power also comes the obligation to embrace ethical data practices. In this blog, we will explore the significance of ethical data practices in the age of AI and advanced analytics for maintaining customer trust and navigating the complex legal and regulatory landscape.

Protecting collected data with encryption, secure servers, and regular security updates during storage and transmission forms the foundation of ethical data practices. AI and advanced analytics raise additional ethical considerations around how that data is used. Adhering to these principles shows a commitment to privacy, data breach prevention, regulatory compliance, and building trust through responsible data management. Below are the fundamental pillars of ethical data practices.

Ensuring Data Security

"According to DataProt, 1 in 10 small businesses suffers a cyberattack yearly." This staggering statistic underscores the critical importance of implementing robust data security measures as the first fundamental aspect of ethical data practices. Organizations must adopt various strategies to safeguard against such devastating losses, including using encryption to protect data during storage and transmission, implementing secure servers with stringent access controls, and regularly updating security protocols. With data safely guarded, businesses can confidently focus on generating opportunities for growth and innovation without the disruption and anxiety posed by cyber threats.

Respecting Privacy

Businesses must ensure that individuals' personal information is collected with consent and used legally, fairly, and transparently. Additionally, organizations should establish procedures to anonymize or pseudonymize data whenever possible, minimizing the risk of re-identification and unauthorized access. Respecting privacy rights enables businesses to build trust and maintain positive relationships with their customers.

Addressing Bias and Fairness

"AI can help reduce bias, but it can also bake in and scale bias" – McKinsey. While AI and advanced analytics systems offer tremendous potential, their heavy reliance on data for training and decision-making poses a significant challenge. To champion fairness and counteract biases, businesses must take proactive steps in their AI implementations. One vital strategy is diversifying the training data, ensuring a more representative and inclusive dataset. Periodic audits serve as a safeguard against potential discriminatory practices and provide an opportunity to rectify unintended biases. Through these efforts, businesses enhance their reputation and foster a work environment that values diversity, inclusivity, and equal opportunities for all.

Transparency and Accountability

When businesses are transparent about their decision-making processes, they promote a culture of accountability throughout the organization. This commitment to transparency involves implementing algorithms that are not only efficient but also offer individuals meaningful insight into the rationale, significance, and potential outcomes of automated decisions.
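As a concrete illustration of the pseudonymization mentioned under the privacy pillar, here is a minimal Python sketch that replaces a direct identifier with a keyed hash before data is analyzed or shared. The record fields are made up, and the key handling is deliberately simplified: in practice the key would live in a secrets manager and be governed by the organization's data governance policies rather than generated in application code.

```python
import hashlib
import hmac
import secrets

# Illustrative only -- a real deployment would load this from a secrets manager.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a keyed hash.

    Using HMAC rather than a plain hash means that, without the key, the
    original value cannot be recovered by simply hashing candidate values.
    """
    digest = hmac.new(PSEUDONYM_KEY, identifier.lower().encode(), hashlib.sha256)
    return digest.hexdigest()

record = {"email": "jane.doe@example.com", "purchase_total": 42.50}
safe_record = {
    "customer_id": pseudonymize(record["email"]),  # stable, non-identifying key
    "purchase_total": record["purchase_total"],
}
print(safe_record)
```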
Data Governance and Responsible AI Ethical data practices are intrinsically linked to the implementation of comprehensive data governance. Data governance also involves the implementation of adequate controls to uphold data quality, ensuring that information is accurate, reliable, and fit for its intended purpose. A robust data governance framework enables organizations to effectively manage risks, maintain data integrity, and ensure compliance with regulations. Algorithmic transparency requires organizations to clarify how algorithms are designed, trained, and utilized- helping businesses build trust, reducing potential discrimination, and fostering accountability. This includes documenting the data sources used, disclosing any limitations or biases, and ensuring that individuals impacted by automated decisions have avenues for recourse or explanation. Ethical data practices are paramount in the era of AI and advanced analytics. Businesses prioritizing privacy, fairness, transparency, accountability, and consent can establish themselves as responsible data stewards, engender customer trust, and navigate the legal and regulatory landscape more effectively. By adopting these practices, organizations fulfill their ethical obligations and unlock the true potential of data responsibly and sustainably. To learn more about our AI, Analytics, and Data Governance services, reach out to us at email@example.com.
<urn:uuid:e7cd3a66-210f-41a7-9a0b-fadfcfd189e4>
CC-MAIN-2024-38
https://www.espire.com/blog/posts/ethical-data-practices-in-the-age-of-ai-and-advanced-analytics
2024-09-10T20:12:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651318.34/warc/CC-MAIN-20240910192923-20240910222923-00427.warc.gz
en
0.91652
811
2.890625
3
Have you heard? Antivirus is dead. Kaput. No longer relevant in today’s computer security world, where 315,000 new malicious files are detected every day.

When Brian Dye, senior vice president of information security for Symantec — the company that invented commercial antivirus software 25 years ago — told The Wall Street Journal a couple of weeks ago that antivirus “is dead,” loyal antivirus updaters across the world rightfully asked, “What does that mean?”

Let’s start by saying that while it makes a great headline, declaring that antivirus is dead is an exaggeration. Antivirus is still an important part of your computer security equation. However, believing that it’s the only part, and that antivirus alone will protect you, is highly inaccurate.

Antivirus is a piece of software designed to prevent, detect and remove malicious viruses from a computer system or network. Most antivirus software, traditionally and today, works by blacklisting malicious code. There are millions and millions of computer viruses out in the world — some are variants of known malware and some are brand new — and when antivirus software finds a known piece of bad code, it flags and bans it. A blacklist protection system is always playing catch-up — it is always inherently one step behind. Though it’s flawed, antivirus software using a blacklist model is straightforward and easy to implement, which is why it’s been our most widespread computer virus protection system for the past 25 years.

A far stronger model of security is whitelisting. Say there are 500 tasks that your computer needs to perform and programs it needs to run. If you already know that these 500 things are safe, a whitelist system allows these 500 things and prevents everything else. This method is incredibly effective and safe, yet very few people use it because it requires an enormous amount of effort to set up. Since you have to tell your computer each and every program it is allowed to run, the whitelist antivirus model is not practical for most people at this time.

If blacklist antivirus is too little, and whitelist systems are too much, to find the “just right” solution, I suggest taking a layered approach to your computer’s security. Your system needs to include a blacklist antivirus software program in addition to internet filtering. In this day and age, if you’re not filtering your internet connection, you’re going to be attacked and hackers are going to compromise your systems. This goes for both personal and business use.

It’s important to note that, when implemented properly, internet filtering can be aggravating at times, because by design it prevents you from going to some places on the internet. This barrier can get in your way, preventing you from doing something that you want to do. The worst thing that you can do in this instance is shut off your internet filtering. You can put all the safeguards in the world on your computer, but if you don’t actually use the tools, they’re worse than useless because they cost you money and don’t secure anything.

The last important thing to understand is that the general goal of computer viruses is to exploit some vulnerability in another piece of software on your computer, like Adobe Flash Player or Acrobat Reader. Even with antivirus software and internet filtering implemented, you need to be sure to keep your computers and all the software that runs on them up-to-date all the time.

This post originally appeared in The Tennessean.
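A minimal sketch of the whitelist idea described above, in Python: only executables whose hashes appear on an approved list are allowed to run. The hash value and file path are placeholders, and a real application-control product enforces this inside the operating system rather than in a script.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 digests of the programs you have approved.
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder digest
}

def is_allowed(executable: Path) -> bool:
    """Allow a program only if its hash is on the approved list."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in APPROVED_HASHES

# Anything whose hash is not on the list is refused -- including brand-new,
# never-before-seen malware. That is the inverse of the blacklist model,
# which can only block code it has already seen.
```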
<urn:uuid:71e38254-7367-49f5-a10a-b2ca86f9d0e9>
CC-MAIN-2024-38
https://concepttechnologyinc.com/blog/antivirus-software-still-a-necessary-tool/
2024-09-14T15:20:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00127.warc.gz
en
0.943861
739
2.796875
3
The technology industry — the companies that provide hardware, software and IT services — is at war with itself over the American H-1B visa program.

The H-1B is by definition a non-immigrant, temporary visa — it’s essentially a guest worker program for yuppies. The recipient of an H-1B visa must be championed by a company, and cannot easily change jobs once here, which translates into a kind of low-paid “indentured servitude.” The visa lasts six years. Spouses can come along, but can’t work while here. To reapply, the visa holder must leave for a year before coming back.

H-1B visas are in such demand (by both American companies and foreign skilled workers) that the quota for the whole year was filled in one day!

The program is controversial. Proponents argue that by enabling businesses to hire skilled workers from abroad, U.S. companies are better able to compete in the global marketplace. That competitiveness enables them to stay in business, and hire more American employees. Opponents charge that H-1B visa holders displace Americans because they’re paid less, and they lower American wages for skilled work. They also claim that H-1B visas provide a disincentive for American students to choose careers in technical fields like engineering.

The current cap of 65,000 people is itself controversial. Business executives want to raise it, some worker-advocacy groups want to lower it. (An additional 20,000 visas are allowed to workers with graduate degrees from U.S. universities, and there is no cap for hiring by non-profit companies, government research labs and universities.) The controversy is especially acute in tech centers like Silicon Valley because such skilled workers are so crucial to global competitiveness.

When you think of H-1B visa workers, you tend to think of engineers, designers and software developers. But, oddly, the H-1B program also includes fashion models!

A modest (but heretical) proposal

The H-1B is a “dual-intent” visa, which makes it possible to apply for a green card while on the visa, and many do — not with the intent of becoming U.S. citizens, but as a loophole to get around the six-year maximum stay.

The U.S. granted 1,266,264 immigrants legal residence — i.e. green cards — in 2006. The overwhelming majority of recipients were unskilled, poorly educated immigrants. Currently, a legal permanent resident must live in the United States for five years on a green card before applying for citizenship.

So here’s my proposal: Let’s transform the H-1B visa into a special “H-1B green card.” We can retain the 65k cap on the total number of skilled foreign workers, but they’ll be given all the rights and privileges of any other green card holder, except the “H-1B green card” expires after six years. Most importantly, they should be allowed to apply for U.S. citizenship any time during their stay, and become fully naturalized U.S. citizens on the first day of their fifth year if they choose. However, if they fail to become citizens within the six-year period, they have to leave (as they would have with the H-1B visa).

Rather than raising the H-1B visa cap, let’s increase the rights and privileges of skilled worker immigrants, and fast-track them to citizenship. Rather than using Silicon Valley as a training camp for foreign companies, let’s actually keep the skilled workers here. This proposal benefits everyone (except foreign economies).
It favors skilled immigrants who want to become Americans and keep their skills here, rather than bring them back (along with American trade secrets) to their countries of origin. They’ll have to be paid more (and therefore won’t depress American wages), because they’ll be able to change jobs, will be more likely to buy homes and have kids — all of which encourage the demand for higher wages. And they won’t be “stealing American jobs” because, soon enough, they themselves will be Americans. Meanwhile, American companies like Microsoft, IBM and Oracle benefit, because the total number of skilled workers they can hire and import from abroad rises, as “H-1B green card” holders transition to citizens and free up more H-1B spots. American companies will be more competitive, and will be more likely to stay in business, hire more workers and grow. We maintain our cap on temporary skilled workers, and America becomes slightly smarter and more skilled by the addition of a well-educated citizen of demonstrated value to the economy. Hey, I’ve got nothing against your tired, your poor, and your huddled masses. But IT grads are yearning to breathe free, too. Let’s do ourselves a favor and leverage the appeal of American citizenship to build a smarter, more skilled and educated population. Plus, we can always use more fashion models.
<urn:uuid:99ecaca8-4bd1-4a24-9def-c8153dbbbcf0>
CC-MAIN-2024-38
https://www.datamation.com/trends/a-modest-proposal-to-solve-the-h-1b-visa-crisis/
2024-09-16T22:30:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00827.warc.gz
en
0.962257
1,085
2.5625
3
Cost Benefit Analysis

This technique helps the project manager weigh the benefits of the quality efforts against their costs to determine the appropriate quality level and requirements for the project.

COST OF QUALITY (COQ)

COQ involves looking at the costs of conformance and non-conformance to quality and creating an appropriate balance. The costs of conformance should be lower than the costs of non-conformance.

RULE OF SEVEN

The rule of seven is a rule of thumb (a heuristic). It refers to a group or series of non-random data points that total seven on one side of the mean. The rule of seven tells the project manager that even if these points are within the control limits, they are not random and the process may be out of control.

ASSIGNABLE CAUSE / SPECIAL CAUSE VARIATION

If there is an assignable cause or special cause variation, it means a data point or series of data points requires investigation to determine the cause of the variation.

BENCHMARKING

This technique involves looking at other projects to get ideas for improvement on the current project and to provide a basis (or benchmark) to use in measuring quality performance.

DESIGN OF EXPERIMENTS (DOE)

DOE uses experimentation to statistically determine which variables will improve quality. DOE is a faster and more accurate statistical method because it allows project managers to systematically change all of the important factors in a process at once, rather than one at a time, and see which combination yields the best result for the project.

STATISTICAL SAMPLING

Doing quality audits for all the manufactured products in a project can be a time-consuming task, so it is often best to take a sample of the population. Sampling is used when:
- Auditing the whole population would take too long
- It costs too much
- Auditing is destructive

The sample size and frequency of measurements are determined as part of the Plan Quality process, and the actual sampling is done in Perform Quality Control.

OUTPUTS OF PLAN QUALITY

Following are the results of the Plan Quality process:

Quality Management Plan: The quality management plan determines what quality means for the project and puts a plan in place to manage it. The project manager needs to think through the areas of the project that are important to measure and decide what measurement systems are acceptable.

Quality Checklists: A quality checklist is a list of items to inspect, a list of steps to be performed, or a picture of the item to be inspected, with space to note any defects found.

Process Improvement Plan: The plan for improving the processes is called the process improvement plan, and it becomes part of the project management plan. The process improvement plan helps save time by increasing efficiency and preventing problems. It also saves money and increases the probability that the customer will be satisfied.

Project Management Plan and Project Document Updates: Updates to the project management plan and project documents are needed throughout the project management process.
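As a small illustration of the rule of seven described above, the following Python sketch scans a series of control-chart measurements and flags a run of seven consecutive points on the same side of the mean. The sample data and the way the mean is supplied are illustrative only.

```python
def violates_rule_of_seven(measurements, mean):
    """Return True if seven or more consecutive points fall on one side of the mean."""
    run = 0
    last_side = 0
    for value in measurements:
        side = 1 if value > mean else -1 if value < mean else 0
        if side != 0 and side == last_side:
            run += 1                      # the run on the same side continues
        else:
            run = 1 if side != 0 else 0   # a point on the mean breaks the run
        last_side = side
        if run >= 7:
            return True
    return False

# Seven points above a mean of 10.0 -> flagged for investigation,
# even though every point may still sit inside the control limits.
print(violates_rule_of_seven([10.2, 10.1, 10.3, 10.2, 10.4, 10.1, 10.2], mean=10.0))  # True
```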
<urn:uuid:e8ce3576-b8b6-4750-b695-1a7b7bdf5979>
CC-MAIN-2024-38
https://www.greycampus.com/opencampus/project-management-professional/opencampus-project-management-professional-cost-benefit-analysis
2024-09-16T23:14:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00827.warc.gz
en
0.928649
591
3.4375
3
Artificial neurons go quantum with photonic circuits (Phys.org) The University of Vienna's research on photonic artificial neurons was covered by the always excellent Phys.org; Inside Quantum Technology summarizes it here. Physicists at the University of Vienna have now demonstrated a new device, called the quantum memristor, which may allow us to combine the two worlds of AI and quantum technology, unlocking unprecedented capabilities. The experiment, carried out in collaboration with the National Research Council (CNR) and the Politecnico di Milano in Italy, was realized on an integrated quantum processor operating on single photons. The work is published in the current issue of the journal Nature Photonics. At the heart of all artificial intelligence applications are mathematical models called neural networks. These models are inspired by the biological structure of the human brain, made of interconnected nodes. One of the major game changers in the field was the discovery of the memristor, made in 2008. This device changes its resistance depending on a memory of the past current, hence the name memory-resistor, or memristor. A group of experimental physicists from the University of Vienna, the National Research Council (CNR) and the Politecnico di Milano, led by Prof. Philip Walther and Dr. Roberto Osellame, have now demonstrated that it is possible to engineer a device that has the same behavior as a memristor, while acting on quantum states and being able to encode and transmit quantum information. In other words, a quantum memristor. Realizing such a device is challenging because the dynamics of a memristor tend to contradict typical quantum behavior. By using single photons (i.e., single quantum particles of light) and exploiting their unique ability to propagate simultaneously in a superposition of two or more paths, the physicists have overcome the challenge. "Unlocking the full potential of quantum resources within artificial intelligence is one of the greatest challenges of the current research in quantum physics and computer science," says Michele Spagnolo, who is first author of the publication in the journal Nature Photonics. This new achievement represents one more step towards a future where quantum artificial intelligence becomes reality.
<urn:uuid:47aeeca6-626b-4101-bc0b-aa9b5e7957a2>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/artificial-neurons-go-quantum-with-photonic-circuits/
2024-09-16T22:08:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00827.warc.gz
en
0.925346
447
3
3
A new report from Capgemini’s Digital Transformation Institute has found that increased digital investments by power plant owners will create significant generation efficiency gains, driving down both operating costs and CO2 emissions.

The research revealed that utility companies are investing in significant digital enhancements to coal and gas-fired energy generation to increase production efficiency and reduce their generation costs. Over the past five years, firms have invested an average of $330 million in digitising their power plants. Continued investments will see one in five power plants (19%) becoming ‘digital plants’ by 2025, operating with approximately 27% lower costs and, together, contributing to a 4.7% reduction in global carbon emissions from power generation.

Reducing energy generation costs

The report, which surveyed utility leaders across China, France, Germany, India, Italy, Sweden, the UK and the US, found that the increased production efficiency achieved from digitisation will enable utilities to bring down energy generation costs. Power plants with digital technology will see 27% lower production costs, with individual plants saving $21 million each year on average. As the price of renewable energy continues to fall, these savings will enable organisations with gas and coal-fired plants to remain competitive.

With global electricity demand increasing year-on-year and ambitious global carbon reduction targets to meet, these digital investments will ensure that traditional power plants can continue to contribute to an energy ecosystem increasingly shifting towards renewable energy sources.

More environmentally friendly power production

The research also provides an optimistic outlook on the environmental benefits of digitising power plants. Utilities expect that digital investments will enable them to increase the energy produced from fossil fuels while decreasing carbon emissions. By 2025, digitised plants will annually produce 625 million metric tons less in carbon emissions – equivalent to a 4.7% reduction in global emissions from power plants, 28.6 billion more trees, or 133 million fewer passenger vehicles on the planet.

Greater gains possible from digital

Despite the huge potential gains from deploying digital plants, only 8% of utility organisations are currently digitally mature and just 19% of power plants are expected to be digital within five years. If more utilities were to prioritise digital investments, the benefits to the industry and climate could be much greater. However, the report highlights the need to acquire the digital maturity required to plan and manage digital power plant projects. A ‘digital beginner’ organisation typically achieves 33% less in productivity gains from digitising than a ‘digitally mature’ organisation.

Perry Stoneman, Global Head of the Energy & Utilities sector at Capgemini, said: “It’s clear that digital is already transforming power generation, enabling utilities to remain competitive and significantly reducing global carbon emissions. However, the industry can go further. With many utilities yet to digitise power plants, it is possible to reduce carbon emissions even more, if these utilities invest in digital skills and technologies. Firms which choose to embrace the digital future of power production now will gain a greater competitive advantage, lower production costs and boost their brand reputation.”
<urn:uuid:e9b1b001-b0b7-4108-9034-510f8f84ddd5>
CC-MAIN-2024-38
https://www.information-age.com/power-plants-digital-makeover-reduce-operating-costs-dramatically-7775/
2024-09-19T11:29:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00627.warc.gz
en
0.931627
631
2.703125
3
NoSQL Data Stores

NoSQL is one of the most frequently heard terms in technology today. NoSQL is not a single technology or framework but rather a consolidation of varying technologies with different use cases. NoSQL databases approach data storage in a less constrained way than the relational databases widely used today. Any database that is not an RDBMS, has schema-less structures, does not follow ACID transactions, and offers high availability and support for large data sets in horizontally scaled environments can be categorized as a “NoSQL data store”. Because there are no fixed schemas in a NoSQL database, it is well suited to storing the unstructured data typically handled by internet-scale websites. NoSQL is not a set of technologies competing with RDBMS but an alternative for use cases where an RDBMS may not be a suitable option. Many NoSQL database implementations do support SQL (Structured Query Language), and the term NoSQL is widely interpreted as ‘Not only SQL’.

Traditionally the emphasis for Relational Database Management Systems (RDBMS) has been on a set of properties called ACID (Atomicity, Consistency, Isolation, and Durability). The ACID properties of an RDBMS guarantee reliability by locking records while they are being updated. This constraint makes transactions reliable but affects the performance of the database itself, making it less suitable for internet-scale applications. In a database following ACID properties, two users querying the same data will get the same results after a transaction has executed.

Many NoSQL databases instead follow BASE (Basically Available, Soft-state, Eventually Consistent) properties. This means that while you can scale up your database and make it highly available, the consistency of data may not be immediate. In a database following BASE properties, two users querying the same data might get different results immediately after a transaction executes, but will get the same data eventually.

Lotus Notes from IBM is one of the first implementations of what we today identify as a NoSQL database. One of the developers of Lotus Notes, Damien Katz, built the first modern NoSQL database, CouchDB, in the mid-2000s. However, the real impetus for the whole NoSQL movement came from Google’s paper on Bigtable (2006) and Amazon’s paper on Dynamo (2007), which became the blueprint for anyone who wanted to develop a NoSQL database. Currently there are about 150 NoSQL databases developed and available, mostly open source.

Typical use cases where you can consider a NoSQL database include, but are not limited to:
– When you have scalability issues with your RDBMS and cannot achieve scale at an acceptable cost
– You have an application where data models change frequently and you cannot fix them in advance, a prerequisite for an RDBMS implementation
– Your application has a lot of data which does not mandate an explicit transaction, e.g. likes for each update on your website
– You deal with temporary data which does not get stored in the main transaction tables, e.g. site personalization, lookups or shopping carts
– You can allow temporary inconsistencies of data, e.g. updates on social networking sites can take time to be visible to all users
– You need to query data which is non-hierarchical, e.g. how many people on my extended social network have a bachelor’s degree in engineering from University of Pune.
– You have de-normalized data in your RDBMS

NoSQL databases can be classified mainly into four types:
- Key-Value (KV) Stores
- Document Stores
- Column Family Data Stores (or Wide Column Data Stores)
- Graph Databases

I have explained these types in detail in my next post. Recommended reading: Types of NoSQL Databases
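To make the key-value model concrete, here is a toy Python sketch of a schema-less store: any JSON-serializable document can be saved under a key, with no table definition required. The keys and record shapes are made up for illustration, and a real key-value store adds persistence, replication and partitioning on top of this basic idea.

```python
import json

class KeyValueStore:
    """A toy in-memory key-value store: no schema, values are arbitrary JSON documents."""

    def __init__(self):
        self._data = {}

    def put(self, key: str, value) -> None:
        # No schema to validate against -- any JSON-serializable structure is accepted.
        self._data[key] = json.loads(json.dumps(value))

    def get(self, key: str, default=None):
        return self._data.get(key, default)

    def delete(self, key: str) -> None:
        self._data.pop(key, None)

store = KeyValueStore()
store.put("user:42", {"name": "Asha", "interests": ["engineering", "cricket"]})
store.put("cart:42", {"items": [{"sku": "B-100", "qty": 2}]})   # a different "shape", same store
print(store.get("user:42")["name"])
```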
<urn:uuid:50753d57-27bb-475b-a2a4-ffcf7784d5cf>
CC-MAIN-2024-38
https://blazeclan.com/en-eu/blog/detailed-outlook-no-sql-databases/
2024-09-08T12:14:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00727.warc.gz
en
0.909995
786
3.171875
3
Table of contents
- Evaluating cyber risk
- The potential impacts of cyber attacks
- Cyber risks for small businesses
- Risk factors and high-risk events
- Mitigating cyber risks to small businesses
- How to protect personal data
- Cybersecurity training and certifications for entrepreneurs and managers
- Cyber resources for small businesses and entrepreneurs

An annual internet crime report published by the FBI states that in the year 2020 alone, the costs related to cybercrime exceeded $4.2 billion. It is important to bear in mind that small businesses are far from immune to cyber threats. As such, businesses of any size should ensure that they have gone to appropriate lengths to protect sensitive data relating to their clients, employees, and business operations.

Evaluating cyber risk

When evaluating cyber risks for a small business, the following measures can be effective:
- Determine what information your business manages and stores.
- Evaluate the security priority of the various types of information that your business manages and stores.
- Research what cyber threats could affect your business.
- Identify the potential outcomes of various types of cyberattacks.
- Judge how likely each breach scenario is to occur.
- Take inventory of your available resources for cybersecurity.
- Review your current procedures and policies relating to cybersecurity.
- Review your current software, hardware, and services related to cybersecurity efforts.

Once you have evaluated your cyber risks, you can better develop procedures to reduce them.

The potential impacts of cyber attacks

Potential negative consequences for a small business as a result of a cyber attack include:
- Damage to your business’s reputation;
- Damage to IT resources;
- Interference with IT processes;
- Interference with business operations;
- Financial losses related to loss of assets, legal actions, and damage control.

These impacts can arguably be far more devastating to small businesses, as they often have fewer resources to devote to recovery.

Cyber risks for small businesses

Cyber risks and risk factors are much the same for small businesses as they are for large businesses. However, small businesses may have fewer resources at their disposal to identify, prevent, manage, and recover from cyber threats.

Types of cyber risks

Common types of cyber risks for small businesses include:
- Phishing: Phishing refers to an attack where a bad actor poses as a reputable entity within the company to extract information from an employee or convince them to perform another action.
- Malware: Malware refers to a wide range of harmful software such as ransomware, spyware, adware, keyloggers, and trojans.
- Weak passwords: Some passwords are easier to guess than others, such as those that include simple personal information like names and birth dates.
- Insider threats: An insider threat is anyone who has internal access within the organization, which they can use to carry out a cyber attack.
- DDoS attack: A distributed denial-of-service (DDoS) attack is when bad actors attempt to overwhelm a website’s ability to process and store information through a barrage of requests.
- MitM attack: A man-in-the-middle (MitM) attack is when a bad actor attempts to intercept or interfere with communication or a transfer.
- Zero-day exploit: A zero-day exploit refers to an attack where a bad actor identifies and exploits a vulnerability in a system before the target organization is aware of it.
It is important to keep in mind that this is just a small selection of common vulnerabilities. There are a wide variety of strategies for a cyber attack, and these strategies are constantly evolving. Risk factors and high-risk events Potential factors that can increase a small business’s risk of cyber attacks include: - Lack of resources: Lack of resources such as money, manpower, and computer assets may make it more difficult for a business to prevent, manage, and respond to a cyber attack. - Lack of information: If business owners and employees are unaware of what cyber security threats there are and how to effectively manage them, they will be less equipped to deal with them. - Lack of cybersecurity policies: Without specific policies in place regarding cybersecurity, there is no consistent structure to fall back on in terms of security. - Lack of training: Informational resources and cybersecurity policies are not as effective without additional training to reinforce the information and procedures. - Lack of automated processes: Automation of IT functions can reduce the possibility of human error. - Lack of a recovery plan: Businesses need to be able to recover from breaches in addition to preventing them. - Insider threats: An insider threat such as a disgruntled employee often has easier access to sensitive data. - Man-made disasters: Man-made disasters such as large-scale accidents and acts of terror can affect elements such as the power grid, available hardware, and access to restricted areas. - Natural disasters: Similar to man-made disasters, natural disasters such as floods and earthquakes can affect elements such as the power grid, available hardware, and access to restricted areas. If your business is experiencing one of these issues or is likely to, it is important to factor this information into your cybersecurity strategy. Mitigating cyber risks to small businesses Options for reducing the likelihood and impact of cyber attacks for small businesses include: - Sufficient training for employees: Ensuring that employees are aware of potential risks and what to do if they encounter them can allow organizations to more quickly and effectively identify threats. - Regular software updates: Software updates often include patches related to cybersecurity that address newly identified threats and vulnerabilities. - Regular hardware updates: To function optimally, cybersecurity software requires a compatible system with sufficient power and storage to run on. - Regular policy updates: Cybersecurity threats are constantly evolving, along with best practices for addressing them. As such, policy related to cybersecurity should reflect these changes. - Creation of backup files: Backup files can prevent information from being permanently lost. Backup files can be handled on-site or through an RMM. File backup strategies will be more effective if they are highly automated. - Use of high-quality cybersecurity services: For some businesses, it is more feasible to outsource cybersecurity management than to handle it in-house. Doing so may offer them more resources than they could manage or afford otherwise. - Use of high-quality information storage services: Outsourcing data storage can also be a helpful option for businesses that can’t effectively handle secure data storage in-house. Cloud computing is a common option because data is encrypted and stored by a third-party, which provides several obstacles for bad actors. 
Implementation of an alert system: The sooner you know about a security breach, the better. To this end, an automated alert system can bring potential threats to your attention more quickly. These strategies will be particularly important if you store personal data such as client or employee information. How to protect personal data Steps for protecting sensitive client and employee data can include: Developing specific policies and procedures related to cybersecurity; - Providing informational materials to employees; - Ensuring data is highly organized; - Providing cybersecurity awareness training; - Utilizing secure platforms for storing and managing information; - Regularly updating software; - Regularly updating hardware; - Monitoring access and activity on the network; - Restricting access to sensitive information; - Maintaining access logs; - Having a detailed bring-your-own-device policy; - Limiting what information is documented; - Securely deleting information as needed; - Using secure passwords; - Utilizing multi-factor authentication; - Using cybersecurity tools; Immediately reporting and investigating possible security breaches. It is also important to keep in mind that depending on your industry and the data you manage, you may be subject to regulations relating to its storage and transfer. For example, healthcare organizations are subject to HIPAA regulations. Cybersecurity training and certifications for entrepreneurs and managers Training and certification related to cybersecurity can be useful resources for small businesses looking to improve their cybersecurity protocols and practices. Such training programs can be used to refine the knowledge of existing employees or to more effectively vet new-hires by requiring certain certifications. What Is Cybersecurity awareness training? Cybersecurity awareness training refers to generalized training programs meant to increase employees’ understanding of cybersecurity risks and how they can help manage them. Such programs are a valuable asset for cybersecurity within organizations, as uninformed personnel can represent a vulnerability. Training programs are offered both through the federal government and private vendors. Cybersecurity training platforms Helpful cybersecurity training platforms include: - Cybrary: This is an aggregation of crowd-sourced career development resources related to cybersecurity. - EC-Council: This entity provides training and certification related to cybersecurity. - IBM: Although IBM is most well-known for selling hardware and software, they also provide consultation and training services related to cybersecurity. - (ICS)2: This entity provides training and certification related to cybersecurity. - Khan Academy: This is a crowd-sourced online learning platform. - Open Security Training: This is a crowd-sourced training platform that focuses on cybersecurity. - SANS Institute: This entity provides training and certification related to cybersecurity. - Skillshare: This is a crowd-sourced online learning platform. - Udemy: This is a crowd-sourced online learning platform. Various educational institutions also offer cybersecurity training programs. Valuable cybersecurity certificates include: Certified Ethical Hacker (CEH): This certification is administered by the EC-Council through various educating bodies, such as universities. Ethical hackers are given clearance to access various elements of a network to identify and correct vulnerabilities. 
GIAC Security Essentials: This certification is developed by Global Information Assurance Certification (GIAC), and is used to confirm knowledge related to information security. Exams are conducted online through third-party vendors. Certified Information Security Manager (CISM): This certification is developed and administered through Information Systems Audit and Control Association (ISACA) to demonstrate an individual’s ability to manage enterprise information security. You can take the exam online or in-person through PSI. CompTIA Security+: This certification was developed to test basic knowledge related to information security. The certification is administered by the Computing Technology Industry Association (CompTIA) online or in-person through Pearson VUE. Certified Information Systems Security Professional (CISSP): This certification is offered through the International Information System Security Certification Consortium, and is meant to test knowledge related to information security. Testing is conducted through Pearson VUE. These are just a few examples of reputable certifications. It is always important to investigate the validity of certifications and the entities through which they’re offered before pursuing them. Cyber resources for small businesses and entrepreneurs Resources that can help small businesses protect sensitive information and other digital assets include: - Anti-virus software: Anti-virus software identifies and fixes viruses. - Encryption: Encryption is the process of converting data into a cipher that can only be deciphered by authorized devices. - Backup files: A backup file is a copy of a file stored in a secondary location such as a cloud server to preserve it in case the device or storage software is compromised. - VPNs: A virtual private network (VPN) disguises your IP address and encrypts data related to your internet activity to offer more privacy to the user. - DaaS: Data-as-a-Service refers to cloud computing services that are used to manage data. - Firewalls: A firewall controls incoming and outgoing traffic for a network. - Authenticator apps: Authenticator apps generate codes for one-time use to confirm user access. - Password managers: A password manager generates and stores passwords. Which options you implement and how will depend on a variety of factors, including your business structure and individual preference.
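The authenticator apps listed above typically implement the time-based one-time password (TOTP) scheme from RFC 6238. The following Python sketch shows the core of that algorithm using only the standard library; the Base32 secret shown is a placeholder, and a production deployment would rely on a vetted library and secure secret storage rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret -- in practice it is provisioned per user, e.g. via a QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```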
<urn:uuid:5c817bf0-8770-40d4-b853-fbfbc614c17b>
CC-MAIN-2024-38
https://www.atera.com/blog/cybersecurity-resource-guide-for-small-businesses-and-entrepreneurs/
2024-09-08T13:47:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00727.warc.gz
en
0.93882
2,495
2.671875
3
Harvard Researchers Develop Quantum Circuit-Based Algorithm Inspired by Convolutional Neural Networks (Phys.org) A team of researchers at Harvard University recently developed a quantum circuit-based algorithm inspired by convolutional neural networks (CNNs), a popular machine learning technique that has achieved remarkable results in a variety of fields. “Our work is largely motivated by recent experimental progress to build quantum computers and the development of artificial intelligence based on neural network methods,” Soonwon Choi, one of the researchers who carried out the study, told Phys.org. “In some sense, the idea to combine machine learning techniques and quantum computers/simulators is very natural: In both fields, we are trying to extract meaningful information from a large amount of complex data.” Choi had often wondered whether there might be a more efficient way of analyzing the large amount of complex data obtained using quantum simulators. Artificial neural networks soon caught his attention. In their future work, Choi and his colleagues will first try to use their findings to develop new quantum computers. In addition, they would like to carry out further research investigating the relationship between CNNs or other neural network based methods and renormalization techniques.
<urn:uuid:5e2ccacd-7dd9-4520-a241-5618d283be67>
CC-MAIN-2024-38
https://www.insidequantumtechnology.com/news-archive/harvard-researchers-develop-quantum-circuit-based-algorithm-inspired-convolutional-neural-networks/
2024-09-10T23:04:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00527.warc.gz
en
0.950607
244
3.140625
3
The construction industry stands at a pivotal intersection, where the demand for sustainable practices meets the urgent need to combat climate change. With buildings accounting for 37% of global CO2 emissions and 34% of species habitat loss due to urban development, the sector’s transformative shift toward green building practices is both a responsibility and an opportunity. The World Economic Forum’s recent report lays out a comprehensive roadmap to decarbonize the global building sector using 11 strategic transition levers, covering every phase of a building’s life cycle from construction to usage to end-of-life. This holistic approach aims for outcomes that are not only net-zero carbon but also nature positive, resilient to extreme weather, and conducive to occupant well-being. At the heart of this green building revolution is the potential for significant economic gains. Early adopters of green building practices might tap into a projected $1.8 trillion market, providing a robust incentive for builders and developers to align with these new standards. This economic upside is juxtaposed with the critical ecological benefits, offering a dual motivation to embrace sustainable construction. The industry is encouraged to adopt a comprehensive vision characterized by four main attributes: net-zero emissions, environmentally enhanced performance, resilience to climate volatility, and improved community and occupant well-being. Strategic Transition Lever: Decarbonizing the Building Value Chain The roadmap proposed by the World Economic Forum outlines 11 strategic levers aimed at decarbonizing the building value chain. These levers take into account the various stages of a building’s life, from the initial construction and material sourcing to its operational phase and eventual decommissioning or repurposing. By targeting these stages, the approach ensures a reduction in carbon emissions and a positive impact on biodiversity throughout the entire building lifecycle. Implementing advanced materials and technologies is paramount in achieving net-zero emissions. Innovative construction materials, such as low-carbon cement, recycled steel, and bioplastics, are just a few examples of the possible advancements. Leveraging these materials can significantly minimize the carbon footprint of new buildings and renovations. In addition to materials, technological breakthroughs like AI and smart building systems can optimize energy efficiency and reduce waste. AI algorithms can analyze building usage patterns to manage energy consumption more effectively, while innovations in biomaterials promise new ways to reduce reliance on traditional, carbon-intensive materials. Smart building systems, which encompass everything from energy-efficient HVAC systems to smart lighting, further contribute to a building’s overall sustainability. Together, these innovations form the backbone of a decarbonized building industry, opening new avenues for collaboration across the value chain. Enhancing Environmental Performance and Resilience Apart from reducing carbon emissions, green building practices also emphasize enhancing environmental performance through the integration of natural elements. Green roofs, vertical gardens, and rainwater harvesting systems contribute to the local ecosystem and provide urban biodiversity havens. These natural elements improve air quality, reduce heat island effects, and offer additional insulation, thereby lowering energy costs. 
The emphasis on resilience to climate volatility is another critical component. Buildings must be designed to withstand extreme weather conditions, from hurricanes to heatwaves. Using durable, sustainable materials alongside architectural designs geared towards resilience can minimize damage and extend the life of buildings. Sustainable construction also involves community and occupant well-being. Green buildings typically promote healthier indoor environments with better air quality, natural lighting, and non-toxic materials. These improvements contribute to the overall health and productivity of the building’s occupants. High-quality, sustainable living and working spaces also enhance property values and attract tenants who prioritize eco-friendly practices. Additionally, the incorporation of shared green spaces and community areas fosters a sense of well-being and social cohesion among residents and users. Collaboration, Policy, and Global Impact The construction industry is at a crucial turning point, facing the dual demands of sustainable practices and urgent climate action. Buildings are responsible for a staggering 37% of global CO2 emissions and 34% of habitat loss due to urban development, making the industry’s shift to green building practices essential and opportune. The World Economic Forum has outlined a detailed roadmap to decarbonize the global building sector through 11 strategic levers, addressing every stage of a building’s life cycle—from construction to usage to end-of-life. This comprehensive approach aims not just for net-zero carbon outcomes but also for nature-positive and weather-resilient structures that enhance occupant well-being. Central to this green revolution is the prospect of substantial economic gains. Early adopters could access a projected $1.8 trillion market, providing strong financial incentives for builders and developers to adopt these new standards. This economic potential complements the crucial environmental benefits, creating a powerful dual motivation. The industry is urged to adopt a vision with four core attributes: net-zero emissions, enhanced environmental performance, resilience to climate extremes, and improved community and occupant well-being.
<urn:uuid:8bd7d2b3-efe2-484c-b0dd-813691a53dad>
CC-MAIN-2024-38
https://constructioncurated.com/market-overview/green-building-revolution-decarbonizing-construction-for-a-sustainable-future/
2024-09-13T11:38:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651513.89/warc/CC-MAIN-20240913101949-20240913131949-00327.warc.gz
en
0.902432
1,002
2.96875
3
Network cable testers are essential tools for engineers or technicians to ensure the connectivity and reliability of wired ethernet networks. In this comprehensive guide, we will explore how network cable testers work. We will also discuss how to detect, localize, and resolve common faults and issues, plus the difference between network cable testers and ethernet network testers. ● What is a Network Cable Tester? ● What are the types of Network Cable Testing? ● Detecting and Localizing Faults with a Network Cable Tester ● The Difference between a Network Cable Tester and an Ethernet Network Tester What is a Network Cable Tester? A network cable tester is a specialized tool that network technicians rely on to assess the functionality and integrity of network cabling. Its primary purpose is to verify the physical connections and wiring within a network infrastructure, ensuring proper installation for optimal data transmission. It is important to know the key differences between the types of cable testers and technologies available, and which is appropriate for your requirements. (Note, this article focuses on testing twisted-pair copper cabling; fiber optic testing is another subject.) Three Types of Network Cable Testing – Validation, Qualification, Certification Validation – Basic Cable Testing Validating or verifying cabling is the most essential form of cable testing in which the connectivity of the individual wires is examined. The simplest form of tester in this category may rely on having a remote terminator at the far end; an electrical pulse is sent down the wire in order to identify common issues such as breaks (or “opens”), shorts (miswiring where one conductor contacts another), and improper continuity (sometimes called “wiremap”). Wiremapping ensures that each conductor is correctly terminated to the correct pins in plugs or jacks. More sophisticated testers use a technology called Time Domain Reflectometry (TDR) where an electrical pulse is injected into cabling which is open or unterminated at the far end. By measuring and interpreting the electrical reflections coming back to the tester (and the timing of the signal), it is able to determine various faults and the distance to them as well as the overall length of the cable. (Accurate length and distance to fault measurements require the user to enter the type of cabling under test. The “speed” of the cable – referred to as its Nominal Velocity of Propagation (NVP) – must be taken into account for accurate length and distance measurements.) Qualification – What bandwidth will the cabling support? Validating connectivity and proper wiring is essential, but do you know whether your cabling is of sufficient quality to provide the bandwidth required? With the insatiable growth in bandwidth demands, increasing speeds of Wi-Fi Aps (with Multi-Gig 2.5/5Gbps backhauls), and 1Gbps to 10Gbps infrastructure upgrades, network professionals must have confidence that their network will transport that data error-free at the maximum speed possible. Downtime or intermittent loss and errors is simply not an option. To “qualify” a cable plant is to assess its ability to carry data at a particular speed or rate, error free. The most reliable and meaningful test methodology is to generate and measure the transmission of line rate Ethernet frames point-to-point over your network cabling infrastructure, qualifying its ability to support 100M/1G/2.5G/5G/10G on copper links. 
To do so, two testers are typically used, one at each end of the network under test. A pre-configured rate of network traffic is generated both upstream and downstream simultaneously, measured at each end. (These types of testers typically also include basic cable validation features as well.) Running this type of test over a long duration (up to 24 hours) serves as a “soak test” to identify the presence of intermittent issues and noise events that can corrupt network traffic. The other type of cable qualification tester is one that utilizes electrical signal parametrics (measurements such as insertion or return loss, near-end and far-end crosstalk, etc.) and compares those results against the specifications of a particular cabling standard, such as TIA-568-C.2 or ISO/IEC 11801. For example, if the test results are within the parameters of the standard’s requirements for Category 6A cabling, one can infer that the cabling will support 10Gbps. But beware! Just because a cable does not meet the requirements of a standard does not mean that the cabling cannot support the transmission of packets at speed, error free. Many network owners have spent thousands in re-cabling links that did not meet the standard, but which could still carry multiple gigabits of network traffic. In fact, depending on the quality of installation and other parameters (such as lengths <100 meters), even Category 3 cabling can support 1 Gbps or better! Qualifying your cable plant by transmitting actual packets could save you from costly, unnecessary upgrades. Certification – Does the cabling meet the requirements of a particular standard? The only way to ensure that a cable plant complies with an industry standard is to use a tester that is capable of certification, comparing measurement results with the requirements as set out by the Telecommunications Industry Association (TIA) or International Standards Organization (ISO). The test results will show either a “pass” or “fail” against the requirements of the standard employed. No data frames are utilized in this methodology; specific electrical signaling and parametric measurements are compared to the standard’s specifications. Typical cable certification testers can be very expensive, single-purpose tools – making them cost-prohibitive for many installers and end-users. How to use a network cable tester Technicians connect the tester to the end of the cable being tested (and terminate at the far end if required.) The tester then sends signals through the cable and analyzes the responses received. This process helps identify common issues such as breaks (opens) in the cable, unintended connections (shorts), incorrect wiring, and problems related to cable length and signal quality. By accurately identifying these faults, network cable testers enable technicians to locate the exact source of the problem. This information is invaluable for efficient troubleshooting and allows technicians to take appropriate actions to resolve the issues promptly. By utilizing a network cable tester to validate an installation or adds, moves, and changes, technicians can ensure the overall performance and reliability of network cables. Properly installed and tested cables minimize the risk of network connectivity issues, data transmission errors, and downtime. This leads to improved network efficiency, reduced troubleshooting time, and increased productivity. 
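As a rough illustration of how the TDR measurements described above turn into a length or distance-to-fault figure, the arithmetic is straightforward once the NVP is known: the pulse travels out and back, so the one-way distance is half of (reflection time × speed of light × NVP). The sketch below is only a toy calculation with made-up example numbers, not how any particular tester implements it.

```python
# Toy Time Domain Reflectometry (TDR) distance calculation.
# The NVP and reflection time below are illustrative values, not vendor data.

C_M_PER_NS = 0.2998  # speed of light in metres per nanosecond

def distance_to_fault_m(reflection_time_ns: float, nvp: float) -> float:
    """Estimate one-way distance to a reflection (a fault or the cable end).

    reflection_time_ns: round-trip time of the reflected pulse, in nanoseconds.
    nvp: Nominal Velocity of Propagation as a fraction of c (e.g. 0.68).
    """
    round_trip_m = reflection_time_ns * C_M_PER_NS * nvp
    return round_trip_m / 2  # the pulse travels out and back, so halve it

# Example: a reflection seen 640 ns after the pulse, on cable with NVP 0.68
print(f"{distance_to_fault_m(640, 0.68):.1f} m")  # roughly 65 m
```

This is also why entering the correct cable type matters: an NVP value that is off by a few percent shifts every length and distance-to-fault reading by the same percentage.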
Features of Network Cable Testers Network cable testers offer a range of features to enhance their functionality: ● Connectivity Testing: Network cable testers are designed to test the connectivity of the cables. They can check if a properly wired connection is available from one end of the cable to the other. Some advanced models can even measure the cable length, identify open circuits, short circuits, or reversed connections. ● Cable Locating: Basic cable testers typically also include the ability to inject an analog or digital “tone” onto the wire which can be detected and audibly amplified by a receiver called a probe (or “tone probe”). This is used to locate cable runs and to identify individual cables inside bundles in trays or equipment racks. ● Power over Ethernet (PoE) Measurement: There are two levels of PoE validation. Inexpensive testers are capable of reporting the presence and voltage of PoE supplied by a PSE (power sourcing equipment), but only more sophisticated testers are capable of load testing, that is, measuring the actual power (in watts) that the PSE is actually delivering. This is the only way to ensure that a PD (powered device) will have enough power to successfully operate, and that a switch’s PoE budget has not been maxed out. ● Network Link Testing: Some cable testers also feature the ability to detect ethernet link pulse and, in some cases, actually link to the network. The distinction between link DETECTION and the speed at which the tester can actually connect to the network is very important (see “Ethernet Network Tester” below.) ● Signal Strength Analysis: Testers with link detection may also come with features that allow them to analyze the signal strength on the network cable. In some cases, this is reporting the strength of the link pulse signal; this could include signal-to-noise ratio (SNR) and delay skew between pairs, essential measurements for multi-gig links (2.5 and 5Gbps.) Detecting and Localizing Faults with a Network Cable Tester Network cable testers are capable of detecting various faults and issues that can impact network performance. Common errors include opens, shorts, incorrect wiring, and incorrect cable lengths. Using a network cable tester, these faults can be localized, enabling technicians to respond quickly and efficiently. By utilizing the tester's features, such as wire mapping, length measurement, and signal quality analysis, network technicians can identify the exact location and nature of the fault. This facilitates targeted troubleshooting and minimizes downtime. The Difference between a Network Cable Tester and an Ethernet Network Tester A network cable tester is aimed at diagnosing potential problems at the physical layer. It identifies a wide range of wiring faults, such as shorts, open wires, crossed pairs, and reversed pairs in ethernet cables. Additional features of advanced models may include the ability to measure cable length and trace the distance to faults. While the network cable tester verifies the physical integrity of the ethernet cable connections that play a critical role in network reliability, the ethernet network tester focuses on ensuring operational performance and diagnosing higher-level network issues. Actual link testing is the job of an ethernet network tester – a more sophisticated tool designed to test and evaluate the performance and functionality of ethernet-based networks. 
It measures data transmission, checks error rates, and assesses the operations of switches, routers and other integral components within the network's infrastructure, including key services such as DHCP and DNS. Network cable testing can also be part of this, but is not always included. While some cable testers may include the ability to detect ethernet link pulse, that is, to identify whether or not the cabling is connected to an active port, be aware that link “detection” is not the same as actively linking to the network. It simply decodes and reports the type of link pulse it sees on the wire. Even certain network testers may claim “10Gbps capable” when, in fact, they are not able to actively link at that speed – simply detecting that the link partner is capable of connecting at that rate is not the same as validating that capability. Only by actually linking and communicating at the necessary speed can you be sure that a specific network link is operating at the highest possible configuration. Acting as a full-functioning network client, network testers will also validate the availability and function of key network services such as DHCP and DNS, ensuring that other connected clients can get a valid IP address and resolve network addresses (URLs). To ensure full connectivity at higher layers, the network tester may also be able to ping other devices on the local network or at remote sites, and/or conduct TCP/IP connectivity testing to assure the successful operation of specific applications and ports across the network infrastructure. Additional functions of more sophisticated network testers can include packet capture, network discovery, path analysis, performance testing, remote control, and more. NetAlly's LinkRunner AT: An Advanced Solution for Ethernet Network Testing NetAlly's LinkRunner AT is a handheld ethernet network tester that includes cable testing, purposefully engineered to make the testing process faster and more efficient. The device is equipped with advanced features, such as actively testing network linking at 1Gbps, identifying the link partner (switch port), confirming reachability of internal network servers and/or external internet sites, and validating key services (DHCP, DNS) – capabilities that go well beyond the scope of basic cable testers by verifying the functionality and connectivity of the broader ethernet network. The LinkRunner AT stands out due to its user-friendly interface and ability to rapidly detect and troubleshoot network issues. It provides detailed reports, allowing for better documentation. Test results can be automatically uploaded to the Link-Live reporting and analysis platform to improve collaboration between network engineers and technicians, creating greater job visibility, project control, and fleet management. Request a free virtual demo to learn more. Network cable testers are fundamental tools for maintaining the connectivity of wired networks. As dedicated devices for assessing the quality and integrity of connections within a network infrastructure, they play a key role in ensuring consistent and efficient data transmission. These testers probe into the detail of each network cable, checking for errors that might affect network performance, such as open connections, shorts, incorrect wiring, and even problems related to the length and quality of signal transmission. 
By isolating these physical issues, network cable testers not only assist in quick and accurate fault detection but also enable more efficient physical layer problem resolution. The indispensability of these tools stems from their ability to precisely map the fault source, which aids targeted troubleshooting and minimizes network downtime. Advanced network cable testers stand out with enhanced features such as wire mapping, deeper connectivity testing, and signal strength analysis, furnishing technicians with a nuanced view of network cable health. However, they are not the only factor in a healthy network. While network cable testers are crucial for verifying the physical robustness of network cables, the comprehensive overview provided by ethernet network testers, such as NetAlly's LinkRunner AT, complements their functionality. The LinkRunner AT offers both – essential cable testing AND in-depth network diagnostics that go beyond basic physical connections, providing a more complete picture of overall network health and connectivity. This results in enhanced network management and improved performance and reliability. Director, Product Marketing Dan Klimke serves as the Director of Marketing for NetAlly (www.netally.com). He has been involved in the premises cabling and network performance analysis segments of the network communications business for the past 30 years and is responsible for brand and product management as well as field and channel marketing at NetAlly, a leading provider of network analysis and troubleshooting solutions.
<urn:uuid:6c6ab458-6185-4b30-959a-9cd5ec6c46ef>
CC-MAIN-2024-38
https://www.networkdatapedia.com/post/how-do-network-cable-testers-work-dan-klimke
2024-09-14T16:27:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00227.warc.gz
en
0.925699
2,992
3.125
3
Deep Poetry-AI Poetry Analysis Tool Enhance your poetry with AI-powered insights. Poetry writing and analysis guide, educating on poetic forms and styles. Learn how to interpret and write poems like a professional poet. Poem Generator for Sonnets, Haiku, Odes, Free Verse, Elegy, Limerick, Epic, Acrostic and Spoken. Introduction to Deep Poetry Deep Poetry is an advanced tool designed for the detailed analysis and enhancement of poetic works. Its primary purpose is to aid poets, literary scholars, and enthusiasts in understanding and refining the structural and artistic elements of poetry. Deep Poetry leverages natural language processing and AI to evaluate poems based on a variety of metrics, such as semantic depth, rhyme quality, syllabic patterns, and the use of literary devices. For example, a poet seeking to improve the rhythmic flow of their sonnet can use Deep Poetry to mark stressed and unstressed syllables, ensuring adherence to iambic pentameter. Another scenario could involve a literary scholar using the tool to analyze the depth of meaning in rhyming pairs by examining colexification and semantic similarity, providing deeper insights into the poem's thematic richness. Main Functions of Deep Poetry Scansion Markup Highlighting the pattern of stressed and unstressed syllables in a poem. A poet revising a poem for a consistent meter uses Deep Poetry to automatically mark the stressed (**bold**) and unstressed (**unbold**) syllables, ensuring the poem maintains its intended rhythmic pattern. Rhyme Quality Analysis Evaluating and scoring rhymes based on type, semantic similarity, and syllabic match. A writer working on a collection of rhymed verses employs Deep Poetry to analyze the rhyme schemes, identifying areas where rhymes can be improved for better phonetic appeal and semantic depth. Literary Device Identification Detecting and demarcating literary terms such as metaphors, similes, and alliteration. An English literature student uses Deep Poetry to annotate a poem with various rhetorical and poetic techniques, facilitating a deeper understanding of the poem's stylistic and thematic elements. Ideal Users of Deep Poetry Poets and Writers Poets and writers seeking to refine their craft can benefit greatly from Deep Poetry's in-depth analysis tools. The ability to ensure rhythmic consistency, enhance rhyme schemes, and identify literary devices can elevate the quality of their work, making their poetry more engaging and sophisticated. Literary Scholars and Students Literary scholars and students can use Deep Poetry to conduct detailed analyses of poetic texts. The tool's capability to measure semantic depth, mark scansion, and identify rhetorical devices provides valuable insights for academic research, essays, and critical studies. How to Use Deep Poetry Visit aichatonline.org for a free trial without login, with no need for ChatGPT Plus. 
Familiarize yourself with the interface and explore the various tools available, such as rhyme analysis, semantic depth measurement, and scansion markup. Input your poem or text into the designated area and select the desired analysis features to apply. Review the detailed feedback provided, including stressed and unstressed syllables, rhyme scores, and literary term identification. Utilize the feedback to refine your poetry, ensuring optimal rhythm, rhyme, and semantic depth. Repeat the analysis as needed. - Creative Writing - Poetry Analysis - Literary Study - Rhyming Help - Poem Improvement Detailed Q&A About Deep Poetry What is Deep Poetry? Deep Poetry is an advanced AI tool designed to analyze and improve the structural and artistic elements of poetry. It offers features like rhyme analysis, scansion markup, and semantic depth measurement. How does Deep Poetry measure semantic depth? Deep Poetry measures semantic depth by analyzing the meaning and nuances of words in rhyming pairs. It considers colexification and the relationship between words to provide a depth score. Can Deep Poetry identify literary terms in my poem? Yes, Deep Poetry can identify and demarcate various literary terms, including rhetorical and poetic techniques, enhancing your understanding of the text's literary elements. How does the scansion markup feature work? The scansion markup feature highlights the pattern of stressed and unstressed syllables in a poem, helping you understand its meter and rhythm more clearly. Is Deep Poetry suitable for beginners? Absolutely! Deep Poetry is user-friendly and provides detailed feedback that can help beginners understand and improve their poetry, making it a valuable tool for poets of all levels.
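As a purely illustrative aside (not part of the tool's documentation), one small ingredient of rhyme scoring, how closely two word endings match, can be approximated in a few lines of code. The function and example words below are invented for this sketch and say nothing about how Deep Poetry actually combines rhyme type, semantic similarity, and syllabic match.

```python
# Naive end-rhyme score: fraction of matching trailing characters.
# This ignores pronunciation, stress, and semantics, so it is only a toy.

def end_rhyme_score(word_a: str, word_b: str, max_tail: int = 4) -> float:
    a, b = word_a.lower(), word_b.lower()
    matched = 0
    for i in range(1, min(len(a), len(b), max_tail) + 1):
        if a[-i] == b[-i]:
            matched += 1
        else:
            break
    return matched / max_tail

print(end_rhyme_score("night", "light"))   # 1.0, strong ending match
print(end_rhyme_score("night", "nation"))  # 0.0, no ending match
```

A real analyzer would work on phonemes rather than spelling and would weigh meaning as well as sound, which is exactly the gap tools of this kind aim to fill.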
<urn:uuid:878fa1b9-063f-420a-b2be-605a43924379>
CC-MAIN-2024-38
https://theee.ai/tools/Deep-Poetry-2OToEiyetb
2024-09-15T23:54:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00127.warc.gz
en
0.884907
1,294
2.703125
3
The Information Transparency and Personal Data Control Act, proposed by Suzan DelBene (D-WA), aims to give consumers control over how businesses share or sell their personal information--spanning everything from identifiers to financial, health, genetic, biometric and geolocation data, sexual orientation, citizenship and immigration status, Social Security numbers and religion--with or without their permission. The bill was initially introduced in 2018 in a previous session of Congress but did not come to a floor vote. Key elements of the measure include: - Requires companies to provide their privacy policies in plain English. - Allows users to opt in before companies can use their most sensitive private information in ways they might not expect. - Increases transparency by requiring companies to disclose if and with whom their personal information will be shared and the purpose of sharing the information. - Creates a unified national standard and avoids a patchwork of different privacy standards by preempting conflicting state laws. - Gives the Federal Trade Commission strong rulemaking authority to keep up with evolving digital trends and the ability to fine bad actors on the first offense. Empowers state attorneys general to also pursue violations if the agency chooses not to act. - Establishes strong privacy hygiene by requiring companies to submit privacy audits every two years from a neutral third party. U.S. Personal Privacy Act: Building On California, Virginia Laws? In the absence of a federal law for data privacy, first California and now Virginia have stepped into the void with strong statutes giving consumers substantial control over their personal information. Other states are also considering similar legislation, including New York, Oklahoma, Utah and Washington. DelBene's bill could help settle confusion among consumers and businesses by superseding state data privacy laws. A national standard is necessary to establish a uniform set of rights for consumers and rules for businesses regarding how personal data is used, DelBene said. “Data privacy is a 21st Century issue of civil rights, civil liberties, and human rights and the U.S. has no policy to protect our most sensitive personal information from abuse,” she said. “With states understandably advancing their own legislation in the absence of federal policy, Congress needs to prioritize creating a strong national standard to protect all Americans. This bill will create those critical protections,” she said. U.S. Personal Privacy Act: Early Advocates The proposed legislation drew endorsement from a number of IT associations, including the Information Technology and Innovation Foundation, the U.S. Chamber Technology Engagement Center, the Information Technology Industry Council and others. Here is a sampling of the advocacy: “By significantly strengthening the FTC's enforcement capabilities, establishing uniform national rules for the digital economy, and ensuring businesses focus on protecting consumers' most sensitive information, this legislation would boost consumer protection without sacrificing innovation,” said Daniel Castro, Information Technology and Innovation Foundation vice president. Tom Quaadman, U.S. Chamber of Commerce Technology Engagement Center executive vice president, said now is the time to enact a national privacy law that gives “every American the right to control their privacy, no matter where they live, with a clear set of rules for all businesses, no matter where they operate.” 
Shannon Taylor, Information Technology Industry Council senior vice president and senior counsel for government affairs, called a national privacy law a “top policy priority to enable innovation while upholding the individual rights of citizens who entrust companies with their personal data.” The bill has global implications, DelBene said. “If we do not have a clear domestic policy, we will not be able to shape standards abroad, and risk letting others, like the European Union, drive global policy,” she said.
<urn:uuid:8e75a6dd-41bb-482b-acd8-f4bea21fbf06>
CC-MAIN-2024-38
https://www.msspalert.com/news/us-consumer-data-privacy-legislation
2024-09-15T23:24:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00127.warc.gz
en
0.925485
767
2.59375
3
Why Texts are so Difficult to Understand ~ a Tennis Example Even a simple newspaper article is sometimes hard to understand. The following fragment was taken from a Belgian sports newsletter, talking about Kim Clijsters' Tennis Ranking. It shows how difficult it is to write text that is correct and easy to understand. And these are only a few lines… Kim Clijsters' Tennis Ranking Belgium is a small country, but with a few superb exports, e.g., chocolate, beer, and female tennis players. We confidently watch the adventures of Kim Clijsters and Justine Henin on the tennis courts and are happy to read about it in the newspapers. Here is a translated article from a sports site some time ago. Many thanks to Christophe Mues for bringing it to my attention. The article talks about the WTA rankings after a tournament; I do not remember which one, but that is not really important. Here is the text: Kim Clijsters' Tennis Ranking Clijsters becomes the world's number one if she reaches the final, OR if Davenport doesn't reach the final, OR Mauresmo doesn't win the tournament. Lindsay Davenport stays number one if she wins the tournament AND Clijsters doesn't reach the final, OR she loses the final (against another player than Mauresmo) AND Clijsters loses in the semi-finals. Amélie Mauresmo becomes number one if she wins the tournament and Clijsters loses in the quarter-finals. Sounds clear, right? What could be difficult about three simple paragraphs? The text is not only hard to understand, it also contains a number of errors. Why is this text so difficult? A lot of procedures, policies, regulations, legal texts, and manuals are written in the same way. Why are they so difficult to understand and apply? Is it because of the technical language, difficult terms, complex sentences? Can we simplify text by using less complex words and sentences? Partly, but the real reason is different: it is a matter of orientation. Most text (including this example) is written in a conclusion-oriented way, indicating for each of the conclusions what the required preconditions are: - Clijsters becomes number one if … - Davenport stays number one if … - Mauresmo becomes number one if … That is a nice way to specify the knowledge, but it is hard to validate and hard for the reader to apply the knowledge in a specific situation. Usually the situation is even worse, because the text also contains exceptions, default conclusions, etc. What the reader actually wants to know is condition-oriented: if this and this is the current situation, what will be the conclusion? But that is not easy to derive from the text. In order to know this, the reader would have to play rule engine and apply the specification to a given situation. Humans are bad business rule engines Business rule engines are good at applying a set of rules to certain inputs and reaching a conclusion, but humans are not. And that is exactly what the reader has to do when trying to understand and apply the article: match the rules and premises, remember intermediate conclusions, keep track of matching rules, look for facts, etc. Our memory stack is simply too small to bring this to a good end. The validation perspective The article is not only difficult to understand, it is also hard to know if the text is complete and consistent. If the original text is translated into a decision table (Figure 1), it is immediately clear that the text contains some inconsistencies (there can only be one number 1) and is not complete (what happens in column 8?). Figure 1. The ranking rules translated into a decision table. 
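To make the "play rule engine" point concrete, the following sketch (not part of the original article) mechanically applies the three quoted rules to combinations of tournament outcomes, much as the decision table in Figure 1 does. The condition columns are a simplification invented for this illustration, and a few physically impossible combinations are skipped.

```python
# Enumerate tournament outcomes and apply the three quoted ranking rules,
# the way a decision table or rule engine would. Simplified for illustration.
from itertools import product

clijsters_opts = ["reaches final", "loses semi-final", "loses quarter-final"]
davenport_opts = ["wins tournament", "loses final to Mauresmo",
                  "loses final to someone else", "out before final"]
mauresmo_opts = ["wins tournament", "does not win"]

def impossible(c, d, m):
    """Skip a few combinations that cannot occur in a single tournament."""
    if d == "wins tournament" and m == "wins tournament":
        return True
    if d == "loses final to Mauresmo" and m == "does not win":
        return True
    if d == "loses final to someone else" and m == "wins tournament":
        return True
    return False

def number_ones(c, d, m):
    result = []
    # Rule 1: Clijsters becomes number one
    if c == "reaches final" or d == "out before final" or m == "does not win":
        result.append("Clijsters")
    # Rule 2: Davenport stays number one
    if ((d == "wins tournament" and c != "reaches final")
            or (d == "loses final to someone else" and c == "loses semi-final")):
        result.append("Davenport")
    # Rule 3: Mauresmo becomes number one
    if m == "wins tournament" and c == "loses quarter-final":
        result.append("Mauresmo")
    return result

for c, d, m in product(clijsters_opts, davenport_opts, mauresmo_opts):
    if impossible(c, d, m):
        continue
    ranked = number_ones(c, d, m)
    if len(ranked) != 1:  # flag gaps and contradictions
        print(f"{c} / {d} / {m}: rules name {ranked or ['nobody']} as number one")
```

Even this crude enumeration surfaces exactly the problems the article describes: some combinations of outcomes crown two players at once, and others crown nobody at all.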
Validating the text Text can be difficult to understand because it is often written in a conclusion-oriented way, while the reader expects a condition-oriented view. In order to make this translation, humans would have to play rule engine, and they are not good at that.
<urn:uuid:43c485b3-97ce-49cd-92d2-be961ab52ab8>
CC-MAIN-2024-38
https://www.brcommunity.com/articles.php?id=b346
2024-09-20T19:25:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00627.warc.gz
en
0.942834
888
3.46875
3
Edge computing is all about bringing things closer together - a shift which is essential to meet the networking and computing demands of a connected 5G world. Learn what it is, and how it can give you an edge over the competition. What is edge computing? Edge computing is a distributed framework in which compute capabilities (such as the processing, analysis and storage of data for an application) are moved to the 'edge' of a network, geographically closer to where the data is being generated or consumed. This means shorter distances for data to travel, as well as less data being sent back and forth between devices and centralized data centers and congesting the network. As a result, edge computing delivers benefits such as low latency and high bandwidth, plus more control over data sovereignty and handling. What is the edge? In a private edge, computing and storage resources are often deployed at an enterprise premises such as a factory. Edge computing resources can be standalone or integrated with a private network and software-defined wide-area network (SD-WAN). These often have third party applications running on an enterprise proprietary stack or third-party private cloud. In a network edge, computing and storage resources are distributed across communication service provider (CSP) premises, between national, regional and local access sites. These can be standalone or integrated with the mobile cloud (running both telecom and third-party workloads). Edge compute can be seen as an extension of the CSPs existing network capabilities. In an extended public edge, computing and storage resources are distributed from central cloud sites located outside the CSP's premises, for example at a co-location site or a hyperscale cloud provider (HCP) data center. Site and cloud infrastructure is owned by HCPs or third-party cloud providers and others such as information or operational technology (IT or OT) players. Gateway edge is where small computing and storage resources are deployed at an enterprise or consumer-facing premises, or within mobile physical objects, for example trains, ambulances or private vehicles. Gateway edge also includes wireless WAN router-based edge solutions primarily used in enterprise sites, vehicles, Internet of Things (IoT) applications and more. Edge computing vs cloud computing Capabilities and benefits for telecom Higher performance and efficiency With data being processed nearby, there is less latency than with centralized data centers, as well as increased performance and bandwidth for the transport network. Comply fully with jurisdictional data regulations and sovereignty laws by allowing data to be processed locally, or within a particular geographical region. Improved Quality of Experience (QoE) Elevate QoE and enable innovative solutions that use real-time analytics for video processing, low latency for remote equipment control or offloading computational heavy functions to enable slim, lightweight devices such as extended reality (XR) glasses. Real-time data analysis and insights provide a platform for next-level automation, using machine learning (ML) and artificial intelligence (AI) technologies for time-critical decision-making. Enhanced security and privacy On premise edge computing helps insulate those using private networks from cyberattacks and threats, while also enjoying less risk of data being intercepted in transit, improving security and privacy. 
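As a rough back-of-the-envelope illustration of the latency benefit described above (the numbers are illustrative, not Ericsson figures): light in optical fibre travels at roughly two-thirds of its speed in vacuum, so every 1,000 km of path adds about 5 ms in each direction before any queuing or processing is counted.

```python
# Rough round-trip propagation delay: nearby edge site vs distant central cloud.
# The distances and the two-thirds-of-c fibre speed are illustrative only.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # approx. 2/3 of the speed of light

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

for label, km in [("local edge site", 50),
                  ("regional site", 500),
                  ("distant central cloud", 3000)]:
    print(f"{label:>22}: ~{round_trip_ms(km):.1f} ms of propagation delay alone")
```

Radio scheduling, queuing and processing add more on top, but the distance term alone shows why use cases with budgets of a few tens of milliseconds need compute placed close to the user.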
Demand driver | Edge capability in 5G
Application latency | With the app closer to the user and 5G radio, the latency can be reduced, supporting new use cases.
Application exposure | The new 5G core will also offer application exposure for edge deployments.
Transport offload | 5G bandwidths may increase traffic further. Service delivery from the edge will minimize the backhaul traffic.
Processing offload | Application processing at the edge will offload devices and central datacenters while preserving user experience.
Use cases and applications Manufacturing, healthcare and gaming and entertainment are three of the top vertical industries with enormous potential when it comes to edge computing. However, the early maturity level and long time-to-market common in the manufacturing and healthcare industries mean the reality looks slightly different. Gaming and entertainment use cases stand out among the most advanced and market-ready to date, together with real-time analytics, autonomous and connected vehicles and video optimization.
Use cases (non-exhaustive) | General overview
CDN-related cases | Latency: 100ms-1s+. Bandwidth: high
Video processing | Latency: 100-200ms. Bandwidth: high. Safety and regulations
Manufacturing, heavy industry plant applications | Latency: 1ms-1s. Bandwidth: variable. Reliability, regulations, data privacy
Business park and city offices, retail shops | Latency: 30ms-1s. Bandwidth: variable. Reliability, safety
Cloud gaming services | Latency: 30ms-1s. Bandwidth: variable. Reliability, safety
Data collection and processing (including AI/ML) | Latency: 100ms-1s. Bandwidth: high. Data processing distributed
Vehicles | Latency: 10-100ms. Bandwidth: mid/high. From private cars to AGVs
XR (AR/VR/MR) | Latency: 10-50ms. Bandwidth: high. Availability and complex processing
(The original table also rated the likely application placement location, private edge versus network edge, for the short to mid term, from higher to very low probability/deployment ratio.)
The challenges and opportunities of edge How to effectively deploy edge computing There are five interdependent key areas that have been identified by the standards, CSPs and analysts to be the most significant methods of defining and deploying an edge computing solution.
<urn:uuid:5a7cd1fb-facc-4871-a190-37fb1d2d0dd0>
CC-MAIN-2024-38
https://www.ericsson.com/en/edge-computing
2024-09-20T19:48:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00627.warc.gz
en
0.903541
1,185
3.421875
3
People learn in different ways, at different speeds, and using different methods. The appropriate method depends on one's own learning preferences. Some people prefer to learn by listening, others by watching a video, and others by reading. These different methods also vary depending on the subject we're learning and the context, and while some people think of themselves as visual learners, it's been shown that we all use various methods at different times to learn. It is obvious that the different preferred methods of learning should play a major role in the design of a strong security awareness service. The German psychologist Hermann Ebbinghaus did a study (with himself as the subject), published in 1885, on the retention of learning over time and various ways to improve it – the Forgetting Curve. A subsequent study was published in 2015 which largely upheld Ebbinghaus's findings. The basic premise of the forgetting curve is that if you learn something only once, you'll start forgetting it quite quickly; repeated learning, on the other hand, helps you retain relevant knowledge. This is called spaced learning. You can also improve knowledge retention by ensuring that what you're learning has meaning for you and by checking your knowledge regularly. In view of the findings of the forgetting curve according to Dr. Ebbinghaus, we need to take both learning methods and retention into account when we plan security awareness training. The art of a good security training program is to involve all participants, regardless of which learning method is preferred and what the individual learning speed is. This is a real challenge, to which we at Hornetsecurity dedicate ourselves with our Security Awareness Services. The knowledge that is imparted in training courses within a company can be quickly forgotten if the training isn't adequately designed and planned. Practice Makes Perfect In a nutshell, Ebbinghaus's Forgetting Curve states that learning content must be repeated several times before it is permanently memorized. The forgetting curve itself illustrates the degree of failing to remember as a function of time. The more time passes, the more is forgotten. After just one day, only half of what has been learned remains in the memory, and after two days, only a third. One week after learning, on the other hand, the memory capacity has already shrunk to 23%; less than 15% of what has been learned is permanently stored. Is learning then of any use at all? The answer is yes, if you do it right. If you learn regularly and practice or repeat what you have learned over and over again, knowledge retention is vastly improved; this is called the spacing effect. We at Hornetsecurity make use of this knowledge with our Security Awareness Service. The training service is structured in a sustainable way to mitigate the effect of the forgetting curve. One strength of our blended learning approach, which keeps the learners engaged, is the Employee Security Index (ESI); please download our free ESI® Benchmark Report for more information. Through this benchmark report, we can individually consider the training goals of each participant and thus create an optimal learning curve. In this way, we succeed in anchoring the important topic of security awareness in the minds of the participants. The New Normal The days of a company's data being stored on servers in its own datacentre, only accessible from systems under IT's control, are long gone. 
The Covid-19 pandemic has accelerated digital transformation in businesses everywhere, and nowhere is this more pertinent than in the work-from-home reality. This brings additional cybersecurity challenges, including making a thorough onboarding process harder to deliver. Hornetsecurity recently conducted a survey of over 900 IT professionals about remote management challenges. 80% of respondents believe that remote working introduces extra cybersecurity risks, and 75% are aware that personal devices are used to access sensitive company data. Read the full report for free here. Today's distributed workforce also increases the need for frequent security training events, the adoption of a learning culture in the company through an open learning environment, and the encouragement of knowledge sharing. Automatisms are the goal A good security awareness training program repeats the content so often that the knowledge is anchored in long-term memory and automatisms are formed in everyday life. Automatism means that actions are performed without conscious thought; for example, the sender of each e-mail and its attachments are carefully checked before opening. If IT security training is only undertaken once – for example, in the form of a block training session – it is highly likely that the participants will have forgotten most of the content after just one week. In other words, if the training isn't repeated, at the end of the day the old, simpler, faster behaviors will reappear – and prevail. Frequent learning should therefore be an ongoing process to combat the forgetting curve. The reason for the training must be repeated over and over again. The training must happen again and again. And the content must be presented to the participants again and again using a wide variety of media, including mobile learning. There's an old military saying, "eternal vigilance is the price of peace", which we can co-opt and use to emphasize the importance of complete training for all staff. All this is taken into account in our training content, with a learning experience created for memory retention. Hornetsecurity involves the staff at every level of knowledge, making it clear why security awareness is so important and ensuring that the content learned is repeated. Best of all, it's mostly automated, with very little administrative work from you in ensuring that all staff get to prioritize training to reinforce learning and that they retain information permanently. By combining well-trained and appropriately suspicious users (who are less likely to fall for phishing emails and other social engineering attacks) with a strong Advanced Threat Protection solution such as Total Protection by Hornetsecurity, security teams will be well on their way to achieving complete cyber protection for their organization. Curious to know more about our Security Awareness Services? Then contact us now to request a demo and turn your employees from a risk factor into a security asset.
<urn:uuid:243f99e3-7842-476b-b195-ca1188c7c5b9>
CC-MAIN-2024-38
https://www.hornetsecurity.com/en/blog/why-cyber-awareness-training-is-an-ongoing-process/
2024-09-20T21:33:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00627.warc.gz
en
0.954145
1,258
2.921875
3
Supercharging the Smart City with AI-Enhanced Edge Computing The integration of AI-enhanced edge computing in smart cities revolutionizes urban management, optimizing resource allocation, enhancing security, promoting sustainability, and fostering citizen engagement. That ultimately leads to a higher quality of life for residents. A smart city utilizes information and communication technologies (ICT) to enhance the efficiency of urban services, optimize resource use, and improve the overall quality of life for its residents. This involves data captured by sensors, automated processes, and data analytics, all converging to help city officials make better decisions for sustainable and intelligent urban development. Smart cities aim to address challenges related to transportation, energy, healthcare, and other key aspects of urban living through innovative technological solutions. Smart city tech also offers people-centric solutions such as improved communications in the event of an emergency or to facilitate conversations with government or utility representatives. AI-enhanced edge computing makes cities even smarter As smart city capabilities continue to evolve, large amounts of data are generated by end users. You might envision a smart energy grid with thousands of embedded sensors to measure power metrics. Efficiently collecting, analyzing, and putting all this data to use can be demanding, to say the least. Edge computing solves this problem. By capturing and processing end-user data, edge technology enables adjustments to occur with dramatically reduced latency. Instead of making the long trip to a data center or cloud, compute happens near the data source. This means city administrators get optimized, actionable insights in real time. Now, AI-enhanced edge capabilities have raised the bar even higher for decision-making speed, efficiency, and autonomy. Not only is the data analyzed with reduced latency, but follow-up actions can occur automatically, guided by AI processes running directly on connected devices at the network edge. What can AI-enhanced edge computing actually look like in a smart city? Let’s highlight some examples of intelligent city applications that are being used now or will be available in the near future. Traffic management systems: Sensors and video cameras can track and respond to real-time traffic patterns with AI-enhanced edge systems, automatically rerouting vehicles away from heavy congestion, road repair, or accidents. Fire trucks and ambulances also arrive faster, potentially saving lives. Smart parking solutions: Vehicles can be diverted automatically to areas where ample parking is available. Less idling and circling means better air quality for residents. Public transportation system: By tracking passenger flow and volume, buses, and trains can be added or removed in short- and long-term time frames to match demand. People can get where they need to go faster, while the city can save money at times when demand drops. Smart grids: AI-equipped sensors and edge servers optimize energy output, distribution, and storage. This saves energy and avoids outages. Energy-efficient street lighting: Lights turn on only when and where they are needed, guided by light sensors. Lights last longer, and energy costs drop. Smart water meters: Provides accurate data on water usage at the edge in real time to understand usage patterns and save water. 
Flood monitoring systems: Water levels monitored 24/7 with alerts going out to emergency task forces without delay can result in better citizen protection. Air quality sensors: Continuous monitoring enables air quality warnings to be faster and more accurate. This can lead to lower asthma rates and reduced healthcare costs. Green infrastructure: Natural and semi-natural green spaces leverage edge computing-controlled water management to protect, restore, or mimic natural water cycles to protect the environment. Building energy optimization: Real-time data collection and analysis provides managers with the ability to automatically divert energy to meet needs-based consumption for more efficient buildings. Waste management: Optimized collection and recycling routes, all monitored and controlled with automated solutions. Smart bins alert only when full to avoid extra truck stops. Computer vision: Cameras monitor for anything from traffic violations to more serious crimes to make neighborhoods safer and law enforcement response faster. Emergency response systems: Emergency calls, fleet management, and incident tracking all integrate with emergency services at the edge for rapid crisis response. Law enforcement: Modern AI-enhanced edge systems can track areas of high crime rates and optimize police resources to make the streets safer. Smartphone apps: Traffic assistance, wayfinding, utility outage notification, and event planning can all be enhanced with AI and edge computing. Map apps can allow people to send location-based complaints for easy government official follow-up. Open data platforms: City planning, job creation, and modernized education and healthcare systems can all benefit from open data initiatives. By pinpointing inefficiencies and inequalities, essential services and facilities are improved. Digital Twin: Digital twins are digital models of a city's terrain, buildings, and infrastructure. A digital twin can simulate parameters like jobs, deliveries, traffic, and pollution. This enables a real-time understanding of a city and can be used to preview the potential impact of new policies or infrastructure projects. The human face underlying the technology In the smart city, residents are happier because they live in a sustainable environment where everything simply works better. City managers are thrilled as well since urban administration efforts are more efficient and cost effective. And work gets done faster in an automated fashion thanks to AI-enhanced edge solutions. People will choose to live where there's a better quality of life. They want less traffic, less pollution, more security, and cheaper energy. They want their city to be smarter. And AI at the edge makes that possible now. Carrie Tuttle is Senior Product Marketing Manager of PowerEdge at Dell Technologies. Learn more about what Edge Computing can do for your organization – and your city.
<urn:uuid:111c285c-2f68-4c2e-a52e-8b26a6b9a806>
CC-MAIN-2024-38
https://www.networkcomputing.com/data-center-networking/supercharging-the-smart-city-with-ai-enhanced-edge-computing
2024-09-20T20:27:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00627.warc.gz
en
0.910385
1,192
2.765625
3
When it comes to cybersecurity, any organization is only as strong as its weakest link. It may have invested in the best email security solutions, information security, web security solutions, and Advanced Threat Protection (ATP) on the market. It may also have trained its employees to recognize and react to cyber-attacks and put in place the processes to deal with social engineering lures. But if one of its suppliers or partners has a less than rigorous approach to cybersecurity, all that good work could be put to waste. It only takes one person at one company to lose focus for a moment, and a virus could be working its way through hundreds or even thousands of other organizations. When you consider how inter-connected the business world is now and how much collaboration takes place, there are numerous potential vulnerabilities. There have been many supply chain cyber-attacks over the last two years, and such attacks are only going to increase. But when collaboration and interconnected supply chains are so common, how can organizations hope to keep protected? The rise in supply chain cyber-attacks Supply chain cyber-attacks are a growing trend in cybersecurity in terms of both the volume of attacks that occur and their impact. A recent example of this was the management software firm Kaseya. Kaseya fell victim to a ransomware attack initially thought to have affected less than 40 of its customers. However, a security response firm said three managed service providers it worked with had also been exposed to the attack and that, in total, more than 200 companies were affected. It is thought that the Russia-linked REvil group was behind the attack and also that the final tally of affected parties could be much higher after a supermarket chain closed almost 800 stores after one of its contractors became a target. But this is far from the only example, and attackers will stop at nothing to gain access to an organization’s wider connections across its supply chain. Modes of supply chain cyber-attack Ransomware is a popular mode of attack with which cyber-criminals can target supply chains. Fortra research in 2020 revealed that a lack of awareness among UK public sector employees around cybersecurity was leaving it vulnerable to ransomware attacks. The ease with which ransomware can gain access to an organization’s systems and then spread rapidly across the supply chain is a major cause of concern. Phishing is another common way for cyber-criminals to target a supply chain. Spear-phishing and social engineering techniques are increasingly popular, and emails appear to look like they came from someone known to the recipient. They contain malicious URLs hidden in attachments, files, documents, and images, that when opened, release malware into the network and on through the supply chain. The consequences of such attacks are severe. Not only can they bring an entire supply chain down in just a few moments, disrupting operations and leaving organizations vulnerable to ransomware demands, but there is the long-term damage to reputation to consider. If vulnerabilities on the part of a smaller supplier were responsible for an attack, would a bigger organization remain keen to work with that organization in the future? Would it put off other potential partners and collaborators if they knew that company’s cybersecurity was putting at risk the entire supply chain? 
Fortra's research with Financial Services (FS) CISOs in Q4 2020 revealed that cybersecurity weakness in the supply chain had the potential to cause the most damage in the next 12 months, according to nearly half of respondents. It's a big problem that requires an immediate solution. Implementing a cyber-supply chain risk management strategy Such is the threat posed by supply chain cyber-attacks that many organizations have started implementing a cyber-supply chain risk management strategy (C-SCRM). This is a process that acknowledges the threat in any supply chain by identifying, assessing, and mitigating the risks associated with it. It's partly a technological solution, but also a behavioral shift that needs to permeate through the entire organization. Trust plays an important role when considering any new partner or supplier. But more rigorous onboarding is increasingly important and should always include thorough due diligence to ensure that the new company's cybersecurity is strong enough. C-SCRM needs to be approached as a cybersecurity practice of the highest importance. It won't be effective if it's an afterthought or is not given the required time or resources. It's a crucial element of cybersecurity and must be treated as such. It also involves a commitment to knowing your supply chain. Who are your leading suppliers, partners, and collaborators? What cybersecurity measures do they have in place, and are they GDPR compliant? Who else do they partner with, and what are their own cybersecurity criteria for those partners? Maintaining supply chain cybersecurity When an organization has this level of understanding, it can put in place controls tailored to each supplier's criticality and cybersecurity practices. Such C-SCRM can be very effective but is only one part of a broader cybersecurity strategy. Cybercriminals will not stop targeting the supply chain, so all parties must work together to maintain defenses. The right solutions are a vital part of that process, and Clearswift works with many organizations globally to keep their supply chains secure.
<urn:uuid:13ecb8e8-af14-46f5-9703-a793dfe2b031>
CC-MAIN-2024-38
https://emailsecurity.fortra.com/blog/cybersecurity-risks-supply-chain-are-leaving-organizations-vulnerable
2024-09-08T17:22:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00891.warc.gz
en
0.967925
1,073
2.703125
3
What is an Entity-Relationship (ER) diagram? Also known as an Entity Relationship Model, an ER diagram is a graphical representation of the relationships between the various entity sets stored in a database. Simply put, these diagrams explain the logical structure of databases. They can make it easier to understand and communicate the database to others. Any object, person, or concept, their attributes, and the relationships between them can be characterized and shared with the help of an ER diagram. In this post, we'll go through a step-by-step guide on creating an ER diagram. By the end, you'll understand the various aspects of these diagrams and know the basics of creating, reviewing, and refining your chart so it is ready to use. Whether you're doing this for the first time or are an experienced database designer, this guide is designed to help you understand, create, and use ER diagrams more effectively. The Building Blocks of ER Diagrams The first step is understanding the various parts involved in an ER diagram. These are: - Entities: Objects or concepts that hold data. This may be a place, person, thing, or event relevant to the database. These are often represented by squares or rectangles on the diagram. - Attributes: These describe the characteristics of an entity and hold specific information about it. They are generally represented by ovals on the diagram. - Relationships: These define the connection between two or more entities. They describe how entities interact with one another, such as one-to-one, one-to-many, or many-to-many relationships. On the diagram, they will be represented as diamonds. It's crucial to understand these relationships to effectively create an ER diagram. Once you have these building blocks in place, you can use an ERD tool to create the diagram. Let's take a further look at the different aspects of an ER diagram and the information you should have before using a tool for diagramming. Entities and Attributes The first step in using an ER diagram is to identify entities. To do this, consider objects or concepts that hold data and are relevant to the database. For example, consider the categories or types of information that you need to store. Some examples of entities might be customers, products, and orders. Once the entities are identified, the next step in the process is to identify their attributes. These should hold specific, descriptive information. You can ask several questions to identify an attribute, but an excellent place to begin is by considering what information you want or need to store about each entity. Consider the properties or characteristics that are need-to-know. For example, if your entity is 'customers', attributes could include names, email addresses, addresses, or past order types. Defining Entity Relationships Once entities and attributes are identified, you should define how the entities relate to one another. There are three main types of relationships between entities, illustrated in the short sketch after this list: one-to-one, one-to-many, and many-to-many. - One-to-one: In this type of relationship, one entity is related to just one instance of another. For example, each customer may have one unique address. - One-to-many: One entity is related to various instances of another in this type of relationship. For example, a customer may have several email addresses. - Many-to-many: With a many-to-many relationship, multiple instances of one entity are related to multiple instances of another entity. For example, a customer can purchase many products, and a product can be purchased by many customers. 
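As a small, purely illustrative example (not part of the original guide; the entity and attribute names are just the ones used informally above), here is how these three cardinalities might be expressed in code before being drawn as a diagram:

```python
# Toy data model showing the three relationship types from the list above.
# Names and attributes are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Address:
    street: str
    city: str

@dataclass
class Customer:
    name: str
    address: Address                                  # one-to-one: one unique address
    emails: List[str] = field(default_factory=list)   # one-to-many: several email addresses

@dataclass
class Product:
    name: str
    price: float

@dataclass
class OrderLine:
    """Associative entity resolving the many-to-many relationship:
    a customer can purchase many products, and a product can be
    purchased by many customers."""
    customer: Customer
    product: Product
    quantity: int

alice = Customer("Alice", Address("1 Main St", "Springfield"), ["alice@example.com"])
widget = Product("Widget", 9.99)
print(OrderLine(alice, widget, quantity=2))
```

On the diagram itself, Customer, Product, and Address would be drawn as rectangles, their fields as ovals, and the connections between them as diamonds, with the order line acting as the associative entity that resolves the many-to-many relationship.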
Understanding these relationships is crucial as they determine how entities interact with one another in the database. Ultimately, this impacts the structure of the ER diagram. How to Create an ER Diagram Once you have defined the entities, attributes, and relationships between them, it’s time to put the visual diagram together. To do this, you can use a diagram tool, as mentioned above. Choose a tool that you find easy to use and understand. The steps typically involved in this include: - Place the entities: Start by placing the entities on the diagram, either in radial or linear format. Make sure to clearly label each entity and leave enough space to add attributes. - Add attributes: Add the attributes for each entity and ensure that they are linked to the ones they relate to. Make sure that they are clearly labeled and specify the data types. - Define relationships: Using symbols and lines, clearly define the relationships between entities. Ensure they are labeled to specify the type of relationship – for example, one-to-one or many-to-many. - Review and refine: Review the ER diagram you’ve created. Ensure that it is an accurate representation of the relationships described within. If edits or refinements are needed to improve the clarity and accuracy of the diagram, make them. An ER diagram can be a very effective way to visually show the relationships between a wide range of entities. We hope this guide has helped you start using ER diagrams in your work. ABOUT THE AUTHOR IPwithease is aimed at sharing knowledge across varied domains like Network, Security, Virtualization, Software, Wireless, etc.
<urn:uuid:186f0651-aeb3-477b-ab98-1cbf616a23a4>
CC-MAIN-2024-38
https://ipwithease.com/what-is-an-er-diagram/
2024-09-08T16:18:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00891.warc.gz
en
0.93055
1,086
3.890625
4
AI driven Google searches could require as much electricity as it takes to power a country the size of Ireland, Europe is demanding Elon Musk and Mark Zuckerberg take immediate action against disinformation and a Polish startup features the technology that Google thought was too dangerous to release. These and more top tech stories on Hashtag Trending. I’m your host Jim Love, CIO of IT World Canada and Tech News Day in the US. The surge in interest and application of large language models (LLMs) and generative AI has sparked concerns about a potential spike in datacenter electricity consumption, according to a paper by Alex de Vries, a researcher at a university in Amsterdam. The paper highlights that while the training phase of AI models is often scrutinized for its energy use, the inference phase, or operational use of the trained model, might also significantly contribute to an AI model’s life-cycle costs. For instance, to support ChatGPT, OpenAI required 3,617 servers with a total of 28,936 GPUs, implying an energy demand of 564 MWh per day. If we got the math right, it would power about 20,000 US households. But even more disturbing is a quote from Alphabet’s chairman that if every search became a Large Language Model transaction, it would “likely cost 10 times more than a standard keyword search.” The researcher extrapolated from this that the electricity needed could equate to Ireland’s annual consumption of 29.3 TWh. However, the paper also acknowledges that this represents a worst-case scenario, assuming full-scale AI adoption with current technology, which is “unlikely to happen rapidly.” But the paper also notes that what is referred to as the Jevons paradox might apply – where increased efficiency stimulates demand, potentially wiping out any savings from that new technology. Sources: The Register The European Union (EU) has issued stern warnings to both Elon Musk’s X and Mark Zuckerberg’s Meta about the proliferation of misinformation and “violent and terrorist” content, particularly in the context of the Israel-Hamas conflict. The EU, through its industry chief Thierry Breton, has mandated that these platforms demonstrate “timely, diligent, and objective action” in countering the spread of disinformation and comply with European law, under the Digital Services Act and Terrorist Content Online Regulation, which requires monitoring and removal of illegal content. The platforms have been given a 24-hour window to respond and detail the measures taken. This comes amidst a surge of doctored images, mislabeled videos, and misleading content circulating on these platforms, sowing confusion and tension amidst the ongoing conflict. Some agencies are not waiting for action to be taken. The Federal Anti-Discrimination Agency (FADA) of Germany has pulled out of X, citing a significant increase in hate speech and various forms of hostility since Musk took ownership. Mozilla, the entity behind the Firefox browser, has moved to defend against another form of disinformation – namely fake reviews. It has integrated a “fake reviews detector” into its platform, following its acquisition of Fakespot in May. Fakespot, a startup specializing in identifying fake reviews and news through its website and browser extension, has been utilized to detect fraudulent reviews on platforms like Amazon, Yelp, TripAdvisor, Walmart, and eBay, employing an A-to-F grading scale. The Review Checker feature is slated for release on Firefox version 120 for desktop and Android on November 21, 2023. 
Fakespot utilizes advanced AI and ML systems to identify patterns and similarities among reviews, flagging those likely to be deceptive and potentially hindering efforts to artificially boost product rankings through fake reviews using AI technologies like ChatGPT. PimEyes, a website developed by a Polish startup, offers facial recognition tools available to the public. These tools allow users to upload a photo of a person’s face and, using AI, scans the internet to find images of that person, even those they might not be aware exist. There is a free version of the software and a paid version that alerts users when a new photo appears online. While it claims to help people monitor their online presence, it has sparked controversy due to its potential use as a surveillance tool for stalkers, its collection of images of minors, and adding pictures of deceased individuals to its database without consent. PimEyes CEO, emphasizes that the tool does not identify people but identifies websites that feature images similar to the search material. However, privacy advocates and experts express concerns about the potential misuse of such technology, especially in the absence of federal laws governing facial recognition technology in the U.S. Despite their disclaimers, PimEyes now blocks access in 27 countries, including Iran, China and Russia, over fears government authorities could use the service to target protesters and dissidents. A journalist with the Hill Times quotes Eric Schmidt as far back as 2011, as saying “this was the one technology that Google had developed and decided to hold back, that it was too dangerous in the wrong hands.” That’s the top tech news stories for today. For more fast reads on top stories, check us out at TechNewsDay.com or ITWorldCanada.com on the homepage. Hashtag Trending goes to air 5 days a week with a special weekend interview show we call “the Weekend Edition.” You can get us anywhere you get audio podcasts and there is a copy of the show notes at itworldcanada.com/podcasts I’m your host, Jim Love – have a Thrilling Thursday.
<urn:uuid:e6376e33-1c6d-4cf1-8ab4-078477473d32>
CC-MAIN-2024-38
https://www.itworldcanada.com/article/hashtag-trending-oct-12-ai-driven-google-searches-could-require-large-amounts-electricity-europe-is-demanding-musk-and-zuckerberg-to-take-action-against-disinformation-polish-startup-features-tech/548849
2024-09-09T22:34:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00791.warc.gz
en
0.945055
1,173
2.515625
3
10 Things You Never Knew About Death

Here is a list of 10 things that you never knew about death.

10 Things About Death
- When a person dies, hearing is the last sense to go - the first is usually sight, followed by taste, smell, and touch.
- A human head remains conscious for about 15 to 20 seconds after it has been decapitated.
- 100 people choke to death on pens each year. One is more likely to be killed by a champagne cork than by a spider.
- Alexander the Great's funeral would have cost $600 million today. A road from Egypt to Babylon was built to carry his body.
- When inventor Thomas Edison died in 1931, his friend Henry Ford captured his last dying breath in a bottle.
- Over 2,500 left-handed people are killed each year from using products made for right-handed people.
- It takes longer than ever before for a body to decompose, due to preservatives in the food that we eat these days.
- An eternal flame lamp at the tomb of a Buddhist priest in Nara, Japan has kept burning for 1,130 years.
- Star Trek creator Gene Roddenberry was the first person to have his ashes put aboard a rocket and 'buried' in space.
- Japanese factory worker Kenji Urada became the first known fatality caused by a robot in July 1981, in a car plant.
<urn:uuid:c6e08cb0-0654-41f8-a439-90a8cdb8ae80>
CC-MAIN-2024-38
https://www.knowledgepublisher.com/article/468/10-things-you-never-knew-about-death.html
2024-09-11T04:15:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00691.warc.gz
en
0.980119
293
2.703125
3
We look at Finnish innovation in the Space technology sector and how it is leading the way to a brighter future for earth. The new space economy is taking a giant leap as space technology turns towards improving the future of life here on earth. As the sector grows, innovative Finnish companies are leading the way using their digital and tech-savvy expertise and stellar engineering skills to bring space back down to earth. According to Morgan Stanley’s Space Team, the global space industry will surge to over $1 trillion by 2040, as companies use space technology and innovation to solve issues that impact our lives, including climate change, rising sea levels, wildfires, and ice melting rates. Finland is fast becoming a technological superpower with innovations that have led to the rise of ‘Astro-preneurship’ and the establishment of the New Space Economy. Independent space companies are set to reshape traditional business models, develop faster and cheaper access to space than ever before, and advance earth observations far beyond today’s satellite capabilities. New space initiatives from Finland-backed start-ups include microsatellites that monitor the impact of climate change on the environment, such as floods and natural disasters. One such start-up is ICEYE. The business, which has almost reached ’unicorn status,’ is disrupting traditional earth imaging with its synthetic-aperture radar (SAR) microsatellites. The SAR technology can deliver reliable imaging data at any time and in any weather, even during darkness and overcast conditions, making it a fitting solution for object detection, target tracking, activity monitoring, and more. “It’s been an incredible journey for us at ICEYE, and we are extremely proud of the progress our team has made in such a short amount of time,” says Rafal Modrzewski, ICEYE’s CEO. “This year, we’ve launched seven additional spacecraft, and ICEYE is now the operator of the world’s largest SAR constellation in the world. We also signed several important partnerships and reached significant milestones, such as our work with the European Space Agency, where we joined the Third Party Missions data portfolio and also became the first European New Space company to officially join Copernicus, the largest satellite Earth observation program in the world.” Another dynamic space tech innovation is Aurora Propulsion Technologies, a Finnish company dedicated to the sustainable use of space. Partnering with the US company Rocket Lab, Aurora is preparing to launch a satellite to test ‘space junk’ removal technologies. Space junk refers to debris from objects such as satellites that are still in space but are no longer functional. Experts estimate that millions of pieces of debris are currently orbiting the earth, with the declining cost of space technology likely to result in even more. The AuroraSat-1 CubeSat contains systems that will help bring satellites back to earth. Also developing space satellite technology to support the sustainable use of natural resources is VTT, one of Europe’s leading research institutions. Hundreds of satellites collect daily global information from the earth’s surface and atmosphere to support the sustainable use of natural resources. VTT’s remote sensing technology includes hyperspectral imaging used by Finnish company Kuva Space. Kuva Space is building the world’s most effective service for global daily data on vegetation and soil to help manage natural resources and grow the economy more sustainably. 
The technology is the first infrared hyperspectral imager ever flown on a nanosatellite and the smallest ever to be flown in space (at less than 500g).

The space sector's value to life on earth cannot be overstated. Satellites that orbit our planet provide the most accurate location data and weather reports, and predict storms. They monitor our climate, all day every day, providing valuable data on climate change and its effects, such as rising sea levels, wildfires, and atmospheric changes. They connect millions of people and can connect countless more who lack access in isolated areas; they help us identify and prevent illegal fishing and deforestation and help to ensure the security of states by monitoring and verifying actors' behavior.

Finland is already a leader in ground-breaking digitalization, connectivity, the real-time data economy, AI, smart cities, and smart mobility innovations. Thanks to its world-class research capabilities, technology expertise, and accommodating business climate, it is now supporting, funding, and developing innovative businesses to significantly accelerate the New Space Economy in the coming years.

"Finnish people have a connection with nature, which is rooted in our culture – it is intrinsic for us to respect the planet and keep it clean. That's why we are using our expertise and knowledge to create New Space Technology and innovations that put sustainability at the heart of everything we do and push the frontiers of our understanding of health and material science, robotics, and other technologies," said Markus Ranne, Head of New Space Economy, Business Finland.

Ranne continues, "Finland is committed to bringing new space companies and non-space companies together to create business opportunities and change the way of working. Taking care of space is critical for our life on earth. Only through collaboration and global partnerships can we develop technologies and innovations that will help ensure this precious resource is secured for future generations."

Following the Space Tech Expo Europe in Bremen, Germany, on 16-18 November, Finland calls upon start-ups operating in the New Space Economy to join its mission to develop technologies and innovations that will help businesses, societies, and countries reach their climate change goals and challenges.

Finnish companies are now seeking new partners for cross-border business development and want to collaborate with British companies to develop Europe's digital and sustainable future. Finland offers a stable business environment for start-ups and international businesses: highly educated and tech-savvy talent, stable infrastructure for testing, and the best know-how in digital technologies. The country has also been voted the happiest globally and offers attractive work opportunities and a high quality of life.
<urn:uuid:d677288d-bcf1-4b14-a779-585f4fbcb3f7>
CC-MAIN-2024-38
https://tbtech.co/space-technology/space-economy-finnish-innovation/
2024-09-17T07:53:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00191.warc.gz
en
0.931116
1,297
2.71875
3
The Impossibility of Measuring IOPS (Correctly) If you have ever used Sysinternals’ Process Monitor, chances are high you were a little intimidated when you looked at your first capture: it probably contained hundreds of thousands of registry and file system events, generated in a minute or less. That amount of activity must surely indicate high system load – but strangely, very often it does not. Looking at the hard disk LED you will only see an occasional flickering, even though thousands of file system events are captured per second. How is that possible? Read on to find out. The main problem with measuring I/O operations per second (IOPS) is how to define what an I/O operation (short: IO) actually is. Depending on where you look, IOs can be entirely different things. Take a typical application. When it wants to write to a file it calls the appropriate function from the framework the developer chose to make his life easier. In case of C++ that function might be fputs. From the application’s point of view, each call to fputs constitutes an IO. But that does not mean that the IO even reaches the disk. There is still a long way to go to permanent storage. On the way the IO could be cached, redirected, split, torn apart, and put back together. Let’s travel down the layers and see what happens. To prevent applications from saturating the disk with many small IOs the framework buffers IOs until they reach 4K in total size. Then the data is flushed, aka written to disk in a single operation. This happens by calling the Windows API function WriteFile. From the point of view of the framework, each call to WriteFile constitutes an IO. The WriteFile call is processed by the kernel which has no intention of hitting the disk with everything user-mode application developers manage to come up with. So it buffers the framework’s data in the file system cache and spawns a background process to deal with it later. This so-called lazy writer evaluates the data in the cache and writes it to disk as it deems necessary. From the point of view of the lazy writer, each cache flush constitutes an IO. Before the lazy writer’s data can be written to disk it must be processed by the file system driver (typically ntfs.sys). The driver might find it necessary to not only write the actual data to the disk but also to update the file system metadata, e.g. the master file table (MFT). When that happens the number of IOs required to store the data increases. Writing data to a file is not a simple each layer reduces the number of IOs by x percent type of scenario. Now that we have reached the hardware level it is surely safe to assume that the IO is not manipulated any further, making this a good place to take measurements? Let’s see. In the simplest case, the disk is a physical hard disk. But even with plain HDDs, there is yet another cache, and there is Native Command Queuing (NCQ, IO reordering to minimize head movements), both changing the IO on its way to permanent storage. So the only way to correctly measure IOPS would be on the disk platter. But what to measure? Head movements? What about SSDs, solid-state drives that thankfully do not have moving heads? What about virtual disks? The device seen by the OS as a physical hard disk might not be physical at all. It could be a LUN in a SAN spread across many physical disks – yet another set of layers. I hope I could make the point that there is no such thing as the IO; neither can there be a single definition of IO throughput (aka IOPS). So how do we measure IOPS? 
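Before turning to how a monitoring tool answers that question, the buffering behaviour described above can be made tangible with a small sketch. This is an illustration written for this article's argument, not a description of how any particular product works; the 4 KiB buffer size is an assumption chosen to mirror the framework example earlier on:

```python
import io
import os

class CountingRaw(io.RawIOBase):
    """Unbuffered file writer that counts actual OS-level write calls."""
    def __init__(self, path):
        self._fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        self.os_writes = 0

    def writable(self):
        return True

    def write(self, b):
        self.os_writes += 1                  # one "IO" as the OS sees it
        return os.write(self._fd, bytes(b))

    def close(self):
        if not self.closed:
            os.close(self._fd)
        super().close()

raw = CountingRaw("out.log")
buffered = io.BufferedWriter(raw, buffer_size=4096)   # 4 KiB buffer (assumed)

for _ in range(10_000):                      # 10,000 "IOs" as the app sees them
    buffered.write(b"short line of log data\n")

buffered.close()                             # flushes the final partial buffer
print("application-level writes: 10000, OS-level writes:", raw.os_writes)
os.remove("out.log")
```

Run it and roughly 240 KB of application data turns into only a few dozen OS-level writes — which is exactly why counting "IOs" at different layers gives such different answers.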
How does our monitoring tool uberAgent for Splunk measure IOPS? uberAgent measures IOPS right before they are handed off from the operating system to device-specific drivers. In other words: the data it collects is as accurate as it can be without interfacing with the hardware directly. The cool thing about what uberAgent does is that it is capable of mapping each IO to an originating process – and thus to a user, a session, and an application. As a result, uberAgent can show you how many IOs each of your applications generate. It can do the same for each user session, too, of course. Measuring IOPS is harder than it may seem. It depends on many factors: the access pattern is very important, but so is the layer at which the measurement is taken. When you have decided how to do it and arrive at a number, you still do not have the single handle to disk performance. There are also throughput and latency to consider. uberAgent for Splunk gives you the information you need to understand what is going on in your systems. No more, no less. Download and try it yourself.
<urn:uuid:1ed1e735-63e6-4941-be64-74e2ea02b97b>
CC-MAIN-2024-38
https://helgeklein.com/blog/the-impossibility-of-measuring-iops-correctly/
2024-09-18T14:31:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00091.warc.gz
en
0.944498
1,024
3.0625
3
The modern CISO The role of CISO first emerged as organizations embraced digital revolutions and began relying on new data streams to help inform business decisions. As technology continued to advance and became more complex, so too did threat actors who saw new opportunities to disrupt businesses, by stealing or holding that data hostage for ransom. As the years have gone by and cyberattacks have become more sophisticated, the role of the CISO has had to advance. The CISO has evolved from being the steward of data to also being a guardian for availability with the emergence of more destructive and disruptive attacks. The CISO also must be highly adaptable and serve as the connective tissue between security, privacy and ultimately, consumer trust.
<urn:uuid:fd1c0299-a68b-4767-ac5f-7ad928cfa1ab>
CC-MAIN-2024-38
https://blog.deurainfosec.com/the-evolution-of-the-modern-ciso/
2024-09-19T18:08:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00891.warc.gz
en
0.982203
144
2.5625
3
Today virtually all websites are powered by CMSes, of which WordPress, Joomla and Drupal have racked up over 70% of the market share between them, as per statistics from Web Technology Surveys. A CMS enables anybody to build a web application with minimum technical knowledge. This widespread popularity, owed to the ease of development, has therefore led many such sites to be targeted. The WPScan Vulnerability Database shows almost 6,000 known vulnerabilities in WordPress, related to the core code or the publicly available plugins.

A study of 500 cybersecurity service-providing companies carried out by CMS Wire revealed that "298 had some level of security concerns with their own corporate websites — from an out-of-date CMS or lack of a proper software firewall to something as severe as malware attacks." The most concerning piece of information is that half of these companies do not know how to secure their own websites.

Vulnerable CMS installations are highly prone to being hijacked and can then become efficient launchpads from which attacks on other systems originate. Compromised systems can turn into DDoS zombies and remain at the beck and call of attackers as long as they stay undetected. Unless specific tools and security measures are properly tuned to be resourceful and proactive, such systems can remain compromised for a very long time, relentlessly expanding the damage they are capable of. A compromised web server can become a propagator of malware and malicious code such as viruses, trojans and backdoors, spreading infection to website users or other web servers.

A vulnerable CMS can suffer from a multitude of attacks, including SQL injection, cross-site scripting (XSS) and malicious code execution. Symantec has reported that CMS-powered websites can be exploited by XSS and SQL injection over 60% and 30% of the time, respectively.

Out-of-date CMS versions or unavailability of support

It is hardly possible to pin accountability in the case of open-source CMSs. Such scenarios can lead to uncertainties in receiving timely updates and patches. As the source code is publicly accessible to everyone, including hackers, it can easily be studied to construct exploits. User accounts that do not enforce strict, secure password policies can be broken by brute-force attacks. Insecure plugins and cracked themes can leave CMSs exposed to attack, and with millions of them being downloaded all the time, the risk is grave. "On average, 76% of all identified vulnerabilities were located in extensions or add-on modules that can be installed on top of the core package of the application."

Besides ensuring underlying operating systems and software packages are up to date, vulnerabilities can be addressed by always installing updates or security patches released by the CMS developers. This is the easiest and most effective way to keep out malicious activities and threats. Equally important, if not more so, is exercising caution when installing plugins. Every plugin should be vetted by gathering as much information about it as possible beforehand. System administrators should consider important aspects such as plugin ratings and the number of times the plugin was downloaded. Some time must also be invested in reading through user reviews. Once the plugin has been integrated into the CMS core, it should be maintained with every update and patch available.
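As a small illustration of that maintenance point, the sketch below compares a locally recorded plugin inventory against a list of releases known to be vulnerable. All plugin names, version numbers and vulnerable-release thresholds are invented for the example; in practice the thresholds would come from an advisory feed such as the WPScan database mentioned earlier:

```python
# Hypothetical data: a site's installed plugins and a feed of known-bad releases.
installed = {
    "contact-form": "5.7.1",
    "seo-toolkit": "2.0.3",
    "gallery-lite": "1.4.0",
}

# Versions at or below these are treated as vulnerable (illustrative values only).
known_vulnerable_up_to = {
    "seo-toolkit": "2.0.4",
    "gallery-lite": "1.3.9",
}

def version_tuple(v: str) -> tuple:
    """Turn '2.0.3' into (2, 0, 3) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

for name, version in installed.items():
    threshold = known_vulnerable_up_to.get(name)
    if threshold and version_tuple(version) <= version_tuple(threshold):
        print(f"[!] {name} {version} is at or below {threshold}: update immediately")
    else:
        print(f"[ok] {name} {version}")
```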
Secure authentication policies should be enforced, ranging from strong passwords to two-factor authentication. CAPTCHAs, together with strict regulation and monitoring of user authentication and failed login attempts, will safeguard the system from brute-force and other automated attacks. All default user accounts and administrator login URLs must be changed; this is essentially security through obscurity and can provide some degree of protection from novice attackers. In addition, account privileges and file permissions should be set properly. As far as possible, CMSs should disallow the creation of accounts by automated tools. Creation and termination of user accounts should be carefully moderated, and system owners should, where possible, define user account policies and apply them uniformly across all their users.

The system should be configured not to reveal details such as version numbers of the CMS, extensions, OS, web application servers and database platforms, and to avoid responding to automated scans with self-incriminating information that could in turn help attackers find or devise exploits. Use of SSL should be enforced wherever authentication or the sharing of critical information takes place.

For additional measures and practices, refer to The Open Web Application Security Project (OWASP) and US-CERT's Technical Information Paper TIP-12-298-01 on Website Security. You can also find vulnerability assessments and remedies in the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD).
<urn:uuid:737c60ff-b3be-43a8-a35d-4046365ea75c>
CC-MAIN-2024-38
https://www.btcirt.bt/cms-security/
2024-09-19T18:47:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00891.warc.gz
en
0.938276
937
2.609375
3
For years, critical infrastructure attacks have been a way for state-backed attackers (APTs) to make a statement or to take steps that may affect a country on a grand level. We have recently seen a clash of superpowers, including Russia, China, and the US. The Russia/Ukraine conflict has prompted diplomatic sanctions from the West, rather than military actions from powerful forces such as NATO, the UK, US, and others. Consequently, we have seen an uptick in cyberattacks and ransomware against Western entities – especially in critical infrastructure. Such state-backed attacks often involve ransomware and other sorts of advanced attacks, which can be devastating. According to our data, more than 85% of ransomware attacks infect backups, thus making it much harder to recover. Twenty-nine percent of organizations who paid ransom still could not recover their data or were compelled to take steps that resulted in significant damage. Let us not forget that paying ransom only encourages attackers to attack again – thus making this sort of venture so lucrative and attractive. We have also seen examples of crippling cyberattacks against the power grid. In July 2021, Saudi Aramco confirmed that some company files were leaked after hackers reportedly demanded a $50 million ransom from the world’s most-valuable oil producer. That November, a quick response thwarted a ransomware attack on a major Queensland energy company. Moreover, two major European oil refineries, Oiltanking/Mabanaft in Germany and ARA in the Netherlands and Belgium, were victims of ransomware in January and February 2022, disrupting a total of 17 refinery terminals in these nations and preventing oil tankers from being loaded and unloaded. These incidents underscore why it is so crucial to not only be prepared for malicious actors, but also for state-backed attacks that exist as part of the larger geopolitical situation. They also illustrate the considerable challenges of securing OT systems. The problem began when industries, over the last century, shifted towards computerized management of the different aspects of their production lines and expanded to digital devices connected to these systems (IIOT). Although these systems were designed for reliability and longevity, they also often sacrificed security to support these goals. According to our experts, here are some of the most dangerous issues your OT environment could face over its lifecycle. Deprecated components and protocols Different components that are deployed throughout the network—whether software components like HMIs and Historian servers or hardware components like PLCs and various sensors—are deployed with a specific mindset: to last for as long as possible. This kind of thinking results in OT networks that include components installed decades ago when designing the network, when security was not even considered, let alone emphasized, and still hasn’t been implemented because of the complexity of such moves. In addition, the standard protocols which are still being used today by the majority of industrial systems such as the Modbus protocol lack even the most basic forms of protection, such as encryption or authentication. Lack of network visibility An additional point of interest, not unlike the deprecated components which could lead to direct exploitation of the hardware or protocols, is an overall lack of visibility in the OT network. 
Being able to monitor your OT network in a manner that would allow you to detect, block, and respond to an intrusion in a timely manner would not only allow you to minimize the damage a malicious entity could cause, but to mitigate it entirely. Lack of separation between IT and OT networks Today, many industries such as energy and utility companies embrace modernization procedures and processes which rely on remote management, site-to-site connections, and more widespread IoT. Therefore, it is imperative that proper segmentation and segregation between the networks follows suit. Implementing an internal DMZ (demilitarized zone) and proper firewall rules between the different zones would result in a reduced attack surface. A classic example is in the energy sector, where electrical companies build and maintain substations that include servers and equipment connected to the same network as the primary plant. These stations are, in most cases, unmanned and contain minimal physical protection in the form of CCTV, motion sensors, and standard door locks—all of which could be disabled or bypassed by a sophisticated attacker. When such a facility is accessed without sufficient restrictions on the network or physical facilities, an attacker could use such access as a foothold to access and propagate through the network and potentially compromise critical infrastructure. Insufficient awareness of existing security risks One of the major issues in any security-oriented environment is, without a doubt, the human factor. Lack of knowledge and awareness could result in the successful compromise of even the most secure networks. All that it takes is for an employee to be compromised or mistakenly made to click on a suspicious attachment, connect an unknown USB, or even post a photo to social media with various credentials appearing in the background of the control room. All these situations can be avoided with sufficient guidance and awareness training for employees, making sure that they understand the risk and threats cybercriminals are posing and what they can do to minimize such exposure. While there are many ways to approach the mitigation of these issues, it is important to consider the outdated nature of an OT network. Whereas patch management, security monitoring and other areas can be implemented on an IT network with relative ease, engineers typically do not want to make changes to the OT network because they are concerned about destroying it. The reality, however, is that the consequences of cyberattacks on OT networks can be severe, including denial of service, release of hazardous materials, and even loss of life. For these reasons, it’s important to have continuous monitoring and security updates, proactive activity for checking network penetration, and management of cyber incidents in the context of OT networks. Understanding the risks and creating a detailed workplan that includes threat modeling, risk assessments, and remediation plans is crucial for implementing a robust cybersecurity posture improvement strategy. Understanding the physical risk and insider risk are just as important to protect the organization from advanced state-level attacks. Want to learn more about how to protect your organization with strategic IT and OT security? Watch our webinar.
<urn:uuid:14032f42-e5ba-44d3-8983-af5890e38fe5>
CC-MAIN-2024-38
https://cyesec.com/blog/addressing-the-significant-challenges-of-ot-environment-security
2024-09-08T23:18:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00091.warc.gz
en
0.959584
1,239
2.703125
3
A new threat called the Heartbleed bug has a significant impact on systems that use OpenSSL. Additional information may be found at:

Heartbleed attacks the heartbeat extension (RFC 6520) implemented in OpenSSL, and allows an attacker to read the memory of the affected system over the Internet. The bug can allow the attacker to compromise the private keys, as well as protected user names, passwords, or content. A Heartbleed compromise is not logged and is difficult to detect. Heartbleed is not a flaw with the SSL/TLS protocol specification, nor is it a flaw with the digital certificate or the certificate authority (CA) system. Heartbleed is an implementation bug in specific versions of OpenSSL:

The impact of Heartbleed will be widely felt, affecting both servers and clients. For example, Apache and NGINX, which account for roughly two-thirds of web servers, use OpenSSL. Netcraft reports that more than half a million servers may be affected by Heartbleed. OpenSSL is also used in operating systems such as Debian Wheezy, Ubuntu 12.04.4 LTS, CentOS 6.5, Fedora 18, OpenBSD 5.3 and 5.4, FreeBSD 8.4 and 9.1, NetBSD 5.0.2 and OpenSUSE 12.2.

We recommend that customers review the detailed links above and test their SSL site for Heartbleed and other vulnerabilities using the tool at https://www.ssllabs.com/ssltest/

Customers using an affected version of OpenSSL should:
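As an illustrative aside — and not a substitute for the vendor remediation guidance this article points to — one quick way to see which OpenSSL build a given host's Python interpreter is linked against is the standard library's ssl module. This only reflects the library Python was compiled against, not every OpenSSL copy on the system; the affected range noted in the comment is the one widely reported at the time (OpenSSL 1.0.1 through 1.0.1f, fixed in 1.0.1g):

```python
import ssl

# Prints something like "OpenSSL 1.0.1f 6 Jan 2014". Builds in the
# widely reported affected range (1.0.1 through 1.0.1f) should be
# upgraded following vendor guidance.
print(ssl.OPENSSL_VERSION)

# The same information as a tuple, convenient for scripted checks.
print(ssl.OPENSSL_VERSION_INFO)
```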
<urn:uuid:07f8ec7d-7a7d-4f7e-a594-ec058ea355e4>
CC-MAIN-2024-38
https://knowledge.digicert.com/nl/nl/quovadis/ssl-certificates/ssl-general-topics/what-is-heartbleed
2024-09-08T23:44:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00091.warc.gz
en
0.880764
326
2.90625
3
T1 and T3 are two common types of digital transmission systems used in telecommunications. AT&T originally developed T1 lines in the 1960s to support telephone service. The service gradually evolved to support both voice and data. T3 lines (also known as DS3) are an enhanced version of T1 that provides greater bandwidth. Both T1 and T3 have become popular options for supporting business-class internet service.

A T3 line is a point-to-point connection between two sites, or between a phone company central office and a business location, that provides dedicated high-speed internet connectivity; it can also be configured to carry multiple T1s for voice and data.

The T-carrier system works by multiplexing multiple digital channels. Each channel offers a bandwidth of 64 Kbps. T1 connections contain 24 channels and offer a speed of 1.544 Megabits per second (24 × 64 Kbps of payload plus 8 Kbps of framing). A T3 line is, in reality, a large group of T1 lines: 28 T1 lines are bundled together to create one T3 connection. In other words, 28 bundles of 1.544 Mbps are brought together to offer you a speed of 44.736 Mbps (28 × 1.544 Mbps accounts for 43.232 Mbps; the rest of the line rate is framing and stuffing overhead added during multiplexing). A T3 line carries 672 channels that each run at 64 Kbps.

A T3 connection uses TDM, or Time Division Multiplexing, to interweave these data channels. A multiplexer is a digital switch that accepts all the channels and gives a single output. It combines inputs by sending signals from different channels over different time slots, enabling several channels to be carried on the same line. Initially, a T3 line starts as four T1 lines multiplexed to create a T2 line (this is fairly uncommon in the marketplace). An M13 (Multiplex T1 to T3) then multiplexes 7 T2 lines to form the T3 transmission line. With each channel running at 64 Kbps, a payload of 43.008 Mbps (672 × 64 Kbps) is achieved through these multiple layers of multiplexing; additional bits used for framing, signaling and control of data transmission bring the total line rate to 44.736 Mbps.

Fiber optic cable is the most common transmission medium, but in some limited instances a four-wire twisted-pair circuit can be used for T3; however, the copper signal will not run for more than about 50 ft. Wiring technologies such as coaxial cable and optical fiber are more suitable for carrying T3 data transmission.

Like T1 lines, T3 lines establish physical connectivity between the ISP (Internet Service Provider) or the phone company and the end customer location. This makes them highly reliable high-speed internet, data and voice connections. T3 internet speed and reliability are two main reasons why this connection is preferred by large enterprises. In addition, the uptime offered by T3 connections is excellent, because repairs of dedicated connections take precedence over repairs of general or shared internet connections. In the case of an outage caused by weather conditions, T3 lines will be repaired and restored before general internet connections. Therefore, T3 internet speed is preferred by businesses requiring dedicated speed with guaranteed uptime.

With a dedicated T3 line, you pay for dedicated bandwidth, high-speed access, and almost negligible downtime. While it is undoubtedly a premium service, with costs ranging anywhere between $500 – $1,500 every month, the expense can be easily managed and budgeted for as it does not vary over time.

Suppose your business requires large file sharing, regular high-volume backups, and constant cloud access. In that case, there is a high possibility you will suffer from slow internet speed while uploading data if you are over a broadband connection.
On the other hand, T3 lines are symmetrical, meaning that they offer the same upload and download speed. Both upload and download can take place at the same time without impacting the performance of one another. If your business requires high reliability and upload speeds, a T3 line may be a viable solution. Even with newer technologies, customers may still prefer using T1 or T3 lines due to their uptime guarantees. However, it is important to remember that while T3 internet speed offers amazing support for your business requirements, it can also be quite expensive. CarrierBid has over 180 carriers and service providers in our partner network. Contact us today for a free consultation (we never charge a fee for our service!) to discuss if T3 Lines are right for you! for immediate service or fill out theform and we’ll be in touch right away.
<urn:uuid:d220a4f9-ff89-427a-ade6-dab3597a206f>
CC-MAIN-2024-38
https://www.carrierbid.com/dedicated-t3-line/
2024-09-11T06:02:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00791.warc.gz
en
0.942618
925
3.46875
3
In today's data-driven landscape, effectively managing and understanding your data assets is crucial. This guide explains the concept of "data inventory." Data inventory is a methodical way of organizing and comprehending data stored in different databases and storage systems. By creating a data assets inventory, organizations can improve data management and decision-making processes. We will look at how to build such an inventory using built-in tools in common databases as well as specialized software, with particular attention to managing varied data types, such as images. This article will help you start analyzing your own data assets with practical examples and insights.

What is Data Inventory?

Data inventory involves organizing and examining an organization's data assets to determine their type, location, usage, and governance. This systematic approach helps organizations manage their data efficiently, comply with regulations, and harness their data for strategic decisions.

The Importance of Data Assets

Analyzing data assets effectively gives a complete view of an organization's data, leading to better business strategies and operational efficiencies. It helps in data governance, risk management, and the optimization of data storage and retrieval processes.

Popular Databases Workflow

Many relational databases, like MySQL and PostgreSQL, offer tools and commands for conducting data inventories. For example, to list all databases on a MySQL server, you can use:

SHOW DATABASES;

The result will be a list of all databases managed by the MySQL server. Similarly, PostgreSQL users can retrieve a list of all database names by querying the system catalog, for example:

SELECT datname FROM pg_database;

Data Inventory with SQL Server

SQL Server provides a rich set of tools for data inventory. Using Transact-SQL, you can query metadata to obtain information about database objects. For instance, to find details about the tables in a database, use:

SELECT * FROM INFORMATION_SCHEMA.TABLES;

This command lists all tables along with schema details, helping you understand the structure of your data environment.

Databases like MongoDB handle data assets differently because they do not impose a fixed schema: users are free to define the structure of their data as they see fit, which allows greater flexibility and adaptability in handling data assets. MongoDB offers commands such as:

show dbs
show collections

These commands list all databases and collections, respectively, providing a basic overview of the stored data.

Dedicated Software for Data Inventory

Beyond native database tools, dedicated data inventory software offers advanced features for managing and visualizing data assets. These tools often support multiple database types and provide deeper insights through data discovery, classification, and data lineage features. DataSunrise offers a wide range of features for managing data inventory, including activity monitoring and sensitive data discovery. Utilizing dedicated software has demonstrated clear advantages over native or non-commercial tools, thanks to its rich feature set. Proper maintenance and auditing of the data inventory are also crucial, and dedicated software typically integrates all the necessary tools for these tasks. DataSunrise also offers an intuitively simple web-based user interface whose major features beginners grasp easily. Apache Atlas is a popular open-source tool designed for data governance and metadata management across various data environments.
It enables users to perform comprehensive data inventories by automatically classifying data and managing metadata. Handling Image Data in Data Inventories Image data poses unique challenges for data inventory processes. Unlike textual or numerical data, images require metadata to be fully searchable and manageable. To create a data inventory for image data, you need to extract metadata. You may also need to use image recognition technologies to label and categorize the image content. Example: Inventory of Image Data Consider a database storing image files along with metadata in a NoSQL system like MongoDB. One way to simplify searching and managing files is by using a script. The script can extract metadata such as file size, type, and creation date. You can store this metadata in a separate collection. It is worth mentioning here that DataSunrise includes built-in functionality to make OCR tasks for sensitive data discovery. Implementing Data Inventory Implementing a data inventory process involves several key steps: - Identifying all data sources. - Cataloging the data types and structures. - Analyzing the usage and access patterns of the data. - Implementing tools and scripts to automate the inventory process. For a SQL database, you might start by creating a user specifically for data inventory purposes: CREATE USER 'inventory_user' IDENTIFIED BY 'password'; This user can then run queries to catalog data without affecting the operational integrity of the database. To collect, automate, and visualize data inventory results effectively, you can follow these concise steps: - Data Collection: Identify and catalog all data sources using scripts or data inventory tools. For SQL databases, utilize queries to extract metadata; for NoSQL, use commands to list databases and collections. For image data, you should extract relevant data from images using OCR tools. - Automation: Set up automated scripts or employ data inventory software like DataSunrise or Apache Atlas to regularly update your data catalog. Use cron jobs for periodic assessments or triggers in databases to log changes. - Use tools like Tableau, Power BI, or custom web-based dashboards to create visual representations of your data. These visualizations can depict the volume, distribution, and types of data across the organization, providing insights at a glance. To improve data governance, organizations should follow these steps to keep an updated and easily accessible inventory. Effective data management begins with a thorough data inventory. Understanding your data, knowing where you store it, and understanding how you use it can help you make better decisions. It can also help you meet legal requirements and improve how you handle data. Modern organizations need to conduct a data inventory using either native database tools or dedicated software. This guide provides a starting point for those looking to understand and implement data inventory techniques in their operations. Discover the power of efficient data management with DataSunrise’s suite of data discovery and compliance features. We invite you to visit DataSunrise Team Online and experience our live demo. See firsthand how our tools can enhance your data security, compliance, and governance efforts. Don’t miss the opportunity to simplify your data operations. Come join us online today to see how DataSunrise can assist you.
<urn:uuid:cd047dfe-c017-4489-9477-2198fb5f88ea>
CC-MAIN-2024-38
https://www.datasunrise.com/knowledge-center/data-inventory/
2024-09-11T07:31:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00791.warc.gz
en
0.850965
1,324
3.265625
3
In a world that is dominated by digitalization, the Internet of Things (IoT) is playing a vital role in disrupting the way we live and conduct business. From smart living to workplace collaboration and connected on-field employees, IoT continues to save time and boost productivity like never before. According to a Microsoft research report, focused on IoT signals and designed to provide a global overview of the IoT landscape, around 85% of respondents say that they are currently in the midst of IoT adoption, and three-fourths have projects in the planning stages. Furthermore, an 88% of respondents believe that IoT is “critical” to the success of their business. Generally, an IoT lifecycle involves the collection and management of data by means of a vast network of sensors and devices. Next, this data is processed and analyzed to make real-time decisions – in order to execute an effective IoT lifecycle, you need a programming language that allows you to easily establish high-level communication between different devices and maintain seamless connectivity throughout the ecosystem. - Memory Management - Event-Driven Programming - Ease of Implementation Talking about its application in IoT, it can be used to handle a large number of requests generated by devices such as sensors, beacons, transmitters and motors. In fact, Node.js makes the request-response flow smoother and faster. Moreover, sockets and MQ Telemetry Transport (MQTT) protocol are well suited in Node.js which are normally used for continuous data transmission in IoT applications. Node.js comes with the NPM (Node Package Manager) equipped with more than 80 packages for IoT-application cable boards such as Arduino controller, BeagleBone Black, Raspberry Pi and Intel IoT Edison. This means that you can rapidly develop robust IoT applications with Node.js development services. The garbage collector feature allows IoT developers to focus on aspects of development rather than wasting time on memory management. In a way, the automatic freeing of the unused memory results in a stable IoT solution as the garbage collector eliminates memory leaks. Ease of Implementation IoT.js aims to provide an inter-operable service platform in the world of IoT, based on web technology. It can be used with resource-constrained devices that consume only a few kilobytes of RAM. Because of this, it supports a wide range of “things”.
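Although this article's context is Node.js, the MQTT publish flow it refers to looks much the same in any client library. Here is a minimal sketch using Python's paho-mqtt package (the 1.x API is assumed) purely to illustrate the publish pattern; the broker address, topic and payload are placeholders, and a Node.js project would typically reach for an NPM package such as mqtt instead:

```python
import json
import paho.mqtt.client as mqtt   # assumes the paho-mqtt 1.x API

BROKER = "broker.example.com"     # placeholder broker address
TOPIC = "plant/line1/sensor7/temperature"

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()               # background network loop

# A single sensor reading pushed as a small JSON payload; on a real
# device or gateway this would run on a fixed interval.
reading = {"device_id": "sensor7", "celsius": 21.4}
info = client.publish(TOPIC, json.dumps(reading), qos=1)
info.wait_for_publish()           # block until the broker has the message

client.loop_stop()
client.disconnect()
```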
<urn:uuid:11fae0cb-2636-42ac-bb13-e407ef00d05a>
CC-MAIN-2024-38
https://www.iotforall.com/javascript-iot
2024-09-11T05:38:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00791.warc.gz
en
0.907985
489
2.6875
3
When it comes to industrial control systems (ICS), specifically supervisory control and data acquisition (SCADA), a basic understanding of the business is crucial. In the course of my master's thesis I am currently digging into parts of the electrical grid and trying to examine the issues and security level of some specific protocols. Thus, I will regularly keep you posted on grid aspects over the next two months.

For a start, this article gives a short introduction to electrical grids in general. It aims to introduce general terms and to state the difference between the former electrical grid architecture and the smart grid. Additionally, paradigm changes and challenges to the current grid will be pointed out, and the conclusion will include some reasoning for a more flexible architecture – the smart grid.

Electrical grids consist of power plants that create electricity from some form of energy, and of towers and poles that hold the wires that transport the electricity and finally make it available to the consumer. The figure provides an overview of how these facilities are logically grouped into four major electric grid domains. The domain concept is not entirely new and was similarly outlined in a description of cyber security on the essential parts of the smart grid.

Generator domain: includes all sorts of bulk power generation plants such as nuclear reactors, fossil fuel (coal or gas) plants as well as hydroelectricity plants. Typically, these are power plants that can continuously generate electricity of several hundred megawatts (MW).

Transmission domain: represents the long-distance transmission network components. This includes components such as large interconnection nodes, substations and, of course, cables either mounted on towers or buried underground. Electrical lines in this domain normally operate at very high voltage – several hundred thousand volts (kV). In Europe, typical values are 230 kV and 400 kV. Traditionally, the domain is under the control of the transmission system operator (TSO). In some countries a national body or a super body of utilities operates that domain.

Distribution domain: provides the whole infrastructure to bring power to the end user (consumer). The domain also includes transformer equipment, which is necessary to reduce the voltage as power is transported to the consumer. Bulk consumers typically get their power at higher voltages, for example 16 kV, than common households, for which 230 V and 400 V are common values. The domain is managed by the so-called distribution system operator (DSO).

Consumer domain: groups all sorts of consumers – industry as well as households – regardless of the amount of consumption and the consumer's geographic location.

The four-domain model gives a good introduction to the basic concept of an electrical grid, but it by no means captures the full detail of the grid, nor does it fully model the energy flow. Due to the liberalization of the power market, the generation domain is not exclusively the province of large utilities anymore. For example, consumers may want to invest in renewable energy such as photovoltaic (PV) equipment in order to cover their own power consumption and to supply current from surplus production to others. Thus, "consumers are becoming producers or producing consumers – prosumers". Comparable changes also apply to the distribution domain.
Local utilities more frequently setup own facilities to generate power which will be feed-in directly at the distribution level at high voltages. Distributed generation (DG) is nothing new to grid operators and utilities as it was already discussed in literature in 2001. The referenced book does also introduce several forms of generators and does recognize the technical and financial impact of distributed generation to the grid. The reader will find information on combustion turbines, PV systems, micro turbines, fuel cells, combined heat and power as well as background information on grid operations with distributed generation and storage. However, security relevant aspects are not being discussed. Since 2001 distributed power generation significantly emerged due to renewable energy got political attention and national funding . These fundings do not only focus on large installations but also take small generators in home scale into account. Meanwhile, distributed generation has taken off and demands for advances in measurement and operations of the electrical grid. Only the introduction of additional information technology (IT) will allow to coordinate all generators, storages and consumers and thus to ensure efficiency and reliability of the grid. A functional and reliable grid is evident for a country’s stability. Therefore, governments provide guidance in form of critical infrastructure protection (CIP) programmes [6,7] and in form of written recommendations [8,9] on how to securely operate the IT stuffed new generations of grids. European Commission, Energy Efficiency Plan, 2011 United States of America, H.R. 6582: American Energy Manufacturing Technical Corrections Act, 2012 P. Hasse, Smartmeter: A technological overview of the German roll-out, 29th Chaos Communication Congress, Online http://events.ccc.de/congress/2012/Fahrplan/events/5239.en.html, 2012 A. Borbely and J.F. Kreider, Distributed Generation: The Power Paradigm for the New Millenium, CRC Press, 2001, ISBN 0-8493-0074-6 European Commission for Energy, Financing Renewable Energy in the European Energy Market, 2011 North American Electric Reliability Corporation (NERC), http://www.nerc.com/ Federal Office for Civil Protection (FOCP), The Swiss Programm on Critical Infrastructure Protection, Nov 2010, Online http://www.bevoelkerungsschutz.admin.ch/internet/bs/en/home/themen/ski. parsysrelated1.82246.downloadList.18074.DownloadFile.tmp/factsheete.pdf NIST Cyber Security Coordination Task Group, Security Profile for Advanced Metering Infrastructure, v2.0, June 2010 ENISA, Smart Grid Security: Recommendations for Europe and Member States, July 2012, Online http://www.enisa.europa.eu/activities/Resilience-and-CIIP/critical-infrastructure-and-services/smart-grids-and-smart-metering/ENISA-smart-grid-security-recommendations/at_download/fullReport Note, this work is a preview version of an MSc Information Security dissertation in the fields of electrical grids.
<urn:uuid:449cea02-a17c-49e0-9399-48acc2ca9148>
CC-MAIN-2024-38
https://blog.compass-security.com/2013/01/introduction-to-the-electrical-grid/
2024-09-12T12:42:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00691.warc.gz
en
0.907476
1,311
3.1875
3
What is Remote Access?

Remote access is the capability to connect to and manage a computer, network, or system from any location. It lets users interact with an off-site device to conduct operations or access resources as if they were there in person. Several technologies and protocols provide seamless connectivity between the local and remote systems.

Remote Access Types

Remote access mechanisms can be categorized into the following types:
- Point-to-Point Protocol: PPP is a data link protocol that allows two linked computers to connect directly to one another. It is commonly used for fast and high-load broadband networking.
- Remote Access Services: With a RAS configuration, users can establish a VPN or direct dial-up network connection to organizational networks from a remote location.
- Remote Desktop Protocol: RDP is a protocol created by Microsoft that offers a graphical user interface (GUI) when users connect to another machine.
- Internet Protocol Security: IPsec is a secure network protocol suite that provides data encryption and authentication between computers connected to an IP network.
- Proprietary Protocols: Many remote access tools use custom protocols for enabling access. To secure a session, these protocols can combine Secure Sockets Layer/Transport Layer Security key exchange with AES-256 encryption.

How to Use Remote Access?

Users can follow these steps to utilize remote access (a minimal scripted sketch appears at the end of this entry):
- Pick the Appropriate Remote Access Protocol: Choose the protocol that best fits your needs, weighing compatibility with your devices and network architecture, security, and performance.
- Configure Security Settings: Make sure all necessary security measures are in place before initiating a remote connection. Set up strong authentication, such as multi-factor authentication or complex passwords, to stop unwanted access, and use encryption to protect data transferred between local and remote devices.
- Install and Set Up Remote Access Software: Install the required remote access software on both the local and remote devices. Follow the software's installation instructions to configure connection preferences, user permissions, and access controls.
- Establish a Connection: Start the remote access program on the local device to connect to the remote system. Enter the remote device's IP address, hostname, or domain name along with the necessary credentials (username and password) to authenticate.
- Transfer Files and Data: Use the file transfer capabilities of the remote access software to exchange files between local and remote devices. This is especially helpful for moving documents and media files between distant locations.
- Monitor and Troubleshoot: Monitor remote systems to diagnose faults, carry out maintenance, and fix issues without being on site.
- Safely End a Remote Session: Log out of all accounts and close any open files or applications before disconnecting, to protect security and privacy.

The Benefits of Remote Access

Using remote access provides the following benefits:
- Flexibility: Users can work from any location, giving them the freedom to access resources and carry out tasks offsite.
- Enhanced Productivity: Remote access removes geographical obstacles and boosts productivity by allowing employees to collaborate and reach shared resources easily.
- Cost Savings: By reducing the need for physical infrastructure and office space, remote access helps businesses save money on equipment, utilities, and rent.
- Faster IT Support: IT managers can remotely identify and resolve problems on distant systems, which minimizes downtime.
- Work-Life Balance: By removing the need for commuting and enabling flexible work hours, remote access helps people balance their personal and professional lives.

Use Remote Access Tools

Remote access makes it possible to reach systems and resources with ease from any location, which increases efficiency, productivity, and flexibility. Suitable safeguards must be put in place to protect data and maintain the integrity of remote connections.
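To make the workflow above concrete, here is a minimal sketch of scripted remote access. It uses SSH via the third-party paramiko library rather than the protocols listed above, and the host name, credentials, and file paths are placeholder assumptions, not values from this glossary entry.

```python
# Minimal scripted remote-access sketch over SSH (third-party "paramiko" library).
# Host, credentials, and paths below are illustrative placeholders.
import paramiko


def run_remote_maintenance(host: str, user: str, password: str) -> None:
    client = paramiko.SSHClient()
    # Accept unknown host keys for this sketch; production code should verify them.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, password=password, timeout=10)
    try:
        # Run a diagnostic command on the remote system (monitor and troubleshoot).
        _, stdout, _ = client.exec_command("uptime")
        print(stdout.read().decode().strip())

        # Pull a log file back to the local machine (transfer files and data).
        sftp = client.open_sftp()
        sftp.get("/var/log/syslog", "./remote_syslog.txt")
        sftp.close()
    finally:
        # Safely end the remote session.
        client.close()


if __name__ == "__main__":
    run_remote_maintenance("203.0.113.10", "admin", "use-a-strong-secret")
```

In practice the same flow applies to RDP or vendor tools: authenticate, do the work, move any files, then close the session cleanly.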
Part 1: What Copyright Is
-------------------------
On January 1, 1978, the Copyright Act of 1976 (title 17 of the United States Code) came into effect. This general revision of the copyright law of the United States, the first such revision since 1909, made important changes in our copyright system and superseded the previous federal copyright statute.

Copyright is a form of protection provided by the laws of the United States (title 17, U.S. Code) to the authors of "original works of authorship" including literary, dramatic, musical, artistic, and certain other intellectual works. This protection is available to both published and unpublished works.

Section 106 of the Copyright Act generally gives the owner of copyright the exclusive right to do and to authorize others to do the following:
- To reproduce the copyrighted work in copies or phonorecords;
- To prepare derivative works based upon the copyrighted work;
- To distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;
- To perform the copyrighted work publicly, in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works; and
- To display the copyrighted work publicly, in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work.

It is illegal for anyone to violate any of the rights provided by the Act to the owner of copyright. These rights, however, are not unlimited in scope. Sections 107 through 118 of the Copyright Act establish limitations on these rights. In some cases, these limitations are specified exemptions from copyright liability. One major limitation is the doctrine of "fair use," which is given a statutory basis by Section 107 of the Act. In other instances, the limitation takes the form of a "compulsory license" under which certain limited uses of copyrighted works are permitted upon payment of specified royalties and compliance with statutory conditions.

Part 2: Who Can Claim Copyright
-------------------------------
Copyright protection subsists from the time the work is created in fixed form: that is, it is an incident of the process of authorship. The copyright in the work of authorship IMMEDIATELY becomes the property of the author who created it. Only the author or those deriving their rights through the author can rightfully claim copyright.

In the case of works made for hire, the employer and not the employee is presumptively considered the author. Section 101 of the Copyright Act defines a "work made for hire" as: (1) a work prepared by an employee within the scope of his or her employment; or (2) a work specially ordered or commissioned for use as a contribution to a collective work, as part of a motion picture or other audiovisual work, as a translation, as a supplementary work, as a compilation, as an instructional text, as a test, as answer material for a test, or as an atlas, if the parties expressly agree in a written instrument signed by them that the work will be considered a work made for hire....

The authors of a joint work are co-owners of the copyright in the work, unless there is an agreement to the contrary.
Copyright in each separate contribution to a periodical or other collective work is distinct from copyright in the collective work as a whole and vests initially with the author of the contribution.

Two General Principles
----------------------
- Mere ownership of a book, manuscript, painting, or any other copy or phonorecord does not give the possessor the copyright. The law provides that transfer of ownership of any material object that embodies a protected work does not of itself convey any rights in the copyright.
- Minors may claim copyright, but state laws may regulate the business dealings involving copyrights owned by minors. For information on relevant state laws, consult an attorney in your state.

AUTHOR'S ADDENDUM

The determination of what constitutes a "work for hire" under the Copyright Act is based upon common law agency principles, and not upon who has the right to control the product or has actual control of the product, a unanimous U.S. Supreme Court ruled on June 5 (COMMUNITY FOR CREATIVE NON-VIOLENCE v. REID). The court, in an opinion by Justice Marshall, ruled that a statue, dramatizing the plight of the homeless, which was commissioned by the Community for Creative Non-Violence, a non-profit group whose mission is to eradicate homelessness, is not a work for hire, although the court noted that CCNV could still be found to be a joint author.

Section 101 of the Copyright Act, 17 USC 101, provides that a work is "for hire" under two possible circumstances: first, a work can be "prepared by an employee within the scope of his or her employment," 17 USC 101(1), or, as specified in 17 USC 101(2), a work can be specially ordered or commissioned as part of a collective work, a movie or other audiovisual work, a translation, a supplementary work, a compilation, an instructional text, a test, or an atlas. The parties agreed that the statue did not satisfy the requirements of 17 USC 101(2), so the only issue was whether it could be considered a "work prepared by an employee within the scope of his or her employment."

CCNV, supported by amicus briefs filed by publishers, asserted that a work created by an independent contractor can be a work for hire under 17 USC 101(1) if the employer retains the right to control the product, or if the employer has actually wielded control with respect to the creation of a particular work. In a joint brief, the Magazine Publishers of America Inc., the Hearst Corp., The New York Times Co., Playboy Enterprises, and Time Inc. said that magazine and newspaper publishers "shape and direct" the creative process, and that they must be able to rely upon work for hire relationships with contributors they supervise and direct.

Neither of the "control" tests "is consistent with the text of the act," the court said. "Section 101 clearly delineates between works prepared by an employee and commissioned works. Sound though other distinctions might be as a matter of copyright policy, there is no statutory support for an additional dichotomy between commissioned works that are actually controlled and supervised by the hiring party and those that are not." The hiring party's right to control the product "simply is not determinative," the court said. The term "employee" as used in 17 USC 101(1) "should be understood in light of the general common law of agency," the court said.
Using that criterion, the court found that the sculptor, James Earl Reid, was not a CCNV employee, noting that Reid supplied his own tools, worked in his own studio, was retained by CCNV for only two months, and was paid a specific sum contingent upon completion of the specific job. CCNV paid no payroll or social security taxes for Reid, provided no employee benefits, and did not contribute to any unemployment insurance or workers' compensation funds, the court said. Part 3: Copyright and National Origin of the Work ------------------------------------------------- Copyright protection is available for all unpublished works, regardless of the nationality or domicile of the author. Published works are eligible for copyright protection in the United States if either of the following conditions are met: - On the date of first publication, one or more of the authors is a national or domiciliary of the United States or is a national or domiciliary, or soverign authority of a foreign nation that is a party to a copyright treaty to which the U.S. is also a party, or is a stateless person wherever that person may be domiciled; or - The work is first published in the United States or in a foreign nation that, on the date of the first publication, is a party to the Universal Copyright Convention; or the work comes within the scope of a Presidential proclamation. The Manufacturing Clause ------------------------ The manufacturing clause in the copyright law, section 601 of the 1976 Copyright Act (title 17, U.S. Code), expired July 1, 1986. What Works Are Protected ------------------------ Copyright protects "original works of authorship" that are fixed in a tangible form of expression. The fixation need not be directly perceptible, so long as it may be communicated with the aid of a machine or device. Copyrightable works include the following categories: (1) literary works; (2) musical works, including any accompanying words; (3) dramatic works, including any accompanying music; (4) pantomimes and choreographic works; (5) pictorial, graphic, and sculptural works; (6) motion pictures and other audiovisual works; and (7) sound recordings. These categories should be viewed quite broadly: for example, computer programs and most "compilations" are registrable as "literary works"; maps and architectural blueprints are registerable as "pictorial, graphic, and sculptural works." Part 4: What Is Not Protected By Copyright ------------------------------------------ Several categories of material are generally not eligible for statutory copyright protection. These include among others: - Works that have NOT been fixed in a tangible form of expression. For example, choreographic works that have not been notated or recorded, or improvisational speeches or performances that have not been written or recorded. - Titles, names, short phrases, and slogans; familiar symbols or designs; mere variations of typographic ornamentation, lettering, or coloring; mere listings of ingredients or contents. - Ideas, procedures, methods, systems, processes, concepts, principles, discoveries, or devices, as distinguished from a description, explanation, or illustration. - Works consisting ENTIRELY of information that is common property and containing no original authorship. For example: standard calendars, height and weight charts, tape measures and rulers, and lists or tables taken from public documents or other common sources. 
How To Secure A Copyright ------------------------- The way in which copyright protection is secured under the present law is frequently misunderstood. No publication or registration or other action in the Copyright Office is required to secure copyright. There are, however, certain definite advantages to registration. (See NOTE below.) Copyright is secured AUTOMATICALLY when the work is created, and a work is "created" when it is fixed in a copy or phonorecord for the first time. In general, "copies" are material objects from which a work can be read or visually perceived either directly or with the aid of a machine or device, such as books, manuscripts, sheet music, film, videotape, or microfilm. "Phonorecords" are material objects embodying faxations of sounds (excluding, by statutory definition, motion picture soundtracks), such as audio tapes and phonograph disks. Thus, for example, a song (the "work") can be fixed in sheet music ("copies") or in phonograph disks ("phonorecords"), or both. If a work is prepared over a period, the part of the work that is fixed on a particular date constitutes the created work as of that date. NOTE: Before 1978, statutory copyright was generally secured by the act of publication with notice of copyright, assuming compliance with all other relevant statutory conditions. Works in the public domain on January 1, 1978 (for example, works published without satisfying all conditions for securing statutory copyright under the Copyright Act of 1909) remain in the public domain under the current Act. Statutory copyright could also be secured before 1978 by the act of registration in the case of certain unpublished works and works eligible for ad interim copyright. The current Act automatically extends to full term (section 304 sets the term) copyright for all works in which ad interim copyright was subsisting or was capable of being secured on December 31, 1977. AUTHOR'S ADDENDUM Anyone wishing to obtain a copyright should write to the Copyright Office (address below) and request the appropriate forms. Write to: Copyright Office LM 455 Library of Congress Washington, D.C. 20559 NOTE: The Copyright Office itself is not permitted to give legal advice. If you need information or guidance on matters such as disputes over the ownership of a copyright, suits against possible infringers, the procedure for getting a work published, or the method of obtaining royalty payments, it may be necessary to consult with an attorney. Part 5: Publication ------------------- Publication is no longer the key to obtaining statutory copyright as it was under the Copyright Act of 1909. However, publication remains important to copyright owners. The Copyright Act defines publication as follows: "Publication" is the distribution of copies or phonorecords of a work to the public by sale or other transfer of owner- ship, or by rental, lease, or lending. The offering to distribute copies or phonorecords to a group of persons for purposes of further distribution, public performance, or public display, constitutes publication. A public perform- ance or display of a work does not of itself constitute publication. A further discussion of the definition of "publication" can be found in the legislative history of the Act. The legislative reports define "to the public" as distribution to persons under no explicit or implicit restrictions with respect to disclosure of the contents. 
The reports state that the definition makes it clear that the sale of phonorecords constitutes publication of the underlying work, for example, the musical, dramatic, or literary work embodied in a phonorecord. The reports also state that it is clear that any form of dissemination in which the material object does not change hands, for example, performances or displays on television, is NOT a publication no matter how many people are exposed to the work. However, when copies or phonorecords are offered for sale or lease to a group of wholesalers, broadcasters, or motion picture theaters, publication does take place if the purpose is further distribution, public performance, or public display.

Publication is an important concept in the copyright law for several reasons:
- When a work is published, all published copies should bear a notice of copyright. (See Part 6 at a later date)
- Works that are published with notice of copyright in the United States are subject to mandatory deposit with the Library of Congress.
- Publication of a work can affect the limitations on the exclusive rights of the copyright owner that are set forth in sections 107 through 118 of the law.
- The year of publication may determine the duration of copyright protection for anonymous and pseudonymous works (when the author's identity is not revealed in the records of the Copyright Office) and for works made for hire.
- Deposit requirements for registration of published works differ from those for registration of unpublished works.

Part 6: Notice of Copyright
---------------------------
When a work is published under the authority of the copyright owner, a notice of copyright should be placed on all publicly distributed copies and on all publicly distributed phonorecords of sound recordings. This notice is required even on works published outside of the United States. Failure to comply with the notice requirement can result in the loss of certain additional rights otherwise available to the copyright owner.

The use of the copyright notice is the responsibility of the copyright owner and does not require advance permission from, or registration with, the Copyright Office. As mentioned earlier, use of the notice makes the published works subject to mandatory deposit requirements.

Form of Notice for Visually Perceptible Copies

The notice for visually perceptible copies should contain all of the following three elements:
(1) The SYMBOL (c) -- the letter C in a circle -- or the word "Copyright," or the abbreviation "Copr."; and
(2) THE YEAR OF FIRST PUBLICATION of the work. In the case of compilations or derivative works incorporating previously published material, the year date of first publication of the compilation or derivative work is sufficient. The year date may be omitted where a pictorial, graphic, or sculptural work, with accompanying textual matter, if any, is reproduced in or on greeting cards, postcards, stationery, jewelry, dolls, toys, or any useful article; and
(3) THE NAME OF THE OWNER OF COPYRIGHT in the work, or an abbreviation by which the name can be recognized, or a generally known alternative designation of the owner.

Examples:
(c) 1989 VITRON Management Consulting, Inc.
Copyright 1989 James J. Spinelli
(c) Copyright 1989 RelayNet

The "(c)" notice is required only on "visually perceptible copies." Certain kinds of works -- for example, musical, dramatic, and literary works -- may be fixed not in "copies" but by means of sound in an audio recording.
Since audio recordings such as audio tapes and phonograph disks are "phonorecords" and not "copies," there is no requirement that the phonorecord bear a "(c)" notice to protect the underlying musical, dramatic, or literary work that is recorded. Form of Notice for Phonorecords of Sound Recordings The copyright notice for phonorecords of sound recordings has somewhat different requirements. (Sound recordings are defined as "works that result from the fixation of a series of musical, spoken, or other sounds, but not including the sounds accompanying a motion picture or other audiovisual work, regardless of the nature of the material objects, such as disks, tapes, or other phonorecords, in which they are embodied.") The notice appearing on phonorecords should contain the following three elements: 1. The SYMBOL (p) -- the letter P in a circle; and 2. The YEAR OF FIRST PUBLICATION of the sound recording; and 3. THE NAME OF THE OWNER OF COPYRIGHT in the sound recording, or an abbreviation by which the name can be recognized, or a generally known alternative designation of the owner. If the producer of the sound recording is named on the phonorecord labels or containers, and if no other name appears in conjunction with the notice, the producer's name shall be considered a part of the notice. Example: (p) 1989 A.B.C., Inc. NOTE: Because of problems that might result in some cases from the use of variant forms of the notice, any form of the notice other than those given above should not be used without first seeking legal advice. Position of Notice ------------------ The notice should be affixed to copies or phonorecords of the work in such a manner and location as to "give reasonable notice of the claim of copyright." The norice on phonorecords may appear on the surface of the phonorecord or on the phonorecord label or container, provided the manner of placement and location give reasonable notice of the claim. The three elements of the notice should ordinarily appear together on the copies or phonorecords. Publications Incorporating United States Government Works --------------------------------------------------------- Works by the U.S. Government are not subject to copyright protection. Whenever a work is published in copies or phonorecords consisting preponderantly of one or more works of the U.S. Government, the notice of copyright shall also include a statement that identifies one of the following: those portions protected by the copyright law OR those portions that constitute U.S. Government material. Unpublished Works ----------------- The copyright notice is not required on unpublished works. (See earlier post that defines the concept of "publishing.") To avoid an inadvertent publication without notice, however, it may be advisable for the author or other owner of the copyright to affix notices, or a statement such as UNPUBLISHED WORK (c) Copyright 1989, John Smith, to any copies or phonorecords which leave his or her control. Effect of Omission of the Notice or of Error in the Name or Date ---------------------------------------------------------------- Unlike the law in effect before 1978, the new Copyright Act, in sections 405 and 406, provides procedures for correcting errors and omissions of the copyright notice on works published on or after January 1, 1978. 
In general, the omission or error does not automatically invalidate the copyright in a work if registration for the work has been made before or is made within 5 years after the publication without notice, and a reasonable effort is made to add the notice to all copies or phonorecords that are distributed to the public in the U.S. after the omission or error has been discovered. Here's a post that, by necessity, I need to provide to you. As you will note, it is pertinent to the discussions pertaining to copyrights. As you may recall, several users have begun a brief discussion regarding whether or not government agencies and government entities may disregard the "unauthorized" duplication of copyrighted material. Well, the following may indicate that such a policy may be near its end. From: Rachel Parker, as appeared in the current issue of INFOWORLD -- Software lobbying groups are celebrating a new Supreme Court ruling that they believe lends support to their efforts to close a loophole in the Copyright Act. In PENNSYLVANIA vs. UNION GAS CO., the state of Pennsylvania argued that it could not be required to pay monetary damages for violating an environmental law because states are immune from such federal interference under the 11th AAmendment. In a closely divided ruling, the Court held that while the state was protected by the 11th Amendment, the state agency could be held liable for monetary damages if Congress had specifically named such organizations in laws. This issue is a familiar one to the PC software industry. In 1988, the University of California at Los Angeles defended itself in a computer software copyright action, saying that as a state agency it was immune from paying monetary damages. The court in that case, called BV ENGINEERING vs. U.C.L.A., said that while the school had illegally copied the software, BV Engineering could not collect any monetary damages from the school. Since that case was decided, the Software Publishers Association and Adapso have been working with legislators to amend the Copyright Act, making states and their agencies specifically liable for monetary damages in copyright infringement cases. "I suspect that now that UNION GAS has come downm we will get our law," said Mary Jane Saunders, general counsel for SPA. A proposed law has been drafted, and the two trade groups have been sponsoring testimony before Congress supporting the proposal. Part 7 - How Long Copyright Protection Endures ---------------------------------------------- Works Originally Copyrighted on or After January 1, 1978 -------------------------------------------------------- A work that is created (fixed in tangible form for the first time) on or after January 1, 1978, is automatically protected from the moment of its creation, and it is ordinarily given a term enduring for the author's life, plus an additional 50 years after the author's death. In the case of "a joint work prepared by two or more authors who did not work for hire," the term lasts for 50 years after the last surviving author's death. For works made for hire, and for anonymous and pseudonymous works (unless the author's identity is revealed in Copyright Office records), the duration of copyright will be 75 years from publication or 100 years from creation, whichever is shorter. Works that were created but not published or registered for copyright before January 1, 1978, have been automatically brought under the statute and are now given Federal copyright protection. 
The duration of copyright in these cases will generally be computed in the same way as for works created on or after January 1,1978. The law provides that in no case will the term of copyright for works in this category expire before December 31, 2002, and for works published on or before December 31, 2002, the term of copyright will not expire before December 31, 2027. Works Copyrighted Before January 1, 1978 ---------------------------------------- Under the law in effect before 1978, copyright was secured either on the date a work was published or on the date of registration if the work was registered in unpublished form. In either case, the copyright endured for a first term of 28 years from the date it was secured. During the last (28th) year of the first term, the copyright was eligible for renewal. The current copyright laws has extended the renewal term from 28 to 47 years for copyrights that were subsisting on January 1, 1978, making the works eligible for a total term of protection of 75 years. However, the copyright MUST be renewed to receive the 47-year period of added protection. This is accomplished by filing a properly completed Form RE accompanied by a $6 filing fee in the Copyright Office before the end of the 28th calendar year of the original term. Part 8 - Transfer of Copyright ------------------------------ Any or all of the exclusive rights, or any subdivision of those rights, of the copyright owner may be transferred, but the transfer of exclusive rights is not valid unless that transfer is in writing and signed by the owner of the rights conveyed (or such owner's duly authorized agent). Transfer of a right on a nonexclusive basis does not require a written agreement. A copyright may also be conveyed by operation of law and may be bequeathed by will or pass as personal property by the applicable laws of intestate succession. Copyright is a personal property right, and it is subject to the various state laws and regulations that govern the ownership, inheritance, or transfer of personal property as well as terms of contracts or conduct of business. For information about relevant state laws, you are advised to consult with an attorney within your state. Transfers of copyright are normally made by contract. The Copyright Office does not have or supply any forms for such transfers. However, the law does provide for the recordation in the Copyright Office of transfers of copyright ownership. Although recordation is not required to make a valid transfer between the parties, it does provide certain legal advantages and may be required to validate the transfer as against third parties. (See Circular 12) Termination of Transfers ------------------------ Under the previous law, the copyright in a work generally reverted to the author, if living, or if the author was not living, to other specified beneficiaries, provided a renewal claim was registered in the 28th year of the original term. The present law drops the renewal feature except for works already in the first term of statutory protection when the present law took effect. Instead, the present law generally permits termination of a grant of rights after 35 years under certain conditions by serving written notice on the transferee within specified time limits. For works already under statutory copyright protection before 1978, the present law provides a similar right of termination covering the newly added years that extended the former maximum term of the copyright from 56 to 75 years. 
(See Circulars 15a and 15t) Part 9 - International Copyright Protection ------------------------------------------- There is no such thing as an "international copyright" that will automatically protect an author's work throughout the entire world. Protection against unauthorized use in a particular country depends, basically, on the national laws of that country. However, most countries do offer protection to foreign works under certain conditions, and these conditions have been greatly simplified by international copyright treaties and conventions. (See Circular 38a) The United States is a member of the Universal Copyright Conven- tion (the UCC), which came into force on September 16, 1955. Generally, a work by a national or domiciliary of a country that is a member of the UCC or a work first published in a UCC country may claim protection under the UCC. If the work bears the notice of copyright in the form and position specified by the UCC, this notice will satisfy and substitute for any formalities a UCC member country would otherwise impose as a condition of copyright. A UCC notice should consist of the symbol (c) -- the letter "C" in a circle -- accompanied by the name of the copyright proprietor and the year of first publication of the work. (Note: to qualify, a work must be considered "published." Unpublished works do not generally qualify. See definition of "a published work" above.) An author who wishes protection for his or her work in a particular country should first find out the extent of protection of foreign works in that country. If possible, this should be done before the work is published anywhere, since protection may often depend on the facts existing at the time of FIRST publication. If the country in which protection is sought is a party to one of the international copyright conventions, the work may generally be protected by complying with the conditions of the convention. Even if the work cannot be brought under an international convention, protection under the specific provisions of the country's national laws may still be possible. Some countries, however, offer little or no copyright protection for foreign works. Copyrights - Part 10a -------------------- Copyright Registration ---------------------- In general, copyright registration is a legal formality intended to make a public record of the basic facts of a particular copyright. However, except in two specific situations, registration is not a condition of copyright protection. (The two specific situations are: Works published with notice of copyright prior to January 1, 1978, must be registered and renewed during the first 28-year term of copyright to maintain protection. Under section 405 and 406 of the Copyright Act, copyright registration may be required to preserve a copyright that would otherwise be invalidated because the copyright notice was omitted from the published copies or phonorecords, or the name or year date was omitted, or certain errors were made in the year date.) Even though registration is not generally a requirement for protection, the copyright law provides several inducements or advantages to encourage copyright owners to make registration. 
Among these advantages are the following:
- Registration establishes a public record of the copyright claim;
- Registration is ordinarily necessary before any infringement suits may be filed in court;
- If made before or within 5 years of publication, registration will establish prima facie evidence in court of the validity of the copyright and of the facts stated in the certificate; and
- If registration is made within 3 months after publication of the work or prior to an infringement of the work, statutory damages and attorney's fees will be available to the copyright owner in court actions. Otherwise, only an award of actual damages and profits is available to the copyright owner.

Registration may be made at any time within the life of the copyright. Unlike the law before 1978 (i.e., effective in 1978, while passed in 1976), when a work has been registered in unpublished form, it is not necessary to make another registration when the work becomes published (although the copyright owner may choose to register the published edition, if desired).

Registration Procedures
-----------------------
In General:

A. To register a work, send the following three elements IN THE SAME ENVELOPE OR PACKAGE to the Register of Copyrights, Copyright Office, Library of Congress, Washington, DC 20559:
1. A properly completed application form;
2. A nonrefundable filing fee of $10 for each application;
3. A nonreturnable deposit of the work being registered.

The deposit requirements vary in particular situations. The GENERAL requirements follow. Also note that information under "Special Deposit Requirements" will follow this section (but in a different post).
- If the work is unpublished, one complete copy or phonorecord.
- If the work was first published in the United States on or after January 1, 1978, two complete copies or phonorecords of the best edition.
- If the work was first published in the United States before January 1, 1978, two complete copies or phonorecords of the work as first published.
- If the work was first published outside the United States, whenever published, one complete copy or phonorecord of the work as first published.

NOTE: Before 1978, the copyright law required, as a condition for copyright protection, that all copies published with the authorization of the copyright owner bear a proper notice. If a work was published under the copyright owner's authority before January 1, 1978, without a proper copyright notice, all copyright protection for that work was permanently lost in the United States. The current copyright law does not provide retroactive protection for those works.

B. To register a renewal, send:
1. A properly completed RE application form, and
2. A nonrefundable filing fee of $6 for each work.

Copyrights - Part 10b
---------------------
Special Deposit Requirements
----------------------------
Special deposit requirements exist for many types of work. In some instances, only one copy is required for published works, in other instances only identifying material is required, and in still other instances, the deposit requirement may be unique. For example, in the case of a published motion picture, only one copy of the work is required, but it must be accompanied by a separate written description of the work. In the case of works reproduced in three-dimensional copies, identifying material such as photographs or drawings is ordinarily required.
Other examples of special deposit requirements (but by no means an exhaustive list) include many works of the visual arts, such as greeting cards, toys, fabric, oversized material; video games and other machine-readable audiovisual works; and contributions to collective works.

Unpublished Collections
-----------------------
A work may be registered in unpublished form as a "collection," with one application and one fee, under the following conditions:
- The elements of the collection are assembled in an orderly form;
- The combined elements bear a single title identifying the collection as a whole;
- The copyright claimant in all the elements and in the collection as a whole is the same; and
- All of the elements are by the same author, or, if they are by different authors, at least one of the authors has contributed copyrightable authorship to each element.

Unpublished collections are indexed in the Catalog of Copyright Entries only under the collection titles.

If you are unsure of the proper deposit required for your work, write to the Copyright Office for that information and describe the work you wish to register.

NOTE: LIBRARY OF CONGRESS CATALOG CARD NUMBERS: A Library of Congress Catalog Card Number is different from a copyright registration number. The Cataloging in Publication (CIP) Division of the Library of Congress is responsible for assigning LC Catalog Card Numbers and is operationally separate from the Copyright Office. A book may be registered in or deposited with the Copyright Office but not necessarily cataloged and added to the Library's collections. For information about obtaining an LC Catalog Card Number, contact the CIP Division, Library of Congress, Washington, D.C. 20540. For information on International Standard Book Numbering (ISBN), write to: ISBN Agency, R.R.Bowker Company, 205 East 42nd Street, New York, N.Y. 10017. For information on International Standard Serial Numbering (ISSN), write to: Library of Congress, National Serials Data Program, Washington, D.C. 20540.

Copyrights - Part 11a
---------------------
Corrections and Amplifications of Existing Registrations
--------------------------------------------------------
To correct an error in a copyright registration or to amplify the information given in a registration, file a supplementary registration form -- FORM CA -- with the Copyright Office. The information in a supplementary registration augments but does not supersede that contained in the earlier registration. Note also that a supplementary registration is not a substitute for an original registration, for a renewal registration, or for recording a transfer of ownership. For further information, write to the Copyright Office and request Circular 8.

Mandatory Deposit for Works Published in the United States with Notice of Copyright
------------------------------------------------------------------------------------
Although a copyright registration is not required, the Copyright Act establishes a mandatory deposit requirement for works published with notice of copyright in the United States. In general, the owner of copyright, or the owner of the exclusive right of publication in the work, has a legal obligation to deposit in the Copyright Office, within three months of publication in the U.S., two copies (or, in the case of sound recordings, two phonorecords) for the use of the Library of Congress. Failure to make the deposit can result in fines and other penalties, but does not affect copyright protection.
Certain categories of works are EXEMPT ENTIRELY from the mandatory deposit requirements, and the obligation is reduced for certain other categories. For further information, contact the Copyright Office and request Circular 7d. Use of Mandatory Deposit to Satisfy Registration Requirements ------------------------------------------------------------- For works published in the U.S. the Copyright Act contains a provision under which a single deposit can be made to sarisfy both the deposit requirements for the Library and the registration requirements. In order to have this dual effect, the copies or phonorecords must be accompanied by the prescribed application and fee for registration. Who May File an Application Form -------------------------------- The following persons are legally entitled to submit an application form: - The author. This is either the person who actually created the work, or, if the work was made for hire, the employer or other person for whom the work was prepared. - The copyright claimant. The copyright claimant is defined in the Copyright Office regulations as either the author of the work or a person or organization that has obtained ownership of all the rights under the copyright initially belonging to the author. This category includes a person or organization who has obtained by contract the right to claim legal title to the copyright in an application for copyright registration. - The owner of exclusive right(s). Under the new law, any of the exclusive rights that go to make up a copyright and any subdivision of them can be transferred and owned separately, even though the transfer may be limited in time or place of effect. The term "copyright owner" with respect to any one of the exclusive rights contained in a copyright refers to the owner of that particular right. Any owner of an exclusive right may apply for registration of a claim in the work. - The duly authorized agent of such author, other copyright claimant, or owner of exclusive right(s). Any person authorized to act on behalf of the author, other copyright claimant, or owner of exclusive right(s) may apply for registration. There is no requirement that applications be prepared or filed by an attorney. Copyrights - Part 12 -------------------- Application Forms ----------------- Though not part of our original outline, we have decided to provide you with the information regarding the specific application forms required for particular materials. 1. For Original Registration ---------------------------- Form TX: for published and unpublished non-dramatic literary works. Form SE: for serials, works issued or intended to be issued in successive parts bearing numerical or chronological designations and intended to be continued indefinitely (periodicals, newspapers, magazines, newsletters, annuals, journals, etc.) Form PA: for published and unpublished works of the performing arts (musical and dramatic works, pantomimes and choreographic works, motion pictures and other audiovisual works) Form VA: for published and unpublished works of the visual arts (pictorial, graphic, and sculptural works) Form SR: for published and unpublished sound recordings 2. For Renewal Registration --------------------------- Form RE: for claims to renewal copyright works copyrighted under the law in effect through December 31, 1977 (1909 Copyright Act) 3. 
For Corrections and Amplifications
----------------------------------
Form CA: for supplementary registration to correct or amplify information given in the Copyright Office record of an earlier registration.

4. For a Group of Contributions to Periodicals
----------------------------------------------
Form GR/CP: an adjunct application to be used for registration of a group of contributions to periodicals in addition to an application Form TX, PA, or VA

Applications are supplied by the Copyright Office. You may obtain free copies by calling (202) 707-9100.

Mailing Instructions
--------------------
All applications and materials related to copyright registration should be addressed to the Register of Copyrights, Copyright Office, Library of Congress, Washington, D.C. 20559. The application, nonreturnable deposit (copies, phonorecords, or identifying material), and nonrefundable filing fee should be mailed in the same package. Fees must be in U.S. funds drawn on an American bank.

Effective Date of Registration
------------------------------
A copyright registration is effective on the date of receipt in the Copyright Office of all the required elements in acceptable form, regardless of the length of time it takes thereafter to process the application and mail the certificate of registration. The length of time required by the Copyright Office to process an application varies from time to time.

If you are filing an application for copyright registration in the Copyright Office, you will NOT receive an acknowledgement that your application has been received, but you can expect within 120 days:
- A letter or telephone call from a copyright examiner if further information is needed;
- A certificate of registration to indicate the work has been registered, or if the application cannot be accepted, a letter explaining why it has been rejected.

If you want to know when the Copyright Office receives your material, you should send it by registered or certified mail and request a return receipt from the post office. Allow at least three weeks for the return of your receipt.

=======================================================================

This completes our discussion of copyrights in general. In our next post in the near future, we'll provide you with information on Circular 61 - Copyright Registration for Computer Programs, and on Circular 93 - Highlights of the U.S. Adherence to the Berne Convention.
FinOps, a combination of "Finance and DevOps," refers to the practice of managing the financial aspects of cloud computing services. It involves optimizing costs, usage, and performance of cloud resources while maintaining security, compliance, and governance.

The primary goal of FinOps is to enable organizations to achieve maximum value from their cloud investment while ensuring cost-effectiveness. It involves collaboration among teams such as finance, operations, and development to achieve this objective.

FinOps involves implementing a set of best practices, tools, and processes for cost management, cost allocation, budgeting, forecasting, and optimization of cloud resources. It helps organizations gain visibility and control over their cloud spending, identify cost-saving opportunities, and make informed decisions about cloud usage.

5 Reasons FinOps is Important
- Cost Optimization: One of the primary objectives of FinOps is to optimize the cost of cloud services. With FinOps practices, organizations can identify underutilized resources and optimize their usage to reduce costs.
- Better Financial Management: FinOps enables organizations to get a better understanding of their cloud usage and associated costs, which can help them make more informed financial decisions.
- Increased Agility: By implementing FinOps, organizations can scale their cloud services up or down as needed, giving them increased agility and flexibility in responding to changing business needs.
- Enhanced Collaboration: FinOps promotes collaboration among teams, such as finance, operations, and development, to manage cloud costs effectively. This collaboration helps teams work together to achieve common goals and align their efforts with the organization's overall objectives.
- Governance and Compliance: FinOps practices help ensure that organizations are complying with regulations and internal policies related to cloud usage, security, and governance.

FinOps is important because it enables organizations to achieve maximum value from their cloud investment while maintaining cost-effectiveness, compliance, and governance.

6 Steps to Create a FinOps Group at Your Organization
- Define the Objectives: The first step is to define the objectives and scope of the FinOps group. This includes identifying the cloud services and resources to be managed, the stakeholders involved, and the goals to be achieved.
- Build the Team: The next step is to build the FinOps team. This team should consist of members from different departments, including finance, operations, and development. They should have a deep understanding of the cloud services and technologies used in the organization.
- Establish Processes: The FinOps team should establish processes for cost management, cost allocation, budgeting, forecasting, and optimization of cloud resources. These processes should be standardized, repeatable, and scalable to ensure consistency and efficiency.
- Choose Tools: The team should select the appropriate tools to support the FinOps processes. These tools should provide visibility and control over cloud costs and usage, automate manual tasks, and enable collaboration among team members.
- Implement Policies: The team should implement policies and guidelines for cloud usage, security, compliance, and governance. These policies should be communicated to all stakeholders and enforced to ensure that cloud services are used in a compliant and secure manner.
- Monitor and Optimize: The FinOps team should regularly monitor and optimize cloud services and resources to identify cost-saving opportunities and ensure that the organization is achieving maximum value from its cloud investment (see the sketch after this section).

By following these steps, your organization can create a FinOps group that effectively manages its cloud costs and usage, promotes collaboration among teams, and ensures compliance and governance.

If you're not sure where to start, contact us. Our cloud experts can help your organization make the most of your cloud investment.
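As one concrete illustration of the "Choose Tools" and "Monitor and Optimize" steps above, the sketch below pulls monthly spend per service from the AWS Cost Explorer API via boto3. It assumes the organization runs on AWS with Cost Explorer enabled and credentials configured; the dates are placeholders, and other clouds expose comparable billing APIs.

```python
# Sketch: report monthly cloud cost by service using the AWS Cost Explorer API.
# Assumes configured AWS credentials and Cost Explorer enabled on the account.
import boto3


def monthly_cost_by_service(start: str, end: str) -> dict[str, float]:
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},  # ISO dates, e.g. "2024-08-01"
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    costs: dict[str, float] = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            costs[service] = costs.get(service, 0.0) + amount
    return costs


if __name__ == "__main__":
    # Print the ten most expensive services for one month (placeholder dates).
    report = monthly_cost_by_service("2024-08-01", "2024-09-01")
    for service, cost in sorted(report.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"{service:<40} ${cost:,.2f}")
```

A report like this, run on a schedule and shared with finance and engineering, is the kind of lightweight feedback loop a new FinOps group can start with before adopting dedicated cost-management platforms.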
As companies continue to see the value of collecting information on clients and business processes, the amount of data is increasing at an enormous rate. According to Smart Data Collective, 90 percent of the data being stored today was created within just the past two years. As companies' information grows, so does the need for larger, more energy-hungry data centers.

As The New York Times reported, one large-scale data center can consume energy equivalent to a small town's use. And while data centers eat up an incredible amount of energy, only between 6 and 12 percent of it is used for actual computation; the rest largely goes to keeping servers running idle in case of usage spikes.

Because of the dramatic amount of energy used by data centers and the cost to companies, many organizations are trying to manage their growing information stockpiles while dealing with the financial and environmental ramifications that come with them. An article by Smart Data Collective contributor Cameron Graham noted that data centers are responsible for almost 20 percent of technology's carbon footprint. The environmental impact of such facilities is leading companies to operate in a more efficient and sustainable way. Schneider Electric recently released a survey of business leaders that found data center efficiency will be one of the most popular energy management techniques employed by organizations in the next five years.

While businesses often look to make physical improvements to their data centers in an effort to increase efficiency, many companies now use colocation or cloud providers that employ energy-efficient and sustainable practices. Fixed costs associated with a data center's cooling, hardware, and power can be reduced with the help of cloud computing, which in turn allows a company to increase agility and growth. Adopting a virtualized environment, whether by moving applications into the cloud or through server virtualization, helps companies consolidate their systems and reduce their overall IT electrical load. Capital costs can also be shifted into operational expenses, helping organizations find savings across a variety of areas.
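To put the utilization figures above in perspective, here is a back-of-the-envelope sketch. The facility size and electricity rate are invented assumptions; only the 6 to 12 percent utilization range comes from the reporting cited above.

```python
# Rough estimate of energy (and cost) spent on idle capacity and overhead
# when only a small fraction of a data center's draw does useful computation.
ANNUAL_KWH = 10_000_000   # assumed yearly consumption of a mid-size facility
PRICE_PER_KWH = 0.10      # assumed utility rate in USD

for useful_fraction in (0.06, 0.12):
    wasted_kwh = ANNUAL_KWH * (1 - useful_fraction)
    print(
        f"{useful_fraction:.0%} useful -> {wasted_kwh:,.0f} kWh "
        f"(~${wasted_kwh * PRICE_PER_KWH:,.0f}) per year on idle servers and overhead"
    )
```

Even with these illustrative numbers, the gap between total draw and useful computation explains why consolidation through virtualization and cloud migration can cut both cost and carbon footprint.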
Education Technology Services

Transforming learning through digital assessment

Boosted by the pandemic, the wave of digitisation has pervaded all spheres of life. In such a scenario, globalisation and digital technologies are increasingly altering the educational landscape, profoundly impacting learning. The focus of education is shifting from cramming up study material to critical thinking. In line with this change, accurate assessments, engaging experiences, better security, and higher flexibility have emerged as significant advantages of digital assessment.

Digital assessment and its benefits

Digital assessment is the delivery of assessments, exams, surveys, and evaluations of learning outcomes using digital devices and the internet or intranet. The fundamental purpose of online assessment methods is to administer evaluations and provide feedback as quickly, conveniently, and accurately as feasible. Digital assessment tools provide better insights into students' learning outcomes and help institutions take appropriate decisions on their methods and means.

Here are a few benefits of online assessment methods for educators:
- Security: Digital assessments are more secure than paper-based assessments as institutions secure the files before and after the exam. Even in paper-based exams, digital assessment tools like e-marking can protect the results. Moreover, assessments in online learning are fully traceable with remote proctoring and time-bound tests, which ensure a secure environment and prevent breaches.
- Flexibility: Digital assessment allows educators to create and design new modules, mark papers anytime and anywhere, and use various assessment methods for summative and formative evaluation. Online assessment tools help teachers and institutions offer better assessment experiences.
- Integration with other technologies: Digital assessment systems can be integrated with other tech processes and workflows in an organisation, such as student information systems, administrative systems, and learning management systems. The integration keeps data centralised and easily accessible to all departments. Moreover, faculty members can easily store, manage, and retrieve the data.
- Time efficiency: Online assessment tools make designing, managing, and evaluating assessments faster. Routine tasks can be automated, and intelligent exam software can digitise evaluation. Proctoring services make manual invigilation obsolete and save precious time. Automated item generation, item banking, test creation, and publishing make digital assessment a time-saving advantage for educators.
- Data analysis: The data available from digital assessments is a storehouse of performance information for both students and teachers. Digital assessment tools allow data analysis to derive valuable, actionable insights, which can drive improvement. Real-time data on examiners' performance and adaptive comparative judgement of learners enable institutions to tweak processes, share best practices, and introduce improvements faster.

How do digital assessments benefit students?

The benefits of digital assessment to educators, in turn, benefit the whole system, and the different approaches to assessment change the degree of convenience for students as well. Here's how an online assessment system benefits students:
- Equity and accessibility: Digital assessment delivers equity as the process ensures fairness, accessibility, and accuracy in marking and grading. It reduces bias as the details of the learners are anonymised. Appearing remotely for tests breaks geographical and social barriers, and computer-based exams provide a level playing field for special needs students through accessibility software.
- Flexibility: The flexibility of e-learning from anywhere at any time also extends to digital assessments. Candidates have the freedom to complete tests as and when they best can, thus minimising disruptions and travel costs.
- Well-being: Online assessment methods reduce exam anxiety and stress. The flexibility of digital assessment lets candidates appear for the test at a time when they are prepared instead of on pre-specified dates. Moreover, by using remote proctoring, students can appear for examinations in comfortable environments.
- Personalised learning: Real-time assessments and individual feedback help students absorb knowledge faster, as well as boost their confidence.
- Faster feedback: With autoscoring, candidates get instant feedback on their tests. This saves time and allows students to act on performance improvements faster. Online assessment tools expedite detailed teacher feedback using the reporting functions embedded in the systems.

For organisations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. Now more than ever, it is crucial to deliver and exceed organisational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like living organisms will be imperative for business excellence. A comprehensive yet modular suite of services is doing precisely that. Equipping organisations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organisations that are innovating collaboratively for the future.

How can Infosys BPM help?

Infosys BPM offers AI-powered Intelligent Assessment Services, Smart Virtual Event Hosting Services, Gamification Services, Enterprise Services, and Learner Segmentation & Recommendation Services for a complete online assessment solution.
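As a small illustration of the autoscoring behind the "faster feedback" point above, the sketch below grades objective responses against an answer key and returns instant, per-item feedback. The answer key, question IDs, and responses are invented for illustration; production assessment platforms layer item banking, weighting, and analytics on top of this basic idea.

```python
# Minimal autoscoring sketch: grade objective items against a fixed answer key.
ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a", "q4": "c"}  # illustrative key


def score(responses: dict[str, str]) -> dict:
    """Return the score, percentage, and per-item feedback for one candidate."""
    correct = {q for q, ans in responses.items() if ANSWER_KEY.get(q) == ans}
    total = len(ANSWER_KEY)
    return {
        "score": len(correct),
        "total": total,
        "percent": round(100 * len(correct) / total, 1),
        "feedback": {
            q: "correct" if q in correct else f"review item {q}" for q in ANSWER_KEY
        },
    }


if __name__ == "__main__":
    # A candidate's submitted answers (also invented for illustration).
    print(score({"q1": "b", "q2": "a", "q3": "a", "q4": "c"}))
```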
<urn:uuid:d15ff13a-4fca-4ab4-bc7a-66e26b1970c6>
CC-MAIN-2024-38
https://www.infosysbpm.com/blogs/education-technology-services/how-does-digital-assessment-transform-learning.html
2024-09-13T21:11:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00691.warc.gz
en
0.919971
1,021
3.28125
3
Businesses are becoming more and more dependent on web applications, and the rise in cyberattacks makes those applications desirable targets. Key systems access sensitive data through online apps, which increases their exposure to security vulnerabilities. A framework for understanding and managing web application security concerns is provided by the Open Web Application Security Project (OWASP), a nonprofit organization. The "OWASP Top 10 List" is the main accomplishment of OWASP. The most typical flaws that attackers use to compromise web applications are covered in depth in this list. With the help of this blog, you can have a deeper look at the OWASP Top 10 list and learn more about each risk. Organizations can defend themselves against online threats and safeguard their sensitive data by being aware of and addressing these crucial web application security issues.
What is OWASP?
OWASP stands for the Open Web Application Security Project. It is a global non-profit organization that concentrates on enhancing the security of software programs. Developers, security experts, and organizations can improve the security of their web applications by using the information, tools, and standards provided by OWASP. The group is best known for its Top Ten Project, which lists the most critical web application security concerns and offers advice on how to address them. OWASP also organizes conferences, training sessions, and community activities to encourage education and cooperation in web application security. The purpose of OWASP is to strengthen online application security and safeguard users' sensitive and personal data.
Benefits of OWASP
The OWASP Top 10 is a widely recognized and respected list of the most critical security risks to web applications. It focuses on the most critical categories of threat rather than on specific vulnerabilities, and it is considered the standard awareness document for both developers and web application security teams. A few of the benefits associated with the OWASP Top 10 illustrate how important it is:
- Prioritization: The OWASP Top 10 lists security concerns in order of importance, enabling businesses to concentrate their efforts on the most pressing issues first.
- Compliance: Many regulatory and compliance frameworks demand that businesses take precautions to guard against the OWASP Top 10 dangers.
- Awareness: The OWASP Top 10 highlights the most prevalent security risks that online applications encounter. Developers and security experts are better able to handle these dangers once they are highlighted.
- Education: For developers and security experts, the OWASP Top 10 is a great instructional tool. They can choose better ways to design, build, and secure their apps by being aware of the typical hazards that web applications encounter.
- Best practices: The OWASP Top 10 provides a list of best practices for tackling typical security concerns in web applications. Organizations can create more secure applications by implementing these recommended practices.
- Risk management: The OWASP Top 10 can assist companies in determining and controlling their exposure to risk. Organizations can lessen the likelihood and impact of successful attacks by addressing these basic risks.
The OWASP Top 10 is a crucial tool for anyone working on the design, testing, or security of web applications.
Organizations can greatly enhance the security of their apps and lower their risk exposure by implementing the recommendations in this document.
The OWASP Top 10 web application threats:
- Sensitive Data Exposure
- Broken Authentication
- XML External Entities (XXE)
- Broken Access Control
- Security Misconfiguration
- Cross-Site Scripting (XSS)
- Insecure Deserialization
- Insufficient Logging and Monitoring
- Using Components with Known Vulnerabilities
- Injection
Let us briefly explain each of the OWASP Top 10 web application security threats.
- Sensitive Data Exposure – Financial, healthcare, and other personally identifiable information (PII) can be stolen or altered and used for fraud, identity theft, or other illegal actions if online apps and APIs are not adequately secured. Strong authentication, appropriate access controls, encryption, and the deletion of superfluous data can all help prevent exposure.
- Broken Authentication – When authentication is implemented incorrectly, attackers can steal passwords and tokens or impersonate users. This occurs repeatedly because identity and access rules are improperly established. Putting in place checks for weak passwords and requiring multi-factor authentication are excellent places to start in preventing this issue.
- XML External Entities (XXE) – External entity references in XML documents can be used to disclose internal files, carry out internal port scanning, execute code remotely, or mount denial-of-service attacks. While finding and removing XXE vulnerabilities can be challenging, there are several simple improvements that help, such as updating all XML processors, validating XML input thoroughly against a schema, and, when possible, limiting XML input.
- Broken Access Control – Broken access control usually results from insufficiently implemented user access rules. As a result, hackers take advantage of these weaknesses to gain access to data and features they would not otherwise be allowed to use.
- Security Misconfiguration – The most frequent dangers to organizations' web security come from misconfigurations. They are caused by incomplete or insecure default configurations, open cloud storage, or error messages that reveal sensitive information. To help prevent security misconfiguration, all operating systems, frameworks, libraries, and applications must be securely configured, patched, and maintained according to the best practices recommended by each hardware or software manufacturer.
- Cross-Site Scripting (XSS) – When an application delivers untrusted data to a web browser without performing the necessary validation or escaping, an XSS vulnerability results. Via cross-site scripting, attackers can run scripts in the victim's browser that hijack user sessions, deface websites, or divert users to dangerous websites.
- Insecure Deserialization – Insecure deserialization frequently leads to remote code execution. Even when remote code execution does not take place, these weaknesses allow replay, injection, and privilege escalation attacks to be carried out. Rejecting serialized objects from untrusted sources is one approach to stopping this from happening.
- Insufficient Logging and Monitoring – It may be difficult or even impossible to identify attackers or detect attacks with insufficient logging and monitoring. When breaches occur, it is frequently impossible to figure out what happened because of inadequate logging and monitoring.
- Using Components with Known Vulnerabilities – Libraries, frameworks, and other software modules almost always execute with the application's full privileges. A server takeover or significant data loss may result from an attack that successfully exploits a vulnerable component.
- Injection – Untrusted data being passed to an interpreter as part of a command or query can lead to injection issues in SQL, NoSQL, OS, and LDAP contexts. The attacker's malicious data can then trick the interpreter into executing unintended commands; the sketch at the end of this article illustrates the standard parameterized-query defence.
Let's deal with the Top 10 OWASP threats
Businesses that do not adequately secure their online applications are more vulnerable to hostile attacks, which can lead to data theft, license revocations, strained client relationships, and legal action. Remember that there are thousands of vulnerabilities that can be exploited and manipulated by cybercriminals; the OWASP Top 10 covers only the most critical of them. While developing their security strategy, organizations may disregard online apps or believe their network firewalls will secure them. Consider integrating a web application firewall into your organization's security strategy and technology stack to aid in your protection against the risks mentioned above.
In addition to the aforementioned precautions, conducting routine vulnerability assessments and penetration tests (VAPT) is crucial. In order to assess a web application's security flaws, VAPT searches for potential and frequent vulnerabilities related to the platform, technological framework, APIs, and so on. Reports on the vulnerabilities found are given to the businesses, together with information on their type, threat level, impact, and remediation steps.
Want to Confirm the Security of Your Application? Get your Application Security Testing Now!
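To make the injection risk above concrete, here is a minimal, hypothetical sketch showing the difference between string-built SQL and a parameterized query in Python. The table, columns, and sample data are invented for illustration and are not taken from any particular application.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: attacker-controlled input is concatenated into the SQL text,
    # so a value like "' OR '1'='1" changes the meaning of the query.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFER: the ? placeholder keeps user input as data, never as SQL syntax.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    malicious = "nobody' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # returns every row in the table
    print(find_user_safe(conn, malicious))    # returns an empty list
```

The same principle, keeping untrusted input strictly as data rather than executable syntax, underlies the defences against most of the injection-style risks listed above.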
<urn:uuid:72878af8-17f8-4707-92f5-9d4e50e65587>
CC-MAIN-2024-38
https://kratikal.com/blog/owasp-top-10-the-most-critical-web-application-security-risks/
2024-09-18T22:16:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00291.warc.gz
en
0.913518
1,640
2.5625
3
InfoConnect Print and Transaction Router (PTR) is optional print delivery software designed for airlines' multi-device networks. PTR provides a communication path between the host and a supported device connected to your Windows system, and is independent of a terminal emulator. Print jobs can come from one or more mainframes or from an application that uses the PTR API. Scanned input from readers is always handled by a local application. PTR can configure up to 16 printers for each Windows system, and the status of four printers can be viewed at one time. Using PTR's interface functions, a host computer can send automated data through your Windows system to an attached printer. PTR doesn't initiate or even control print jobs; it provides communication paths between the host and the Windows printer. Before you can send print jobs through PTR, you must create a route. Each route consists of three main parts:
- The host path. An InfoConnect path is a named collection of configuration settings that allows you to connect to a host; paths are required for connections to ALC, T27, and UTS terminal sessions, and for PTR router connections, and path configuration data is stored in the InfoConnect database. The host path configures the communication link between PTR and the host.
- The host filter. This is a DLL that initializes the host connection, manipulates the printer data for the selected output device, and sends the data to the output device.
- The printer queue path. This path configures the communication link between PTR and the output device, such as a printer or file.
Route configuration information is saved to PTR32.INI, which is created in the InfoConnect data folder (by default, C:\Users\Public\Documents\Micro Focus\InfoConnect; this location is configurable during installation). Path configuration is saved to the InfoConnect database (ic32.cfg), which contains connection settings for ALC, T27, and UTS terminal sessions. The database also contains information about all the InfoConnect packages, path templates, and libraries that have been installed, as well as the paths that have been created. The packages, path templates, and libraries included depend on which product features (emulations and transports) are installed.
<urn:uuid:196df459-4d03-45ec-a4b6-70ce97e2372d>
CC-MAIN-2024-38
https://www.microfocus.com/documentation/amc-archive/infoconnect-16-1-sp1/infoconnect-help/data/t_29340.htm
2024-09-20T03:19:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00191.warc.gz
en
0.916105
465
2.515625
3
How Gamification in EdTech Enhances Student Learning and Retention
by Rajesh Shashikant Renukdas, on Jan 17, 2021 11:08:21 AM
The technological contribution of gamification in education isn't new, but the support and attention it has garnered to become widely adopted across institutions is a recent phenomenon. With the COVID-19 pandemic ushering in a ubiquitous digital transformation, gamification has become one of the most fundamental components of an online academic curriculum. Gamification in education aims to make learning more engaging for students and instills motivation by enhancing their competitive spirit. Many institutions have embraced fundamental tools and theories from video games to retain students and keep the fun element intact through the entire remote learning era. In this blog, we will have a look at the most innovative and effective use cases of gamification in education aimed at fueling remote participation and enhanced learning.
There are primarily two approaches through which games can aid digital education: game-based learning and gamification.
Game-based learning and gamification
Game-based learning creates the academic content around a gaming storyline, whereas gamification shapes the video games around the learning content. Both concepts, however, involve the active participation and interaction of a student with the software in an educational context. This positively influences the student and makes learning easy and fun. The motive behind both is to encourage students to study in an engaging, challenging, and safe environment. Now let us have a look at the use cases.
Use Cases of Gamification and Game-based Learning in EdTech
Engineering of gamified content
Participation in quizzes is an active element of every offline and online institution. Recently, a new trend has emerged in which students themselves engineer the competitive environment and the content that quizzes involve. A number of platforms and digital apps have been developed that students can use to create their own contests, such as math word problems. This entails active student involvement in the course content and also understanding the concepts from a tutor's perspective. Kahoot, a game-based learning platform, is a popular example of a tool that lets students create their own problems by brainstorming the solutions upfront.
Role-playing digital games
One of the key propelling forces in learning for kids is the elimination of any kind of fear, be it stage fear, fear of judgment, or any other fear arising from insecurities. When a student steps into a gaming environment, the appeal of fun overshadows fear, and students can give their undivided attention to winning the game. Role-playing games can range from enacting a role from a play in literature, to being a doctor or a scientist from a particular lesson, a computer programmer, or even a teacher. A dedicated digital environment for a particular role can be set up through a mobile app and a combination of technologies like ML, IoT, and extended reality. This will help participating students to learn actively and spectating students to engage with an otherwise boring recitation of a lesson.
Games for corporate learning
Games can be designed to enable students to get an idea of the corporate working environment and the skills they would need to master to excel in the corporate world. For instance, Ribbon Hero is a game that teaches students to use Microsoft Office products like Word, PowerPoint, and Excel. Students are given different challenges they need to complete to earn reward coins.
These coins or rewards can be redeemed for gifts that a student desires, such as a PlayStation console, or anything else that keeps up their excitement. Such apps and games can also be integrated with progress tracking and note-taking features so that teachers can assign work and guide students wherever required.
Gamification of digital career counseling
More often than not, students are greatly influenced by the movies they see and the roles they see people playing around them. This gets them excited about a career option without knowing what it entails. Game-based career counseling will involve letting students have hands-on experience of a particular career option through AR and VR environments in mobile apps. This will greatly reduce drop-out rates that result from lack of interest, unexpected and new environments, and so on. Mindler is one such platform that analyzes a series of answers to questions to give insights on the suitability of a career for a student.
Games for getting organized and sharper
Gamification works on the principle of end rewards that motivate students to perform actions that they otherwise won't. Special games that involve completing minor tasks, like eating green vegetables, waking up on time, tidying rooms, or studying in a clean and quiet environment, in return for rewards will help students become organized and disciplined in their behavior. Lumosity is an app that aims to train students on aspects like logical ability, concentration, and vocabulary with interesting games, and it generates regular reports on individual performance.
Positive reinforcements are imperative, and students and kids, more than anyone, need them. The small gestures of positive reinforcement can boost their confidence through the remote digital learning process. Here are some ways of rewarding students in a gamified learning environment:
- Coins — Students can be rewarded with thousands of coins, 10 at a time, for a single learning activity.
- Leaderboards — Leaderboards will help students gain more confidence in their learning when they compete against yesterday's version of themselves.
- Badges — Good students or even average students can be awarded a badge in recognition of their efforts.
- Trophies — Trophies are shiny, regardless of whether they are physical or digital. One awarded to a student will keep reminding them of their capabilities and a job well done.
Perhaps a kid didn't get a trophy for winning the quiz, but he/she can still be rewarded for their work with access to their favorite superhero movie. So, when students experience fun and associate positive thoughts with learning activities, everyone's work gets better and more efficient. Gamification can do wonders at all levels, from kids to adults in college. To create your own custom gamified learning experience for students, as an educational institution, or even as a teacher or a parent, get in touch with a renowned digital transformation and customer experience provider.
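As a rough illustration of the coin, badge, and leaderboard mechanics described above, here is a small, hypothetical Python sketch. The class names, coin values, and badge threshold are invented for the example and are not drawn from any specific platform.

```python
from dataclasses import dataclass, field

@dataclass
class Learner:
    name: str
    coins: int = 0
    badges: list = field(default_factory=list)

    def complete_activity(self, coins_earned: int = 10):
        """Award coins for a finished learning activity and check a badge threshold."""
        self.coins += coins_earned
        if self.coins >= 100 and "Century Club" not in self.badges:
            self.badges.append("Century Club")

def leaderboard(learners):
    """Rank learners by coins, highest first."""
    return sorted(learners, key=lambda l: l.coins, reverse=True)

if __name__ == "__main__":
    students = [Learner("Asha"), Learner("Ben"), Learner("Chloe")]
    for _ in range(12):
        students[0].complete_activity()   # Asha finishes twelve activities
    students[1].complete_activity(30)     # Ben earns a larger one-off reward
    for rank, s in enumerate(leaderboard(students), start=1):
        print(rank, s.name, s.coins, s.badges)
```

Even a toy loop like this shows how small, frequent rewards accumulate into visible progress, which is the behavioural hook the platforms above rely on.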
<urn:uuid:9c3d64de-f38e-462b-ac6c-19e584470c31>
CC-MAIN-2024-38
https://blog.datamatics.com/how-gamification-in-edtech-enhances-student-learning-and-retention
2024-09-07T23:42:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00391.warc.gz
en
0.948395
1,250
3.5625
4
Static Application Security Testing (SAST) is a vital tool for analyzing application source code for security vulnerabilities before it is compiled. SAST takes place very early in the software development lifecycle, which enables early detection and resolution of issues and enhances overall software security. Static application security testing, sometimes referred to as source code analysis or static analysis, is a white-box testing methodology that analyzes application source code for security vulnerabilities before it is compiled. According to Gartner, the term SAST represents a set of technologies created to help developers analyze binaries, byte code, and application source code to detect coding and design conditions that flag security flaws.
SAST is a commonly used application security (AppSec) tool which identifies and helps remediate the underlying root causes of security vulnerabilities. SAST tools do not need a system to be running to perform a scan because they analyze web applications from the inside out. For example, SAST testing may be used for regulatory compliance with the Payment Card Industry Data Security Standard (PCI DSS), or to improve insight into software risk. The reality is, there are far more developers than security staff. SAST tools can analyze 100% of the codebase much more rapidly than human secure code reviews; it takes only minutes for these tools to scan millions of lines of code and identify critical vulnerabilities. Ultimately, these tools help organizations achieve key goals, such as:
- Shifting security testing left: integrating SAST into the earliest stages of software development helps detect proprietary code vulnerabilities and other security issues during the design stage, while they are easier to resolve.
- Following secure coding standards: SAST readily identifies basic coding errors, so development teams can easily comply with best practices for secure coding.
- Fitting into existing workflows: integrating SAST into the existing CI/CD pipeline and DevOps environment saves developers from needing to trigger or separately configure scans. This makes scanning more efficient and convenient, and it eliminates the need for developers to leave their environment to conduct scans, see results, and remediate security issues.
Because it can take place without code being executed and does not require a working application, SAST takes place very early in the software development life cycle (SDLC). This helps developers quickly resolve issues and identify vulnerabilities in the project's initial stages without passing vulnerabilities on to the final application. SAST testing typically happens in several steps.
There are important differences between SAST and DAST. Static application security testing comes early in the CI pipeline and focuses on bytecode, source code, or binary code to identify coding patterns that are problematic or conflict with best practices. Although modern SAST supports multiple programming languages, the methodology is programming-language dependent. Dynamic application security testing (DAST) is an approach to black-box testing. Because it requires a runtime to scan applications, it is applied later in the CI/CD pipeline. DAST doesn't depend on a specific programming language, so it is a good method for preventing regressions.
Given the major differences between DAST and SAST tools, best practice in most organizations is to use both: a SAST tool and a DAST tool complement each other, and each finds vulnerabilities the other does not. Like DAST, interactive application security testing (IAST) focuses on application behavior during runtime. However, IAST analysis takes more of a hybrid approach, combining analysis of internal application flows with scanning and black-box testing. IAST is most beneficial in its ability to connect source code with DAST-like findings. But this also makes IAST programming-language dependent (as it needs to scan source code) and restricts it to later stages of the CI/CD pipeline.
The SCA vs SAST comparison is somewhat of an apples-to-oranges one. Software composition analysis (SCA) focuses on the application's third-party code dependencies. In contrast to SAST, SCA tools discover all software components, including all direct and indirect dependencies and supporting libraries. SCA is very useful for applications that use many open-source libraries.
In general, there are several steps to implement SAST:
- Customize the process to identify new security flaws or reduce false positives by revising old rules or creating new ones.
- Prioritize results based on factors such as severity of threat, compliance issues, CWE, responsibility, risk level, or vulnerability.
Experience the speed, scale, and security that only Noname can provide. You'll never look at APIs the same way again.
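To give a flavour of what "analyzing source code without executing it" means in practice, here is a deliberately tiny, hypothetical Python sketch of a SAST-style rule: it parses a file into an abstract syntax tree and flags a handful of dangerous calls. Real SAST products apply thousands of such rules plus data-flow analysis across many languages; the rule list below is illustrative only.

```python
import ast
import sys

# Calls that a real SAST rule set would typically flag; illustrative only.
RISKY_CALLS = {"eval", "exec", "os.system", "pickle.loads", "subprocess.call"}

def call_name(node):
    """Return a dotted name like 'os.system' for a Call node, if resolvable."""
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return None

def scan(path):
    """Parse a source file without executing it and report risky call sites."""
    with open(path, encoding="utf-8") as fh:
        tree = ast.parse(fh.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            name = call_name(node)
            if name in RISKY_CALLS:
                findings.append((path, node.lineno, name))
    return findings

if __name__ == "__main__":
    for target in sys.argv[1:]:
        for path, line, name in scan(target):
            print(f"{path}:{line}: potentially dangerous call to {name}()")
```

Because the analysis works on the parse tree rather than a running application, it can be wired into a CI pipeline and run on every commit, which is exactly the "shift left" property the article describes.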
<urn:uuid:eb8ce291-665f-445d-97f5-04ba1f6918f1>
CC-MAIN-2024-38
https://nonamesecurity.com/learn/what-is-sast/
2024-09-08T00:13:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00391.warc.gz
en
0.918957
980
2.765625
3
OpenStack was born at the nexus of high performance computing at NASA and the cloud at Rackspace Hosting, but it might be the phone companies of the world that help it go mainstream. It is safe to say that most of us probably think that our data plans and voice services for our mobile phones are way too expensive, and as it turns out, that is exactly how our mobile phone operators feel about the racks and rows and datacenters that they have full of essentially proprietary network equipment that comprises their wired and wireless networks. And so, all of the big telcos, cable companies, and service providers are making a giant leap from specialized gear straight to virtualized software running on homegrown clouds, and it looks like the open source OpenStack cloud is becoming the de facto standard for building clouds that provide network function virtualization. That the telcos, cable companies, and other service providers would gravitate towards an open source cloud controller to underpin the next generation of their networks is no surprise at all. The Unix open systems revolution began in 1969 at Bell Labs, which gave us the C compiler and the Unix operating system that was written in it, and the telcos were among the first builders of large scale distributed systems. They were, in fact, the hyperscalers of their time, but were probably more on the kiloscale to be honest. The phone companies operate slightly different kinds of compute farms and networks than the typical hyperscaler or cloud builder. They tend to be a lot more distributed, with multiple tiers of gear spreading out from central datacenters and getting closer and closer to the users in aggregation centers and points of presence, as the most local facilities are called. (We discussed this architecture of telco clouds recently with the folks at commercial OpenStack distributor Mirantis.) When you add all of the nodes up, they are deploying thousands to tens of thousands of OpenStack nodes as they virtualize their network functions. Interestingly, right now the big telcos are not so much worried about containers – they seem to be happy to get their network functions virtualized and running on generic X86 iron. Once that is accomplished, you can bet that they will be working to get the monolithic software behind those network services busted up into microservices and running in containers, but for now, they are mostly happy just getting their software virtualized and off of expensive appliances. At the OpenStack Summit this week, the big operators were on hand, including AT&T, Verizon, China Mobile, Comcast, TimeWarner Cable, and Swisscom, to name a few. AT&T and Verizon, two of the largest telcos in the world, gave out specifics of their own OpenStack implementations and showed how they were using the open source cloud controller at scale. Sorabh Saxena, senior vice president of software development and engineering at AT&T, gave a presentation about the company's AT&T Integrated Cloud, and said that the company expected to be able to move 75 percent of the network applications that are currently running on specialized appliance gear to OpenStack clouds by 2020. One of the reasons that AT&T wants to do this is the same one that compelled hyperscalers to build their own systems, switches, network and server operating systems, and other components, and that was so they could scale the infrastructure faster than its cost grew. The infrastructure scale is daunting for the mobile phone operators.
Saxena said that mobile data on the AT&T network grew by 150,000 percent between 2007, before the age of smartphones, and 2015, when these devices were the norm, and that the network was shuffling 114 PB of data daily across AT&T's network backbone and, worse still, expected that data traffic to grow by a factor of 10X between now and 2020. That is not too many years away, and you can see now why AT&T and its telco peers want to shift to commodity iron and open source software to support network functions. Not only will the individual datacenters have to scale up as data volumes grow, but AT&T also has to span nearly 1,000 zones around the globe. "The economic gravity of this reality says that we must transform our approach to building networks," Saxena explained. "And our answer to the challenge is to transition from purpose-built network appliances to open, whitebox commodity hardware that is virtualized and controlled by AIC. We are also liberating the network functions from the same purpose-built network appliances into standalone software components and managing the full application lifecycle with both local and global controllers. Taking this approach prevents vendor lock-in and allows us to have an open, flexible, modular architecture that serves the business purposes of scaling to meet the explosive growth at lower cost, increasing speed of feature delivery, and providing much greater agility." AT&T did a lot of work to modify OpenStack and extend it so it could be used as the foundation for its new global network. Ten core OpenStack components are used in AIC, with three more being added to the mix later this year. One interesting aspect of the AIC stack that AT&T has developed is that it uses the same exact code base for both enterprise workloads (which run the AT&T business) and carrier grade workloads (which run the network). Usually, the carrier grade variant of a software stack is more ruggedized and tends to evolve slower, but AT&T is essentially standardizing on the carrier grade version throughout its organization and making sure it performs well enough to do both jobs. Saxena highlighted the Murano application catalog, which allows for the automated onboarding of network functions across multiple AT&T zones, and Fuel, which is being used to automate the deployment of zones themselves, as key components of the stack. (Both of these were created by Mirantis and contributed back to the community, by the way.) AT&T developed its own components as well, including a Resource Creation Gateway, which deploys applications to particular zones through OpenStack, and the Region Discovery Service, which acts as a reservation system to show the network applications running on regions on the AT&T network. Remember, there will be nearly 1,000 OpenStack clusters, so finding stuff will not be trivial, and every site will not be a cookie cutter datacenter with exactly the same hardware and applications. But perhaps the most important thing that the AIC configuration tools do is get network engineers out of the habit of using Excel spreadsheets and cabling diagrams to manually configure the network. This is all done through a web interface, and all done virtually on the network, and the software is even being extended so AT&T's customers can use a self-service portal to reconfigure their own network services, running on local POPs, on the fly as they need to.
To make this work, AT&T had to come up with a local controller that could run inside of a zone and integrate with the Neutron networking APIs inside of OpenStack. Then, AIC required a centralized management controller that worked at a global level and integrated all of those OpenStack clusters with the single set of ordering, tech support ticketing, and monitoring tools that the telco already had in place. This global controller is called Enhanced Control, Orchestration, Management and Policy, or ECOMP for short. (It is not clear if AT&T will be open sourcing ECOMP, but probably not.) The upshot of switching to OpenStack and NFV is that AT&T can provide these services from within its datacenter rather than putting five or six or more devices at the site of customers subscribing to its network services, and it also means that it can reconfigure the network for customers on the fly rather than taking weeks or months to do it, as its mobile network requires based on load. The NFV approach also means that AT&T can grow its network much faster. It took ten months for AT&T to deploy its first 20 OpenStack clusters before all of this automation was created, and it just recently added 54 zones in less than two months. That is more than an order of magnitude faster, which matches the expected data growth AT&T is experiencing. The company did not say how much less costly this approach will be than using network appliances from various vendors, but it has to be pretty substantial to warrant all of this engineering.
Red On Red
Ahead of the summit this week in Austin, we caught wind that Verizon was also going to be discussing its NFV efforts on top of OpenStack. So we spoke to Chris Emmons, director of software defined network planning and implementation at Verizon, about its OpenStack setup. "We have a plan over the next three years to virtualize all of the direction network elements for Verizon infrastructure for both wireless and wireline services," Emmons said, adding that he believes this will be the largest NFV implementation in the world, running across tens of thousands of servers by the time Verizon is done. "We want our datacenters to look like everybody else's datacenters because that is the best way to get the cost points and operational efficiencies that we are trying to get to." Like AT&T, Verizon will be rolling out OpenStack to its aggregation and edge sites, too, as well as in its core datacenters where the switching gets done and many services reside. Rather than roll its own OpenStack code, which Verizon is perfectly capable of doing, it has chosen Red Hat's OpenStack Platform (at the Liberty release from last fall), and Emmons says that the company specifically wanted to go with an off-the-shelf implementation and did not want to fork the code in any way. Verizon looked at the OpenStack implementations from Canonical, Hewlett Packard Enterprise, and Mirantis as well, but chose Red Hat because of its long history of supporting Linux in commercial settings. As you might expect, Verizon is using KVM as its hypervisor on the OpenStack cloud, and while it is keeping an eye on Mesos, Kubernetes, and other container services, for the most part the network applications have been running as monolithic Linux code for a long time and it is not yet appropriate to try to refactor these apps to run them in containers.
Verizon has some homegrown code to run its wireless network, but a lot of it comes from Alcatel Lucent and Ericsson, and in the wireline business there are a slew of appliance providers who supply that code. Just getting everything on a common hardware and software substrate will be such a big improvement that this is what Verizon – like other telcos – is focused on. Verizon’s OpenStack is being deployed on plain vanilla PowerEdge servers from Dell at the moment, but Emmons says that the company is looking at Open Compute vendors and custom gear from Dell’s Datacenter Scalable Solutions unit or various original design manufacturers for further down the road when it is scaled up. The OpenStack clusters are implemented using a Clos network architecture (like Google, Facebook, and Microsoft use for their infrastructure), and for now Verizon is using whitebox switches from Dell, network operating systems from Big Switch Networks, and hooking them all into OpenStack through the Neutron virtual networking module. Verizon is at the beginning of its rollout and has OpenStack clusters deployed in five datacenters and several aggregation sites now, with more to come this year and beyond until the network functions are virtualized three years hence. Emmons, like Saxena, is not able to comment on the TCO benefits of this NFV approach using OpenStack, but did confirm that it is not just about saving on capital expenses, but also about gaining operational efficiencies and being able to scale up the network and add new features and functions more quickly.
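As a hedged illustration of the kind of Neutron and Nova calls that sit underneath orchestration layers like the ones AT&T and Verizon describe, here is a short sketch using the openstacksdk Python library. The cloud name, network addressing, and image/flavor IDs are placeholders, and a real carrier deployment would drive these APIs through its own controllers rather than a standalone script.

```python
import openstack

# Connect using credentials for a named cloud defined in clouds.yaml (placeholder name).
conn = openstack.connect(cloud="telco-zone-01")

# Neutron: create an isolated tenant network and subnet for the virtual network function.
net = conn.network.create_network(name="vnf-data-plane")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="vnf-data-plane-subnet",
    ip_version=4,
    cidr="10.10.0.0/24",
)

# Nova: boot the VNF image on that network (IDs below are placeholders to replace).
server = conn.compute.create_server(
    name="vfirewall-01",
    image_id="REPLACE-WITH-IMAGE-UUID",
    flavor_id="REPLACE-WITH-FLAVOR-UUID",
    networks=[{"uuid": net.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```

The value of wrapping calls like these in higher-level controllers, as AIC and ECOMP do, is that the same provisioning logic can be repeated across hundreds of zones without the spreadsheet-and-cabling-diagram workflow the article describes.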
<urn:uuid:196bee28-a995-4d21-88b0-eb30f0579eef>
CC-MAIN-2024-38
https://www.nextplatform.com/2016/04/27/telcos-dial-openstack-mainstream/
2024-09-07T23:56:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00391.warc.gz
en
0.961697
2,450
2.515625
3
Using biodegradable materials and increased recycling potential, packaging made possible by nanotechnology can contribute to a global decrease in carbon footprints and plastic waste. The manufacturing of ecologically friendly packaging that reduces negative effects on the environment without reducing product quality is made possible by technological advancements in nanotechnology, which are opening avenues for ethical businesses and consumers alike. The market for nano-enabled packaging is seeing a sharp increase in active packaging technologies. Because active packaging may interact with contents to prolong shelf life and preserve product quality, it performs better than typical passive packaging. For example, packaging infused with nanoparticles can absorb excess moisture and gasses and release antimicrobial agents. In the food and pharmaceutical industries, where maintaining freshness and preventing spoilage are crucial, this dynamic packaging approach is gaining popularity. Because of their greater utility and added value, active packaging solutions are expected to be adopted by manufacturers and consumers alike. Another noteworthy development in packaging technology is the incorporation of Internet of Things (IoT) technologies. Product tracking and supply chain management can be improved with the real-time data gathering and monitoring offered by smart packaging systems that include IoT capabilities. Enhancing transparency and trust, this technology may provide customers with comprehensive information on the state of the food, including freshness indicators and temperature histories. IoT and nanotechnology together are going to change the packaging business by providing previously unheard-of levels of control and information as the need for intelligent and connected packaging continues to rise. Finally, the market for nano-enabled packaging is seeing an increasing trend toward customized packaging options. Packaging can now be customized to fit specific needs, such as targeted antibacterial effects or specialized barrier qualities for distinct products, thanks to advancements in nanotechnology. This degree of customization can help producers package a range of products better, enhancing shelf life and product protection. Offering personalized packaging options becomes a crucial competitive advantage as customers increasingly look for items that meet their unique needs and preferences. This promotes innovation and market expansion.
<urn:uuid:8e52df52-e7c7-43a1-89c5-251deb861da1>
CC-MAIN-2024-38
https://www.gminsights.com/industry-analysis/nano-enabled-packaging-market/market-trends
2024-09-09T04:31:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00291.warc.gz
en
0.930836
401
3.15625
3
When you copy something on your PC (Ctrl + C), it's automatically copied to your clipboard for you to paste (Ctrl + V). This lets you save time when you need to reuse text or images, rather than retyping or recreating them by hand. Clipboard history, available on Windows devices running Windows 10, lets you paste multiple items from your history. You can save time copying images and text of various sizes, while pinning the items you tend to use all the time. With a Microsoft account on some managed devices, you can sync your clipboard history to the cloud and have it available on other modern Windows devices.
- On your Windows 10 device, right-click the Start icon and open Settings.
- Inside Windows Settings, select System and choose Clipboard on the left-hand side.
- Turn on Clipboard history.
- Optionally, turn on "Paste text on your other devices..." under Sync across devices. NOTE: This feature is tied to your Microsoft account or your work account, so remember to use the same login information on all your devices.
- To open your clipboard history menu at any time, press the Windows key + V. Before selecting the pasted information, remember to select the field where the information should go.
- You can also find more options inside each item by selecting the three dots beside it:
- Pin any frequently used items by choosing an individual item from your clipboard menu.
- Select Clear all to delete all items from your clipboard history except pinned items.
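For administrators who prefer to script this setting rather than click through Settings on every machine, the toggle is commonly reported to map to a per-user registry value. The key path and value name in the sketch below are assumptions to verify on your own Windows build before relying on them; when in doubt, use the Settings UI or the Windows key + V prompt instead. A minimal sketch using Python's built-in winreg module (Windows only):

```python
import winreg

# Assumed per-user key and value backing the "Clipboard history" toggle; verify first.
KEY_PATH = r"Software\Microsoft\Clipboard"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "EnableClipboardHistory", 0, winreg.REG_DWORD, 1)

print("Clipboard history value written; sign out and back in if the change does not take effect.")
```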
<urn:uuid:0783b3fb-a00e-4bee-b14e-f8a94852cd0f>
CC-MAIN-2024-38
https://support.mobile-mentor.com/hc/en-gb/articles/360060290853-Clipboard-History
2024-09-12T18:07:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00891.warc.gz
en
0.833887
308
2.546875
3
It's time to talk about 'smishing,' the less well-known brother of phishing. Though many people can spot email phishing attacks, they are much more vulnerable to the threat of smishing. A big part of this increased threat is the lack of education among the general public. So, what is smishing? And why do people need to be aware of it? This article delves into the meaning of smishing and how to defend against it by providing 10 facts everyone should know about this lesser-known cyber threat.
What is smishing?
Smishing is a type of cyber attack where the criminal tries to get sensitive information by texting people. Scammers send messages that seem innocent, but they trick people into clicking bad links or downloading harmful content. According to research, 97 per cent of Americans own a cell phone. This means that the vast majority of people are at risk of being defrauded by smishing. Unlike many forms of fraudulent activity, this attack affects all demographics. Of course, it isn't just the general public affected by smishing. Many businesses also need education. Smaller companies, in particular, are often victims of cyberattacks. As such, raising awareness is of utmost importance. In this article, we'll tell you ten key facts about this relatively unheard-of concept and the impact it can have. First of all, though, let's look a little into its history.
The origins of smishing and its key characteristics
'Smishing' is a relatively recent addition to the cybersecurity lexicon, yet the concept goes back to the mid-2000s. As people started using cell phones and sharing numbers, cybercriminals began 'smishing.' Over time, smishing techniques have become more advanced, with hackers today using new technologies like generative AI to make attacks more convincing. The SMS messages received look genuine and innocent. They look like they could come from a real organization like Amazon, PayPal, delivery services, and more. Psychological manipulation is commonplace: these cybercriminals use social engineering to prey on people's emotions. As useful concepts like preview dialling are introduced, cybercriminals soon latch on and use them for malicious intent. To make their smishing attempts more believable and successful, they learn about their potential victims.
How to defend against smishing
The threat of smishing is going nowhere, and phone users need to be proactive to remain one step ahead of hackers. The many ways that smishing tricks people make it hard to know what will happen next. One hopeful defence is technology. By using the power of artificial intelligence, we can better detect and prevent smishing attacks. Organizations can use AI to make mobile apps and messaging platforms smarter and more secure. AI can also analyze and identify suspicious patterns and behaviours to keep users safe. Technology alone isn't enough, however. Both organizations and individuals need to stay up to date on the latest scams. Awareness and education on smishing remain key to fighting this evolving and pervasive threat.
10 Facts about Smishing
Without further ado, here are ten important facts you need to know about smishing if you want to defend yourself against this emerging threat.
Smishing is relatively unknown as a concept
With awareness being so limited, it's no surprise that people don't know how to differentiate between legitimate messages and smishing attempts. Although knowledge is growing year on year, it's still a surprise that only 23% of baby boomers knew about it in 2020.
Don't blame age, though: only one-third of millennials knew about it. To recognize smishing attempts, individuals need to know about the red flags. These include:
- Urgent requests - a sense of urgency makes people act without thinking things over.
- Generic greetings - they'll address you as 'dear customer' rather than by your name.
- Poor use of language - if you see misspellings or grammatical errors, be wary.
- Requests for personal information - legitimate organizations don't ask for this via text message.
- Unfamiliar links - links within text messages might not be genuine.
Smishing can be costly
Smishing has the potential to cause large financial losses should people fall victim. With the clever techniques used, victims are encouraged to divulge information like their social security numbers or their account credentials. Attackers then use this data to gain unauthorized access to IT networks and systems, steal identities, and commit data breaches. Financial losses can also occur without individuals giving away information: these malicious messages can deliver malware to a phone and, therefore, to any network it connects to. This can cause substantial losses for big companies as well as individuals. Besides financial losses, smishing can lead to stolen identities and data breaches. According to research, 90% of data breaches use social engineering components like those seen in smishing attacks. In the next two years, the global cyber insurance market is projected to reach 22 billion USD. This shows how important IT security risk management is.
Smishing is on the rise
Though it was relatively unknown in its early years, statistics show that smishing is more prevalent than ever. The term itself was introduced in 2006 off the back of phishing, but it hasn't yet become a part of the general lexicon outside IT circles. Statistics on smishing are alarming: in the third quarter of 2020, Proofpoint reported a staggering 328% increase from Q2. This figure highlights the need for greater awareness, and preventative action alongside it. Sadly, when new technologies, like contact center technologies, are adopted, cybercriminals also take advantage of them.
COVID-19 worsened smishing
The COVID-19 pandemic had consequences beyond the obvious threats to public health. As governments grappled with uncertainty, new rules, and lockdowns, they began to relay key information via SMS. Contact tracing, vaccinations, and lockdown information began arriving via text messages. Never before had government leaders communicated with the population in this way. This shift in communication created a wave of smishing. During the pandemic, people were more vulnerable to smishing due to increased anxiety, urgency, and new practices. For instance, the Wireless Emergency Alerts (WEA) system was set up to send important emergency alerts to cell phones. State and local governments and public health agencies also got in on the action and sent text messages with updates and guidance. Though these communications were part of broader efforts, they paved the way for fraudsters to reach vulnerable people. This led to numerous data breaches.
Fake 2FA messages are common
Fraudsters are taking advantage of two-factor authentication messages as more people secure their online accounts in this way. This widely adopted security measure means users need to provide extra verification alongside their passwords. Though 2FA has been around as a concept for some time, it only began to be adopted widely in the 2010s.
Methods like time-based one-time passwords (OTP) and facial and fingerprint recognition are now common. Cybercriminals use smishing to trick people into giving away sensitive data by sending fake 2FA messages. Essentially, these exploit our natural tendency to trust security-related requests, so we respond promptly without question.
Hackers will use fake, local numbers
A cunning tactic among hackers is to use fake local numbers. When a number seems to be from your local area, it creates a sense of authenticity, and people are more likely to respond to things that are familiar to them. This psychological ploy makes people less suspicious, as they are more likely to trust a local source. Once again, hackers use legitimate tools like VoIP telephone systems for fraudulent gain. The ease of setting up virtual numbers with VoIP technology has made it simpler for smishers to deceive recipients. Unfortunately, we can't prevent great technology from being exploited in this way.
Smishing is the most common type of phishing for mobile users
Smishing is the most common way for mobile users to be 'phished.' In 2022, more than 30% of users were exposed to attacks every quarter, according to the Global State of Mobile Phishing Report. It was the highest rate ever recorded. After email, this is now one of the most common phishing techniques. The current threat landscape is indicative of our increasing reliance on mobile technology, and cybercriminals are simply exploiting this trend. As technology continues to advance, users must be vigilant. Understanding the risks and the latest security developments is crucial. Businesses should try to introduce robust cybersecurity programs where they can.
Don't presume secure apps are protected
Many phone users prefer messaging apps like WhatsApp, Facebook Messenger, and Signal to stay in touch. But just because these apps utilize forward-thinking features like AI in UX design doesn't mean they are invulnerable. Anyone can use them, including hackers, and any software could be a victim of malware. More and more businesses are working remotely and using technology like remote desktop apps on iPads, for example. This allows users to access their networks wherever they are. However, this convenience brings headaches too. Hackers can use remote desktop apps and mobile devices to access screens and control them. This creates a pathway for smishing attacks to infiltrate a supposedly secure environment.
Smishing can be reported
An important fact to finish with is that anyone can report smishing attempts. All U.S. mobile carriers have come together in the fight against fraud. Anyone who receives a suspicious message can forward it to 7726 (which spells SPAM) to report it. When you do this, you get a message back asking you to provide the number that carried out the smishing attempt. It's also possible to report messages to Google and Apple from your phone. As the sender won't be in your contacts, the 'report junk' option will be visible underneath the message; you can tap this to report it.
Tax and fake delivery notifications are the most common types of smishing
Two of the most widespread types of smishing attacks involve messages about taxes and delivery notifications. It's easy to understand why: a text message alerting you to some tax-related issue is likely to get your attention and prompt quick action from you.
Similarly, if you are used to shopping online often and are currently expecting a parcel to reach your address, receiving a delivery notification wouldn’t seem that suspicious at first glance. However, you can easily spot signs of smishing by checking your messages a bit more closely. More often than not, you’ll be able to identify parts of the text that are either incorrect or that sound downright “scammy”.
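The red flags listed earlier lend themselves to simple automated checks. Below is a toy, hypothetical Python sketch that counts how many of those flags a message trips; a real anti-smishing system would combine trained models, sender reputation, and URL intelligence rather than a few regular expressions, and the keyword lists here are illustrative assumptions.

```python
import re

# Illustrative red-flag heuristics drawn from the list above.
URGENCY = re.compile(r"\b(urgent|immediately|suspended|verify now|act now|final notice)\b", re.I)
LINK = re.compile(r"https?://\S+", re.I)
INFO_REQUEST = re.compile(r"\b(password|ssn|social security|account number|pin)\b", re.I)
GENERIC_GREETING = re.compile(r"\b(dear customer|dear user|valued customer)\b", re.I)

def smishing_score(message: str) -> int:
    """Count how many red flags a text message trips (0 to 4)."""
    checks = (URGENCY, LINK, INFO_REQUEST, GENERIC_GREETING)
    return sum(1 for pattern in checks if pattern.search(message))

if __name__ == "__main__":
    sample = "Dear customer, your account is suspended. Verify now: http://example.com/login"
    score = smishing_score(sample)
    print(f"red flags: {score} -> {'suspicious' if score >= 2 else 'probably fine'}")
```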
<urn:uuid:c375e561-75f9-4ac8-af3a-726caa754a12>
CC-MAIN-2024-38
https://em360tech.com/top-10/facts-about-smishing
2024-09-16T13:46:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00591.warc.gz
en
0.949876
2,219
2.984375
3
Information technology plays an increasingly important role in modern dentistry, contributing to the quality and effectiveness of oral health care for patients. There are many forms of IT used in the dentistry industry, including the following:
In healthcare, practice management software handles the business aspects, such as scheduling patients, billing, and monthly reports. Essentially, practice management software keeps the dental office running smoothly.
Often, the term 'Electronic Health Record' refers to a particular information system that utilizes various technologies, standards, and interfaces in order to create, manage, store, and share the information associated with patients' electronic health records. For patients, electronic dental health records offer improved treatment with fewer errors in their personal health information.
Healthcare and dental professionals use electronic materials management to track and manage the inventory of medical supplies, medication, and various other materials. Electronic materials management is similar to the enterprise resource planning systems often used in other industries outside of healthcare.
Backing up your data involves making a copy of your important files and associated data, then storing the copy in a safe and secure place. It's important for dentists to implement a best-practice electronic backup system. In fact, it's essential to the financial well-being of any dental practice. Furthermore, HIPAA security standards require a contingency plan, which includes data backup and disaster recovery.
The Potential of IT for Dental Professionals
The rapid development of information technology and the wide availability of personal computers, combined with email, the Internet, and medical literature retrieval applications, have altered the way dentists are able to learn and practice within the field of dentistry. The future potential for dental professionals is vast. Technological advancements can be applied to many aspects of the industry, providing benefits to employees and patients alike. However, dental offices must be aware of and learn about the innovative information technology available in order to reap the benefits. To arrange for an assessment of your current IT infrastructure and get more information about IT for the field of dentistry, give us a call at (613) 828-1384 or email us at email@example.com today. Fuelled Networks is your trusted IT team for your dental practice.
<urn:uuid:d0bc9934-b034-4277-976e-11359a1bc636>
CC-MAIN-2024-38
https://www.fuellednetworks.com/your-dentist-practices-needs-top-it-services/
2024-09-16T12:04:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00591.warc.gz
en
0.928214
460
2.71875
3
As a consequence, operators are beginning to rethink their cooling practices, incorporating fresh possibilities to meet increased demand and a changing landscape. Asserting itself as a viable alternative to air, liquid has become an increasingly sought-after method of cooling. With the heat absorption rates of liquids far greater than those of air, this method can transport heat more effectively, reducing (or in some cases eliminating) the requirement for mechanical cooling plants. While there are time and resource investments required to deploy this technology, it can have a profound impact on maintaining a more sustainable and efficient facility.
However, determining which liquid cooling solution is suitable for your data center depends on a number of factors. These include rack densities, the local environmental conditions, space constraints, and water usage restrictions, as well as whether the building is retrofitted or newly built. Another major decision revolves around how close you want to bring the liquid cooling to the electronics. Some operators prefer the use of rear cabinet door heat exchangers, allowing continued use of traditional air-cooled IT inside, while a more progressive approach is to pump coolants directly to the chips within the IT chassis.
With such a breadth of variables, many organisations need a verifiable way to test this technology before committing to it. Often they'll want to understand whether there is any way to compare air-cooled and liquid-cooled data centers. In order to answer such questions, let's take a look at the role a digital twin can play in anticipating unforeseen challenges in liquid cooling deployment.
Understanding liquid cooling deployment with data-driven digital twins
A 3D replica of a physical system or object, a digital twin can be studied, altered, and trialled to assess the impact of changes to its real-life counterpart. Crucially, ideas can be safely tested in the digital realm before they're introduced into the real world. For example, thanks to their built-in Computational Fluid Dynamics (CFD) engines, digital twins can be used to accurately understand and model the implementation of liquid cooling in the data centre.
For many considering liquid cooling deployments, this is a step into the unknown. This is especially true when considering upgrading existing air-cooled facilities to accommodate some elements of liquid cooling. Without a digital twin in place, engineers may rely on vendor promises or consider time-consuming experimental testing, resulting in project delays and increased costs. In stark contrast, by implementing a digital twin to assess the optimum set-up of a liquid cooling system, engineers are able to understand the consequences of deployment within the framework of the legacy air-cooled infrastructure. This enables businesses to make an informed decision, based on the science, as to whether they should go liquid or not.
A perfect partnership is formed between the ability to project outcomes and the ability to manage systems effectively on an ongoing basis. By implementing a digital twin, businesses can use technologies such as liquid cooling more effectively, allowing them to capitalise on financial and environmental opportunities alike.
Unlocking the value of liquid cooling with digital twins
Changing infrastructure is an ever-present challenge in the data center industry. And change has never historically been easy to handle.
Fortunately, as business demands on technology accelerate, data center CFD simulation has proven itself as a sure-fire way to try out new technologies prior to any commitment. Giving greater flexibility in turbulent times, digital twins not only support the commercial strategy of the business but also help deliver a greener and more sustainable data center strategy.
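As a back-of-the-envelope illustration of why liquid moves heat so much more effectively than air, a point made earlier in this piece, the short Python sketch below compares the volumetric heat capacity of the two; the property values are standard textbook figures at roughly room temperature (an assumption, not data from the article).

# Rough comparison of how much heat a given volume of air vs. water can carry
# away per kelvin of temperature rise (assumed textbook property values, ~20 C).
air_rho, air_cp = 1.2, 1005        # kg/m^3, J/(kg*K)
water_rho, water_cp = 998.0, 4186  # kg/m^3, J/(kg*K)

air_volumetric = air_rho * air_cp          # ~1.2e3 J/(m^3*K)
water_volumetric = water_rho * water_cp    # ~4.2e6 J/(m^3*K)

print(f"Water carries roughly {water_volumetric / air_volumetric:,.0f}x more heat "
      "per unit volume per kelvin than air.")

This is only one part of the picture, of course; pumping power, plumbing and serviceability all feed into the digital twin analysis described above.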
<urn:uuid:13f5650b-f23f-485d-a069-08ca74b0d770>
CC-MAIN-2024-38
https://www.datacenterdynamics.com/en/opinions/a-superior-partnership-digital-twins-and-liquid-cooling/
2024-09-19T01:19:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00391.warc.gz
en
0.933806
778
2.84375
3
Sometimes The Best Edtech PD Isn't About Tech At All

By investing in smart professional development, schools and districts can dramatically increase their educators' confidence with educational technology, while better ensuring that their investments in such tools will boost student outcomes. A version of this piece originally appeared in the eSchool News online publication.

As the school year begins, teachers are undoubtedly beginning to implement a great new technology to support classroom learning that was introduced during the back-to-school professional development meetings. These teachers, enthused to get to know the new faces sitting in the chairs before them, must also balance incorporating these new technologies into the classroom environment. To support this transition for educators, administrators are increasingly considering alternative strategies that bring together innovation (preparing students for the 21st century) while ensuring educators have the support necessary to implement these new technologies.

Schools and districts are spending billions on educational technology, even while questions continue to swirl around whether such investments yield solid returns. Few companies can reliably ensure the educational outcomes that teachers and administrators expect, and according to one estimate, just 35 percent of edtech tools purchased are actually being implemented. Barriers to successful implementation often have little to do with the technology itself or teachers' comfort with technology overall. Instead, success is impeded by a lack of strategy on how to integrate the technology into the classroom. Even as they spend up to $18,000 per teacher per year on professional development, schools and districts have underinvested in quality professional development that focuses on the skills and know-how educators need to make educational technology effective in the classroom. It's not from a lack of demand, though: research nearly always suggests that educators are asking for more and better training. Districts and schools must meet this demand and provide the very best educational technology professional development, focusing less on the technology itself and more on fundamental pedagogical strategies that can bridge the divide between investments, implementation, and outcomes.

Focused Instructional Decisions

The promise of educational technology stems, in part, from its ability to generate data that can inform instructional strategies. Data can inform small group instruction, help teachers pair students, identify gaps early, and even challenge conventional wisdom about how and why learners construct knowledge. Whether that means using AnswerGarden to collaboratively build a word cloud to assess how a class is absorbing material or using Perusall to review a group of students' "confusion report," there are plenty of tools teachers can leverage to make data-informed decisions about their instruction. Effective professional development should share best practices and tools that will support teachers in maximizing their instructional time by using the information they get from edtech tools to become laser focused on students' specific needs.

The Collaboration Conundrum

Education can be an isolating profession. Teacher-innovators often feel like they are working in a vacuum that offers few opportunities to engage with and learn from the experiences of their peers.
That's not surprising when so much of their professional development seems to ignore the value of collaboration. Just 9 percent of professional learning opportunities offered to teachers have collaborative formats. Effective professional development should provide teachers with opportunities to learn and engage in meaningful collaboration. Collaboration is at the core of the professional development services offered by AVID. Participants have opportunities to work with one another, ask questions, share ideas, and challenge thinking in every activity. Relationships are carefully developed throughout the training to produce a safe, trusting environment where participants experience rigorous, hands-on activities that can be taken directly back to the classroom. This sort of interaction also lays the foundation for conversations that challenge existing views and pedagogy, allowing teachers to consider the more innovative and inclusive teaching practices afforded by digital tools.

Match Outcomes and Strategy

Effective professional development should provide teachers with instructional strategies that go beyond explicitly teaching a new technology. The focus should be on learning goals first, and digital tools second. Tech-savvy educators approach instruction by defining the content students need to learn and creating the context to ignite their curiosity. Only then do they determine how learning will occur and which digital tools might support and enhance that learning process. During a professional development session, for example, educators could take part in what AVID calls a digital jigsaw, researching best practices for digital organization and sharing their findings on Padlet or other real-time collaboration tools. Padlet allows group members to take notes collaboratively and have focused discussions within the tool. This emphasis on note-taking in a digital environment helps educators support students in their construction of meaning using tools that match individual learning styles: digital ink, links to relevant resources to reinforce cognitive connections, meta-tags, graphic organizers, video, and sound. Individually, these are disparate tools, but taken together, they form a toolbox that can be accessed with a larger goal in mind. These strategies would be much more difficult to accomplish without the use of Padlet or a similar technology, but learning how to use the technology should not be the only goal. The professional learning experience should be one where learners collaboratively gather and discuss notes in a way that encourages them to process information in a more meaningful, deeper, and efficient way.

By investing in smart professional development, schools and districts can dramatically increase their educators' confidence with educational technology, while better ensuring that their investments in such tools will boost student outcomes.

Thuan Nguyen, a former school district assistant superintendent and CIO, is executive vice president for AVID, where he oversees technical operations, products, and services and is responsible for AVID's digital strategy. Interested in learning more? Please join us at the AVID National Conference, where District Superintendents and College Presidents receive a FREE registration.
<urn:uuid:7652f245-878d-453c-b4de-457fb812508b>
CC-MAIN-2024-38
https://www.govtech.com/education/k-12/sometimes-the-best-edtech-pd-isnt-about-tech-at-all.html
2024-09-19T02:10:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00391.warc.gz
en
0.95116
1,177
2.703125
3
Video can be a great way to share information because it allows you to show someone else what you are trying to explain, how to accomplish something specific, how to build something, or just share information in a way where people can see what you are doing. However, editing videos so users understand the focus of what you are trying to share is an important step. Two of the best ways to accomplish this are to remove sound from areas where it is not needed and to modify the video speed in relevant areas. This post discusses how to remove sound from certain areas in the video as well as how to modify video speeds using Camtasia.

How to Remove Sound & Modify Video Speeds Using Camtasia

Videos can be extremely effective tools for learning, training, demonstrating, and even sharing visual information or items with others. Creating videos does not take very long, but editing them so they can be more useful is important and can take time. Luckily, there are easy ways to remove sound/audio and modify the video speed to better represent the information you are sharing. While editing a video in Camtasia, it is fairly easy to remove sound from a particular clip or to modify the video speed in any clip. First, you will want to start by defining what section in the video you want to either remove sound from or modify the video speed. A clip is a portion of the video within the larger video that you have created. The clip can begin or end at any place, but it is defined as only being a portion of the entire video. Creating clips within a larger video allows you to apply modifications to just those sections of the video, such as removing the audio and modifying the video speed.

Creating a clip
- Open Camtasia and add a video into the Media Bin.
- Once the video has been loaded into Camtasia, drag the video to Track 1.
- Find a location in the video where you want to either remove the audio or modify the video speed.
- Make sure the video tracker is stopped on the location in the video where you want to start your clip.
- Click on the video in the track to highlight it. Once selected, the video will be highlighted in yellow to show it is active. This step is required to be able to clip the video.
- With the video highlighted, click on the split button. This will put a line between the two sections in the video and effectively split them apart. The easiest way to confirm this is by looking for the name of the video in the newly separated section.
- Jump to the section in the video where you want the clip to end.
- Click on the video if it is not already highlighted and then click the split button again.

Removing the sound/audio from a clip
Removing the audio from a clip can be helpful anytime sound was picked up that you don't want to include, but where you do not want to remove the video portion.
- With a clip created, click on the clip you want to remove the sound from.
- Right-click on the clip and select "Silence Audio", or use the keyboard shortcut Shift+S. Once this has been done, you can visually see the audio has been silenced.
NOTE: You can also separate the audio and video or edit the audio by choosing these from the pop-up menu.

Modifying the video speed
Modifying the video speed can be helpful for many reasons, including slowing down to show something really specific or speeding up items that do not need to be in real time.
- With a clip created, click on the clip where you want to modify the video speed.
- Right-click on the clip and select "Add Clip Speed".
- This brings up the clip speed menu.
Enter a clip speed into the box next to "Speed:". You can increase the speed by entering a higher number, or slow the video below its original speed by entering a number less than 1. Once the clip speed has been updated, the clip will resize on the track to reflect how much time it now takes. After adjusting a clip's speed, it is always a good idea to watch the clip and verify that the new speed fits the reason you changed it. Videos can be a great way to share content, including demonstrating how to do certain tasks or talking about something you are knowledgeable about. Editing videos to remove unnecessary audio and/or modify the speed of certain clips within the video can better represent the content you most want users to see. As always, the more careful you are with the content, the more likely users will understand what you are trying to explain!
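For readers who prefer a scriptable route, the same two edits, silencing a span of a recording and changing its playback speed, can also be approximated outside Camtasia. The sketch below uses Python's moviepy library (1.x API) purely as an illustration; the file names and timestamps are placeholder assumptions, and this is separate from the Camtasia workflow described above.

# Silence the 10s-25s span and play it back at double speed, then stitch the
# untouched portions back around it (moviepy 1.x API; paths are placeholders).
from moviepy.editor import VideoFileClip, concatenate_videoclips
from moviepy.video.fx.all import speedx

video = VideoFileClip("screencast.mp4")

# Cut out the clip, drop its audio, and double its playback speed.
edited_clip = speedx(video.subclip(10, 25).without_audio(), factor=2)

# Reassemble: before the clip, the edited clip, then everything after it.
result = concatenate_videoclips([video.subclip(0, 10), edited_clip, video.subclip(25)])
result.write_videofile("screencast_edited.mp4")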
<urn:uuid:a4a57324-3088-4480-9fca-8c0236a3119f>
CC-MAIN-2024-38
https://blogs.eyonic.com/how-to-remove-sound-modify-video-speeds-using-camtasia/
2024-09-20T06:21:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00291.warc.gz
en
0.928566
966
2.796875
3
Artificial Intelligence (AI) is revolutionizing various sectors, promising unprecedented advancements. Whether it's machine learning algorithms enhancing healthcare diagnostics or autonomous systems streamlining supply chain operations, AI's impact is far-reaching and transformative. However, amid this widespread adoption and technological growth, an increasingly pressing concern is emerging: the environmental cost associated with AI's rapid development. From skyrocketing energy consumption and rising carbon emissions to extensive water usage, the challenges that accompany AI's expansion are complex and multifaceted, warranting a closer, more nuanced examination.

While the technological benefits of AI are celebrated, its environmental footprint often remains underpublicized. The advent of AI represents a significant leap in computational requirements, leading to an exponential increase in the resource demands of data centers that power these intelligent systems. As AI continues to evolve, the sustainability of its operations becomes a critical topic of concern, highlighting the need for a balanced approach that ensures technological advancement does not come at an unsustainable environmental cost.

The Accelerating Pace of AI Adoption

The launch of ChatGPT in November 2022 marked a watershed moment in the AI landscape, significantly accelerating the adoption and investment in AI technologies. Following this milestone, the computational power required for AI models has been doubling approximately every 100 days, reflecting an unprecedented pace of technological growth. This rapid expansion has not only fostered innovation across various sectors but has also triggered a wide range of reactions from global economic and social institutions. European regulators, for example, have imposed restrictions on the training of AI models using social media data, while financial bodies like the Bank of International Settlements have expressed concerns regarding AI's potential influence on inflation dynamics.

Despite the diverse responses from regulators and financial institutions, one critical issue often remains underdiscussed: the environmental impact of AI's exponential growth. As AI technologies evolve and scale, the underlying operations necessitate enormous amounts of computational resources, significantly magnifying their energy consumption and environmental impact. This growing environmental footprint underscores the urgency of addressing AI's sustainability challenges, ensuring that the pursuit of technological advancement does not sideline critical environmental considerations.

Energy Consumption and Carbon Footprint

AI operations are fundamentally reliant on data centers, which are themselves significant consumers of energy. As of 2023, data centers represented between 1% and 1.5% of global electricity usage and were responsible for approximately 1% of worldwide CO₂ emissions. These figures are poised to rise in tandem with the growing deployment of AI technologies, amplifying the environmental impact of data center operations. Major tech companies have reported substantial increases in their emissions correlating with AI expansion. For example, Microsoft's emissions surged by 40% between 2020 and 2023, Meta's emissions rose by 65% from 2020 to 2022, and Google's emissions increased by nearly 50% from 2019 to 2023. These significant hikes in emissions illustrate the immense challenge of managing AI's environmental sustainability.
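To make the 100-day doubling figure cited above concrete, the small sketch below shows how quickly that rate compounds; the multi-year horizon is an arbitrary illustration, not a projection taken from the article.

# Compounding implied by compute demand doubling roughly every 100 days.
doubling_period_days = 100
for years in (1, 2, 3):
    growth = 2 ** (365 * years / doubling_period_days)
    print(f"After {years} year(s): ~{growth:,.0f}x the starting compute demand")

At that pace, demand grows by roughly an order of magnitude per year, which is what makes the energy and emissions trends described here so difficult to contain.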
AI-powered applications require considerably more energy than traditional digital operations, which exacerbates the carbon footprint associated with their deployment. As AI continues to permeate various sectors, the demand for robust, energy-efficient data center solutions becomes increasingly critical. Addressing these energy consumption and carbon emission challenges is vital for balancing the benefits of AI with the necessity of maintaining environmental integrity.

Water Usage and Sustainability

Another significant environmental concern linked to AI-driven data centers is their substantial water usage, particularly for cooling purposes. In the United States, data centers consume approximately 7,100 liters of water per megawatt-hour of energy. This high level of water consumption is particularly problematic in regions experiencing severe droughts, such as California, where water scarcity poses prominent sustainability challenges. The heavy reliance on water for cooling data centers in drought-prone areas underscores the need for more sustainable water management practices within the tech industry. In response to these challenges, tech giants like Google, Amazon, and Meta have initiated projects aimed at offsetting their water consumption and achieving water positivity by 2030. These initiatives include the development of resilient watershed landscapes and the promotion of community water conservation efforts. Despite these ambitious goals, the long-term effectiveness and scalability of such projects remain to be fully realized. The industry's commitment to sustainable water usage practices will play a crucial role in mitigating the environmental impact of AI-driven data centers, necessitating continuous innovation and stringent sustainability measures.

Climate Risks and Resource Competition

The placement of data centers near urban areas introduces additional complexities, particularly concerning resource competition and climate risks. As climate change increases the frequency and intensity of extreme heat events, both public infrastructure and data centers face heightened demands for cooling power. This increased demand can strain local power grids, jeopardizing power stability and posing serious risks to public health and infrastructure. Studies have shown that even a 1°C rise in temperature can correlate with higher mortality and morbidity rates, emphasizing the critical need for data centers to adapt to changing climatic conditions. The concentration of data centers in vulnerable regions further exacerbates these environmental and resource challenges. As these regions face evolving climate threats, the additional burden of resource-intensive data center operations can significantly hinder local efforts to mitigate environmental stresses. Effective management of these challenges requires a strategic approach that considers regional climate vulnerabilities and prioritizes sustainable resource allocation. By implementing adaptive measures, data centers can better align their operations with broader environmental and public health objectives, ensuring that technological advancement does not come at the expense of regional sustainability.

Survey Insights on Sustainability Perception

Insights from a 2023 survey of Australian sustainability professionals reveal a concerning lack of transparency in data center operations, with only 6% believing that data center operators provide comprehensive sustainability data.
This transparency deficit highlights the critical need for improved reporting and accountability in the tech industry, ensuring stakeholders have access to accurate and thorough sustainability metrics. Enhanced transparency can foster greater trust and collaboration between tech companies and environmental advocates, ultimately driving more effective sustainability initiatives. Additional survey data from IT managers in Australia and New Zealand illustrates widespread adoption of AI technologies, with around 72% of respondents adopting or piloting AI applications. Despite their enthusiasm for AI, 68% of IT managers expressed significant concern regarding the increase in energy consumption driven by AI operations. This disparity between technological adoption and sustainability awareness underscores a notable skill gap within the industry, highlighting the need for targeted education and training programs. Addressing this skill gap is essential for equipping IT managers with the knowledge and tools necessary to mitigate AI's environmental impact effectively.

The Need for Improved Education and Transparency

Artificial Intelligence (AI) is transforming various industries, offering unprecedented advancements. From machine learning algorithms that improve healthcare diagnostics to autonomous systems that streamline supply chain operations, AI's impact is extensive and revolutionary. However, amid its widespread adoption and technological progress, a significant concern is emerging: the environmental cost of AI's rapid development. The challenges include skyrocketing energy consumption, increased carbon emissions, and high water usage, necessitating a closer, more nuanced examination. While the technological benefits of AI are widely celebrated, its environmental footprint often goes unnoticed. AI's rise represents a significant leap in computational demands, leading to a sharp increase in the resources required by data centers that fuel these intelligent systems. As AI continues to evolve, the sustainability of its operations becomes a critical issue, emphasizing the need for a balanced approach. Ensuring that technological advancements do not come at an unsustainable environmental cost is imperative. This balance is crucial for the long-term viability of both AI technologies and our planet.
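For a rough sense of scale behind the 7,100 liters per megawatt-hour figure cited above, the sketch below applies it to a hypothetical facility; the 30 MW size and the full-load assumption are illustrative inventions, not figures from the article.

# Annual water use implied by ~7,100 L/MWh for a hypothetical 30 MW data
# center running at full load year-round.
liters_per_mwh = 7_100
facility_mw = 30
annual_mwh = facility_mw * 24 * 365            # ~262,800 MWh per year
annual_liters = annual_mwh * liters_per_mwh    # ~1.9 billion liters

print(f"~{annual_liters / 1e9:.2f} billion liters of water per year")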
<urn:uuid:21cbb1ea-1d9b-4877-8a0a-69c0d80dc6f5>
CC-MAIN-2024-38
https://energycurated.com/environmental-and-regulations/is-ai-advancing-at-the-cost-of-earths-sustainability/
2024-09-13T01:16:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00091.warc.gz
en
0.923356
1,536
3.203125
3
What is change management?
Change management is a collective term used to describe the different approaches that are adopted in order to prepare, help and support individual employees, teams, departments and organizations to make an organizational change. As far as IT is concerned, change management involves the implementation of practices that can help minimize temporary disruption of IT services when any changes are made to critical systems and services.

What are the objectives of change management?
The major objective of change management is to reduce incidents and comply with regulatory standards. Change management practices are designed to ensure the prompt and efficient handling of any changes being made to the IT infrastructure. Regardless of whether you are resolving problems in the code, managing existing services or rolling out new ones, change management helps minimize risk, avoid bottlenecks, provide context, maintain transparency and break down silos.

Why is change management important?
According to a Forbes article, change management is the key to guiding businesses towards the "new normal" in 2021. Change management is important for a host of reasons. It helps organizations:
- Stay abreast of new technological advancements and market evolution
- Engage individual employees with the change plan, thus laying the groundwork for later success
- Overcome resistance to radical change initiatives
- Improve overall performance and productivity and drive innovation
- Reduce waste and consequently reduce costs

How does change management work?
Change management involves the application of a set of tools and a structured process to implement a change in order to achieve the desired outcome. Harvard Business Review research indicates that a persistent set of small, orchestrated changes is the best approach to drive large and lasting change at an organization. However, when it comes to change management, there is no "one size fits all." There is no single change management model or process that one can follow in all situations. That said, we will be discussing some basic types and essential principles that apply to nearly all change management activities.

What are the three levels of change management?
Change management occurs on three different levels, as explained below:

1. Individual change: As the name suggests, individual change management focuses on understanding how individual employees will experience change and what they will need to successfully adapt to the change. Individual change management also involves the knowledge of how an organization can help its employees make the transition. It involves the use of disciplines like neuroscience and psychology to better understand human reactions to change.

2. Organizational change: Organizational change management provides us with the steps and actions to take at the project or initiative level to support the hundreds or thousands of individuals who are impacted by a project. Organizational change is further divided into three categories:
- Developmental change: Developmental change is geared towards improving existing processes, conditions, methods, skills or performance standards. Examples include work process improvements, team development efforts, interpersonal communication training, and increasing quality or sales.
- Transitional change: Transitional change is usually implemented in order to replace an existing process with a new one.
Some common examples of transitional change include mergers and acquisitions and corporate restructures.
- Transformational change: Transformational change refers to a radical change that requires employees to change their behaviors or adopt new ones. It involves major cultural or strategic changes, implementing large-scale operational changes, adoption of radically different technologies and more.

3. Enterprise change: The final level of change management, enterprise change refers to the systematic deployment of change management processes, tools and skills across the organization. Enterprise change management enables organizations to adapt quickly to market changes and continuously improve. The entire organization collectively embraces the strategic initiatives being taken to adopt new technologies and stand out from the competition.

What are the 7 R's of change management?
The 7 R's framework of change management represents the seven most important points to be considered while implementing the change management process. Here are seven questions that you must ask before undertaking the change management process:
- Raised: Who RAISED the change request?
- Reason: What is the REASON behind the change?
- Return: What RETURN is required from the change?
- Risks: What RISKS are involved in the change?
- Responsible: Who is RESPONSIBLE for creating, testing and implementing the change?
- Resource: What RESOURCES are required to deliver the change?
- Relationship: What is the RELATIONSHIP between the suggested change and other changes?

What are the five steps of change management?
There are five steps to the change management process, also known as the change management life cycle:
- Identification: Identify and be aware that a change is needed. Proceed by making a change request.
- Analysis: Evaluate and approve the change request. Perform risk assessment, determine technical feasibility, identify desired outcomes, and analyze benefits and associated costs.
- Planning: Analyze the impact of the change request, build an implementation plan for the change and design how it will work.
- Implementation: Put your plan into action and communicate the change.
- Review: Assess the implementation, report, review and so on.

What is change management methodology?
Once you have decided to implement a change, you must determine the best steps to take in order to execute it successfully. This is where change management methodology comes into the picture. A change management methodology offers a set of specific guidelines that organizations can follow to plan and implement the change as efficiently as possible.

What are the main change management models?
In this section, we'll provide an overview of the key change management methodologies or models:

Kotter's 8-Step Change Model
Developed by Harvard professor and change management expert John Kotter, the Kotter 8-step change model focuses primarily on the people involved in large-scale organization changes and the psychological impact on them.
The eight steps are:
- Create a sense of urgency to motivate people
- Build a strong coalition
- Define your strategic vision for what you want to accomplish
- Get everyone on board and make sure they know their role
- Identify and remove roadblocks
- Create short-term goals
- Sustain the momentum
- Institute the changes

McKinsey 7S Model
Developed by Thomas J. Peters and Robert H. Waterman at the McKinsey consulting firm in the 1970s, this model focuses on evaluating how the different parts of an organization work together. It comprises seven fundamental elements that organizations must be aware of when implementing change:
- Change strategy
- Structure of the organization
- Business systems and processes
- Shared values and culture
- Style or manner in which the change is implemented
- Staff involved
- Skills your employees have

ADKAR Model
Developed by the founder of Prosci, Jeff Hiatt, the ADKAR model provides five key goals to base your change management process on. These include:
- Awareness: Make sure that everyone in your organization understands the need for change
- Desire: Make sure that everyone involved wants the change
- Knowledge: Provide the information on how to accomplish the change
- Ability: Make sure all employees have the skills and training to incorporate the change on a regular basis
- Reinforcement: Make sure the change stays implemented and is reinforced later as well

Lewin's Change Model
One of the most popular and widely accepted change management models, Lewin's Change Management Model enables organizations to better understand structured and organizational change. Developed by Kurt Lewin in the 1950s, this model categorizes the change process into three distinct steps:
- Unfreeze: The preparation stage that aims to overcome employee resistance to the change
- Change: The stage at which the change is implemented
- Refreeze: The stage at which the change has been accepted and employees return to their routine

IT Infrastructure Library
The ITIL, or Information Technology Infrastructure Library, offers a set of best practices that organizations can follow to streamline the process of change management and make it easier for the IT team to prioritize and implement changes efficiently without causing any adverse impact on agreed-upon service levels.

Change management support with Kaseya
Kaseya provides the logic and tools necessary to guide a company into ITIL, the most widely used standard for the efficient operation of an IT organization. Contact us to learn more.
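As a minimal sketch of how the 7 R's listed earlier might be captured in practice, the snippet below models a change request as a simple Python record; it assumes no particular ITSM tool, and every field value is an invented example.

# A change request record covering the 7 R's (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRequest:
    raised_by: str                 # Who RAISED the change request?
    reason: str                    # What is the REASON behind the change?
    expected_return: str           # What RETURN is required from the change?
    risks: List[str]               # What RISKS are involved in the change?
    responsible: str               # Who is RESPONSIBLE for creating, testing and implementing it?
    resources: List[str]           # What RESOURCES are required to deliver the change?
    relationships: List[str] = field(default_factory=list)  # RELATIONSHIP to other changes

request = ChangeRequest(
    raised_by="Service desk",
    reason="Recurring outages on the payment gateway",
    expected_return="Restore 99.9% availability",
    risks=["Downtime during cutover"],
    responsible="Infrastructure team lead",
    resources=["Two engineers", "Staging environment"],
    relationships=["Depends on CHG-1042 network upgrade"],
)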
<urn:uuid:8d864ca8-cb98-4794-9c46-df0d23c9e268>
CC-MAIN-2024-38
https://www.kaseya.com/blog/change-management/
2024-09-13T00:55:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00091.warc.gz
en
0.92619
1,742
2.953125
3
In this video from HiPEAC 2019, Koen Bertels from Delft University of Technology introduces the audience to quantum computing, explaining the potential power of quantum and delving into the quantum computing stack developed at Delft.

Quantum computers will revolutionize the way current computers operate and will open a completely new paradigm of computation. By exploiting quantum phenomena, these computers will be able to solve problems that are currently intractable even for the most powerful supercomputers. Since 1982, when Richard Feynman formulated the idea of a quantum computer, a lot of progress has been made. However, we are not yet at the commercial implementation of such computing systems. Currently, several research groups and also some companies such as IBM, Microsoft, Intel, Alibaba and Google are very active in this domain and are in the race to achieve 'quantum supremacy', the point at which quantum computers outperform classical ones. On top of that, I believe that quantum computers will come to market as accelerator technologies, just as GPUs and FPGAs currently are. In my talk, I will introduce what quantum computers are but also how they can be used as a quantum accelerator. I will discuss why a quantum computer can be more powerful than any classical computer and what the components of its system architecture are. In this context, I will talk about our current research topics on quantum computing, what the main challenges are and what is available to our community. Finally, I will introduce the accelerator idea and give an example for quantum genome sequencing. Quantum computing has always been dominated by the quantum physics community. We are now reaching the phase where a real quantum computing system can be built, which is why it is very important that our community starts being involved in this quantum initiative.

Koen Bertels is head of the Quantum & Computer Engineering Department and head of the Quantum Computer Architectures Lab. He currently focuses on quantum computing and more specifically on the overall system design and architecture aspects. In this respect, he is a principal investigator in QuTech, where he collaborates with experimental physicists on building prototype quantum processors. In the past, his research interests spanned two of the three research pillars of the Lab, namely multi- and many-core architectures and electronic system level design. More specifically, he was responsible for the Delft Workbench project, which aims to provide semi-automatic support for designing heterogeneous multicore platforms where reconfigurable technology offers plenty of possibilities to generate application-specific hardware at runtime.
<urn:uuid:fdb76d29-caaa-42ef-ba29-f8713d1272d0>
CC-MAIN-2024-38
https://insidehpc.com/2019/02/quantum-computing-from-qubits-to-quantum-accelerators/
2024-09-17T21:46:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.53/warc/CC-MAIN-20240917204739-20240917234739-00591.warc.gz
en
0.943442
497
2.78125
3
Each OpenESQL preprocessor option - ODBC and JDBC - requires that you create a connection to the database using either an explicit or an implicit connection.

Important: You must have the appropriate drivers and/or data providers installed and data source names created before you can establish a connection.

Explicit Connections (recommended)
An explicit connection is one established within your program code using the CONNECT embedded SQL statement, and disconnected using the DISCONNECT embedded SQL statement. This enables you to make connections to multiple databases at runtime on an as-needed basis. Use explicit connections when your program accesses multiple data sources or databases. See the CONNECT and DISCONNECT embedded SQL topics for details.

Note: For ODBC connections only, you can specify an explicit disconnect and rollback to execute automatically if the program terminates abnormally. See the INIT compiler directive option topic for more information.

Implicit Connections
An implicit connection is one defined by way of SQL compiler directive options at compile time. This method establishes a single connection to one database. Use this method only when your program accesses only one database. Use a combination of the INIT, DB, and possibly PASS compiler directive options to create the connection:
- INIT - to identify the data source name
- DB - to identify the database
- PASS - to provide a user ID and password if required
OpenESQL automatically disconnects from the data source when the program terminates.
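To illustrate the explicit connect/disconnect pattern in a language-neutral way, the sketch below uses Python's pyodbc against an ODBC data source; OpenESQL programs would instead use the CONNECT and DISCONNECT embedded SQL statements described above, and the DSN, user ID, password and query here are placeholder assumptions.

# Explicitly connect to an ODBC data source, run a query, then disconnect.
import pyodbc

conn = pyodbc.connect("DSN=PayrollDB;UID=appuser;PWD=secret")   # explicit connect
try:
    cursor = conn.cursor()
    cursor.execute("SELECT COUNT(*) FROM employees")
    print(cursor.fetchone()[0])
finally:
    conn.close()                                                 # explicit disconnect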
<urn:uuid:2d56ded7-14eb-43d1-9a34-64684e82dd5c>
CC-MAIN-2024-38
https://www.microfocus.com/documentation/visual-cobol/vc50/EclUNIX/GUID-CFEF4211-2D16-4784-BBAB-56E7EE23FA14.html
2024-09-19T05:13:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00491.warc.gz
en
0.788351
293
2.75
3
What Is Perfect Forward Secrecy?
...and what does it mean for you?
August 9, 2017

Explain It Like I'm Five
Perfect Forward Secrecy (PFS) is a property of encrypted connections, achieved with key exchanges like ephemeral Diffie-Hellman, that enables short-term, completely private key agreements between clients and servers: the cyber security Cone of Silence.

Normally, servers have special encryption keys they use to keep communication sessions private and secure. Whenever Cindy the Client wants to chat with Stan the Server, Cindy comes up with a secret (the "pre-master secret") and encrypts it using Stan's special key. They use this encrypted pre-master secret to continue encrypting the rest of their conversation. The only people who can decrypt what Stan and Cindy talk about are the ones who know Stan's original key, like his trusty Network team. The Network team is responsible for tracking down the source of any bugs that muck up Stan's system, so it's important for them to know what Stan talks about and with whom. Trouble is, Stan's single long-term key protects every pre-master secret from every client, which means that if a hacker were to figure out that one key (via brute force or other attack techniques), they could spy on all of Stan's conversations without anybody knowing.

Sara the Server, on the other hand, uses Perfect Forward Secrecy (PFS) to secure her conversations. When Cindy the Client starts a conversation with Sara, Cindy and Sara huddle to come up with a unique encryption key, their pre-master secret, that is completely private and will only last for that particular conversation. This is where the Cone of Silence comes in: without involving Sara's long-term key, Sara and Cindy decide their encryption key behind closed doors. No one, not even Sara's own Network team, can see or hear how they decide their unique key. This way, if a hacker got their hands on Sara's long-term key, they still wouldn't be able to decrypt any secure conversations. Even if they stole a unique PFS encryption key, only Sara's communications with Cindy would be vulnerable.

Why Is Perfect Forward Secrecy So Hot Right Now?
Two big things happened in the last five years to throw more PFS schemes into the cyber security ring. First, Edward Snowden showed us just how much network traffic has secretly been collected by the United States government, and if one group could run a mass surveillance program, so could others. For the first time in human history, global secret surveillance was not only a possibility but a reality. That said, the IT community had lived with an inherent degree of risk for years. The longer you keep a secret, the more time you give bad guys to figure it out. Luckily, long-term SSL keys were secure enough that this danger seemed manageable. Then the Heartbleed vulnerability proved how simple an OpenSSL attack could really be. After years of putting up with long-term SSL keys and still reeling from the Snowden revelations, the community rumbled louder for a more transient method of key exchange.

Fast forward a few years: Apple has decided all App Store apps must use PFS encryption, and in March 2018, the Internet Engineering Task Force finalized the new TLS 1.3 standard, which mandates perfect forward secrecy for all TLS sessions. Unfortunately, the beauty of Perfect Forward Secrecy is also its biggest problem: hackers can't decrypt your data … but unless they utilize one of two very specific decryption methods, neither can your own team.
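To make the "decide their key behind closed doors" step concrete, here is a minimal sketch of the kind of ephemeral key exchange PFS relies on, using Python's cryptography library with X25519; it illustrates the general technique rather than the exact handshake any particular TLS stack performs, and the HKDF info label is an arbitrary placeholder.

# Each side generates a fresh (ephemeral) key pair for this session only.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# They swap public keys and each derives the same shared secret.
client_shared = client_priv.exchange(server_priv.public_key())
server_shared = server_priv.exchange(client_priv.public_key())
assert client_shared == server_shared

# A per-session key is derived from that secret and used only for this session;
# the ephemeral private keys are then discarded, so later theft of the server's
# long-term key reveals nothing about past traffic.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative-session").derive(client_shared)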
How to Decrypt Perfect Forward Secrecy
The only ways to decrypt PFS sessions are to route traffic through a set of TLS inspection devices, or to install an agent on the server. One of these doors leads to safety and the other leads to death. (Or at least, to a higher potential of performance issues.)

Door #1: TLS inspection devices establish themselves as false endpoints between a client and server, thereby tricking both. That leads to two distinct TLS sessions with the inspection device smack dab in the middle. More TLS sessions means more resource requirements, and the fact that an inspection device must actively finagle its own session with each legitimate endpoint means you risk breaking TLS authentication and causing other problems.

Door #2: Installing an agent on the server means integrating a third-party solution with your server so it can grab those encryption keys from the inside and provide visibility without breaking each individual TLS session in half. There's a wide variety of agents capable of carrying out this process, and how much pressure this method puts on your resources depends entirely on how lightweight that agent ends up being.

ExtraHop Reveal(x) Decrypts PFS in Real Time
We won't lie, we hope the fact that you read this far means you might be on the hunt for a better way to support the TLS 1.3 standard while providing your IT Ops and Security teams the visibility they need to actually keep doing their jobs. ExtraHop Reveal(x) is the only network detection and response product that gives you the ability to decrypt Perfect Forward Secrecy in real time (and with "need-to-know" control over exactly which packets you decrypt and who's allowed to see them), and it's completely out-of-band so it won't impact network performance in the slightest. Reveal(x) uses cloud-scale ML to auto-detect and correlate threats inside your enterprise, then gives you the fastest, most efficient path to root cause.

"I feel blind without [PFS decryption with Reveal(x)]. Every day I get asked about what happened on this web server family or that, and without decryption I can't see that a specific URI is taking forever or throwing 500 errors, all while the SSL and TCP stats are perfect." - Fortune 500 Transportation Company

Check out the live, interactive online demo to see how Reveal(x) helps SecOps, and the rest of your org, support modern encryption and still act with confidence and speed!
<urn:uuid:00b96b20-5b6e-4ac2-ba69-23fcc93343e6>
CC-MAIN-2024-38
https://www.extrahop.com/blog/what-is-perfect-forward-secrecy
2024-09-20T09:59:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00391.warc.gz
en
0.936534
1,309
2.765625
3