A recent report from Common Sense Media, entitled “Zero to Eight: Children’s Media Use in America,” reveals that kids are using mobile devices far more than they were only two years ago. The child advocacy group’s 2013 report shows that small-screen popularity is exploding among young age groups, and it arrives just as doctors are cautioning that too much time in front of digital screens may be unhealthy for kids. The biannual survey of American parents conducted by Common Sense Media found an 89 percent increase in the number of children between the ages of zero and eight who have used mobile devices: 72 percent have done so this year, up from 38 percent in 2011. Even among children younger than two, 38 percent used mobile devices for media in 2013, up from only 10 percent in 2011. Furthermore, the amount of time children spend on these devices has tripled, from 5 minutes per day in 2011 to 15 minutes this year. The report came at nearly the same time that the American Academy of Pediatrics underscored its previous cautions regarding children’s exposure to screens, including mobile devices and televisions. That organization advised parents to limit “total entertainment screen time to less than one to two hours per day” and, for children younger than two, to “discourage screen media exposure.” Common Sense Media founder Jim Steyer said that these devices are, to a growing degree, replacing everything from televisions to storybooks and even babysitters.
Tablets have especially changed the way that devices play a role within families, as there has been a five-fold increase in the number of families who own them and of children who have access to them.
Passphrase vs. Password

Passwords are something you use every day, whether you are checking your email, logging into your online bank account, placing an order for a product or simply accessing your mobile device or computer. Passwords are also a weakness: your emails can be read, personal files accessed, identity stolen, money transferred, and contacts exposed. A strong password is absolutely essential for protecting yourself, and possibly your employer if you use company-owned devices. Maintaining and managing passwords can be a burden, and it can be frustrating to try to remember a complex password. That is where passphrases come in. Most people are aware that a password should contain a capital letter, a number, and a symbol. A passphrase is a much stronger password, while still being easy to create and remember.

What is the difference between a password and a passphrase? Consider this passphrase: Iwant2go2theBeach! Or try using spaces: I want 2 go 2 the Beach! You could also use symbols or numbers to replace letters, such as ‘@’ for ‘a’ and a zero for ‘o’: I w@nt 2 g0 2 the Be@ch!

Why Use a Passphrase? Hacker algorithms quickly run through variations of “Beach2020,” but will have more difficulty cracking longer phrases. Passphrases are much harder to break and can be much easier to remember, especially if they relate to something you like, something that naturally reflects your personality. Be careful with websites that require you to answer personal questions, as it’s unclear whether and how that company is securing the data. When choosing questions and answers to be used if you forget your password or passphrase, be sure to pick ones that can’t easily be figured out through social media platforms. For example, with “What is your dog’s name?” or “What city were you born in?”, your Facebook page may clearly state that your dog’s name is Mr. Hotdog and that you were born in Nashville.
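The letter-for-symbol substitution described above is mechanical enough to show in a few lines. A minimal sketch (the mapping is illustrative, not a recommendation of any particular scheme):

```python
# Toy sketch: apply the letter-for-symbol substitutions described above.
# The mapping below is illustrative only.
SUBSTITUTIONS = {"a": "@", "o": "0"}

def stylize(passphrase: str) -> str:
    """Replace selected lowercase letters with look-alike symbols."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in passphrase)

print(stylize("I want 2 go 2 the Beach!"))  # I w@nt 2 g0 2 the Be@ch!
```

Note that uppercase letters pass through unchanged, which is why the capital B in “Beach” keeps its form in the example above.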
Most people know to avoid public computers, such as those located in a hotel, as they could easily be infected with malware that captures not only your password and passphrase entries but all keystrokes and browsing history. Avoid public computers unless you are just using them for Internet research, and never enter any personal information on them. It is ideal to use a different passphrase for each account and device. Find a password management solution that works for you and use it. It’s too difficult to remember many passphrases, especially if you are updating them every 30 to 60 days, and you wouldn’t want to forget them and have to reset your accounts constantly. If you have any questions about passphrases and best practices for securing your accounts, please let us know. We love talking about different ways to protect your data.
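One way to make a unique passphrase per account manageable is to generate passphrases randomly and store them in a password manager. A minimal sketch; the word list here is a tiny illustrative stand-in for a real diceware-style list of thousands of words:

```python
import secrets

# Illustrative sketch: generate a random word-based passphrase.
# WORDS is a placeholder; a real generator would use a large curated list.
WORDS = ["beach", "wander", "copper", "violet", "summit", "orbit", "maple", "quartz"]

def make_passphrase(n_words: int = 4) -> str:
    # secrets (rather than random) provides cryptographically strong choices
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())
```

With a realistically large word list, each additional word multiplies the number of possible passphrases, which is what makes the result hard to guess despite being easy to remember.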
These days, the tendency is to treat software development as a semi-custom build job. Some parts are prefabricated and come from other sources. The rest is custom-built, in-house or under contract, to provide specific functionality or to capture and enshrine key insights and competitive advantages in executable form. When prefabricated elements are incorporated into software projects, they will most often be open source. They might involve certain widely used frameworks and platforms, such as Bootstrap, AngularJS, Apache and OpenStack. They may incorporate open source libraries for handling common formats such as JSON. One might even see containerization and orchestration tools used, including familiar names such as Docker and Kubernetes. Most modern programmers reach for these kinds of things the same way a mechanic or plumber reaches for a hammer or a wrench. They’re simply familiar tools, well understood and fit for a variety of purposes that programmers understand extremely well. But what programmers, IT management, executive staff, and even shareholders may not understand is that such conveniences also carry risks. And, just like other risks, those posed by open-source frameworks, libraries, and code must be identified, understood and carefully managed.

Surveying the Code: What’ve You Got?

The first step in managing risks of any kind is taking stock of the risks you face and understanding what kinds of threats they can pose. Good tools can automate the discovery of open source components used in an application, producing a comprehensive list and providing information about the risks and exposures they bring along with them. Through careful review of such findings, exposure to certain risks can then be remediated. This might involve upgrading from an out-of-date or obsolete version of open source code that includes well-known vulnerabilities, with exploits to match.
It might involve patching or updating a current version to make sure it incorporates all security fixes available to address known vulnerabilities. It should also include licensing checks, to make sure the organization using the open-source components is doing so validly and legally.

Working from Certain Knowledge

Some software analysis tools include composition analysis as part of their capabilities. This kind of analysis examines a code base and documents all open source components it finds. It should also report on known vulnerabilities in such components from the NIST National Vulnerability Database, MITRE’s Common Vulnerabilities and Exposures (CVE) database and so forth. Most important, a composition analysis should produce an inventory of open source components. This provides important code management insights so that organizations can better manage their code libraries. It lets them check for updates, track versions, and receive automatic obsolescence reports. In short, proper code and risk analysis of open source components not only identifies sources of risk, but also helps organizations take steps to avoid or mitigate such risks.

Putting Open-Source Insights to Work

Proper source code analysis provides results from its scans within minutes of starting work. Such tools can be run on a one-shot, pay-per-use basis, or they can plug into your integrated development environment (IDE) for continuous scanning and code security with annual or periodic subscription fees. This does more than protect you from open source code risks, though it covers them quite nicely; it also applies to your custom codebase. Given that the right insights from code analysis help prevent and mitigate risk, they also support improved integrity and manageability for open source code components.
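The inventory step at the heart of composition analysis can be sketched in a few lines. This is a toy pass, assuming pinned `name==version` dependencies and a hypothetical known-vulnerable list (real tools consult live advisory databases):

```python
# Minimal sketch of a composition-analysis step: inventory pinned dependencies
# and flag any that appear in a known-vulnerable list.
# The entries below are placeholder data, not real advisories.
KNOWN_VULNERABLE = {("examplelib", "1.0.2")}

def inventory(requirements_text: str):
    """Parse 'name==version' lines into (name, version) tuples."""
    components = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        components.append((name.lower(), version))
    return components

def flag_risks(components):
    """Return only the components with known vulnerabilities."""
    return [c for c in components if c in KNOWN_VULNERABLE]

reqs = "examplelib==1.0.2\nsafelib==2.4.0\n"
print(flag_risks(inventory(reqs)))  # [('examplelib', '1.0.2')]
```

Production scanners go much further (transitive dependencies, license metadata, version-range matching), but the shape of the job is the same: enumerate, then cross-reference.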
Because organizations can use such tools to identify and manage threats and vulnerabilities, compliance issues and operational risks, those tools offer the means to limit those risks and avoid unpleasant surprises. Automation also plays a key role in ensuring open-source code security and managing associated risks. If code components are continuously checked and scanned for vulnerabilities and exposures, organizations can take action to mitigate and remediate them as soon as possible. Where remediation advice can be automated, the codebase can take care of itself to a certain extent.

Give Kiuwan Insights a Try

Kiuwan’s Insights source code analysis (SCA) tool provides the foregoing capabilities and more. It requires no configuration in advance. Further, it’s fully customizable, both visually and conceptually, so your developers and security professionals can make it relevant and productive as they put it to work. Request a free trial, or learn more at Kiuwan’s Insights page.
Submerging oneself in cold water is a fantastic way to cool down, as anybody who has ever plunged into a body of water during summer heat will attest. Mechanical and thermal engineers should not overlook this common-sense observation. Given that the power consumed by computers is converted to heat at essentially a 1:1 ratio, IBM invented a method for cooling computer components by immersing them in dielectric fluid back when the computer industry was still in its infancy. Immersion cooling techniques have progressed since those early days. Let’s go over the basics of immersion cooling and the main methods used in practice.

What Is Immersion Cooling?

Air has been the most popular way of cooling IT equipment to date, but the fans, ducts, and HVAC systems associated with it take up a lot of room and energy. Fluid cooling for IT systems, such as cutting-edge gaming rigs, necessitates pipes, pumps, and a large amount of space. Because immersion cooling saves energy and space, interest in the technique has grown as the technology has advanced. But what is immersion cooling? It entails submerging system components, including the motherboard and other computer parts, in a dielectric fluid. The procedure requires fluids that will not harm IT components. Oils and engineered fluids, such as 3M’s Novec or Fluorinert lines, are the two types of dielectric fluids used for immersion cooling today.

Single-Phase Immersion Cooling vs. Two-Phase Immersion Cooling

The first technique is called single-phase immersion cooling. In this method, the components are completely submerged in a dielectric fluid, usually oil or an engineered fluid. The fluid absorbs the heat created by the IT components and is then pumped and circulated around an enclosure, chassis, or tank to remove the heat.
The hot fluid is pumped out of the immersion bath to be cooled by a secondary air-to-liquid or liquid-to-liquid heat exchanger, and the cooled oil or fluid is then pumped back in. The second method makes use of the heat expelled during a phase change. As in single-phase cooling, the IT components are submerged in a dielectric fluid, but this time the fluid is engineered to have a boiling point lower than the temperature of the heat-emitting IT components, including the CPU, GPU, ASIC, power supply, DC converters, and more. When the IT equipment is turned on, heat is evacuated through a liquid-to-gas phase transition. A fascinating aspect of the phase change is that once the boiling point is reached, the dielectric fluid itself does not become any hotter. Heat is rejected when the vapor comes into contact with a vapor-to-liquid heat exchanger built into the tank. There is no need for a separate heat exchanger or pumping system because the heat exchange takes place inside the tank. This removes a possible point of failure and reduces the immersion cooling system’s complexity and expense. The amazing thing about two-phase immersion cooling is that it requires very little energy to operate. The rising gas generated by the phase change condenses on tubes inside the tank’s top, transforms back into tiny liquid droplets that fall back into the bath, and the process starts all over again.

Immersion Cooling Implementation Methods

There are two basic methods for implementing immersion cooling. The first employs an enclosed IT chassis. The customized chassis contains the dielectric fluid, and oftentimes less fluid is needed as a result. Part of the appeal of this method is that it allows self-contained, immersion-cooled chassis to be installed in conventional server racks. A Coolant Distribution Unit (CDU) can likewise be used to manage the coolant across many chassis.
A major challenge of this approach is that the immersive chassis must be replaced with every IT refresh, which averages around 10 complete replacements over the usable life of the IT rack. The second method employs an enclosed IT tank. The tank contains the dielectric fluid and is designed to accommodate IT gear that would otherwise be mounted in 19”, 21” or OCP-style racks, for example. Since the tanks accommodate almost all types of IT gear, there is no need to replace the tank for an IT refresh. This makes the flexibility, cost, and total cost of ownership (TCO) of tanks appealing, whereas the chassis approach enjoys the benefit of dielectric fluid volume savings.

Immersion Cooling and the Sustainable Data Center

Let’s have a short overview of the differences between single-phase and two-phase immersion. In one arrangement, dielectric fluid is sent to a secondary heat exchanger and pumping system, and the coolant distribution unit (CDU) rejects the heat to the building’s principal water heat rejection loop. Some single-phase immersion systems instead have heat exchangers built into an engineered IT chassis to eliminate the need for a CDU. Because single-phase cooling does not enjoy the increased heat rejection capacity of a phase change, the dielectric fluid must be pushed across large IT heat sinks on the server boards in either scenario. If you ever need to switch out IT equipment, mineral oil or synthetic petrochemicals can leave a lot of mess that is hard to clean up, making them a no-go for certain potential users. Furthermore, most of these fluids have a flashpoint, which means they can catch fire, posing a hazard and risk in data center operations. Lastly, the pumps that circulate and cool the oil consume enough energy to diminish the savings gained by switching from air to immersion cooling.
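The pumping-energy point follows from basic calorimetry: the flow a single-phase system must circulate scales with the heat load, via Q = ṁ·cp·ΔT. A rough sketch with illustrative numbers (not the properties of any specific fluid):

```python
# Back-of-envelope sketch: coolant mass flow needed to carry away an IT heat
# load in single-phase immersion, using Q = m_dot * cp * delta_T.
# All values are illustrative assumptions.
q_watts = 50_000   # IT load in watts; power in ~= heat out (the 1:1 ratio noted earlier)
cp = 1_670.0       # approximate specific heat of a typical mineral oil, J/(kg*K)
delta_t = 10.0     # allowed fluid temperature rise through the bath, K

m_dot = q_watts / (cp * delta_t)   # required mass flow, kg/s
print(f"{m_dot:.1f} kg/s of oil to absorb {q_watts / 1000:.0f} kW")  # 3.0 kg/s of oil to absorb 50 kW
```

Circulating a few kilograms of oil per second continuously is exactly the pumping work the text refers to, and it is the load that the phase-change mechanism of two-phase cooling largely avoids.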
That isn’t to argue that single-phase immersion cooling isn’t useful; it saves energy and reduces IT heat load densities compared to air cooling.

Benefits of Single-Phase Immersion Cooling Include:
- Better energy efficiency than air cooling
- About 10x the heat rejection capacity of air cooling
- Mineral oil is less expensive than two-phase engineered dielectric fluids
- Oils generally do not evaporate (however, ‘oil blooms’ are generally experienced within a 1-2 meter radius of most single-phase immersive tanks/enclosures)
- Lower CAPEX than air cooling in some cases
- Less space required versus air cooling
- Better TCO than air cooling in some cases
- Lowers or eliminates the use of water for outside heat rejection
- Quiet operation

Benefits of Two-Phase Immersion Cooling Include:
- Best known efficiency of any form of cooling
- 2x (or greater) heat rejection capacity versus single-phase
- Half the space requirement versus single-phase (no bulky heat sinks or CDUs)
- Lower CAPEX than air cooling (per kW)
- Lower TCO than air cooling (per kW)
- Waste heat can be re-used for hot water, district heating, or energy generation
- Dielectric fluids are clean and make servicing or replacement of IT gear simple
- Faster builds than air-cooled data centers
- Lowers or eliminates the use of water for outside heat rejection
- Silent operation

Liquid Immersion Monitoring

To provide further cooling peace of mind, AKCP can monitor for leaks and provide real-time information on water and coolant temperatures, as well as pressure and power consumption. To maintain consistent cooling without intervention from the data center manager, internal logic adjusts pump speed to maintain optimal performance with the least possible power use. Internal logic can also assess system health and provide early fault detection.
Power Monitoring Sensor

The AKCP Power Monitor Sensor gives vital information and allows you to monitor power remotely, eliminating the need for manual power audits and providing immediate alerts to potential problems. It has been integrated into the base unit web interface with its own “Power Management” menu, allowing up to six three-phase and fourteen single-phase Power Monitor Sensors to be set up on a single securityProbe or SPX+. More sensors can be connected to a single base unit depending on what readings are required.

Rope Water Sensor

With the Rope Water Sensor, you can protect your essential equipment from potentially harmful water damage. Designed and manufactured by AKCP, it forms an integral part of your data center monitoring solution. The Rope Water Sensor covers a large area and can be combined with any of our remote monitoring base units or Wireless Tunnel sensors to give you advance notice of any water leaks or flooding. The sensor will retain its error condition until it is read via SNMP, so if it encounters a critical condition at any time, it will report that condition before returning to a normal state.

Wireless Valve Control

Monitoring pipe pressures and flow and controlling valves in buildings is important for the proper functioning of water distribution systems, preventing overpressure, and prolonging the life of pipes and pumps. With Wireless Tunnel pipe pressure sensors, flow meters, valve controllers and variable frequency drives for pumps, you can do all this and more with our centralized monitoring platform, AKCPro Server.
In the United States, the average email address is associated with no fewer than 130 different accounts on the internet. How many accounts do you use on a daily basis? Chances are there are accounts out there you haven’t seen or thought about in decades. Many people report having more password-protected accounts than they can recall, and while you might not be using all of those accounts currently, they may be giving hackers access to the accounts you do use regularly because of one common habit: password reuse. Millennials, though they are digital natives and have grown up being told the proper password safety procedures, are shockingly the most likely group to reuse passwords. Instead of leading by example as the technologically advanced digital natives they are, Millennials are making things less secure for everyone. More than three quarters of younger Millennials report reusing passwords, compared to 58 per cent of older Millennials, 61 per cent of Gen X-ers, 56 per cent of Baby Boomers, and 62 per cent of the Silent Generation. Overall, 61 per cent of people admit to using the same password across multiple websites, yet 89 per cent of people feel that their password habits are secure. Unfortunately this does not seem to be the case. What does it actually take to have a secure password? It’s a lot more complicated than you might think, and this may be a leading factor in why people reuse passwords in the first place. Secure passwords use the following precautions:
- Never use the same password for different websites
- Use a complex password or passphrase with letters, numbers, and symbols
- Update passwords regularly, especially if you are notified of a breach
- Use multifactor authentication for sensitive accounts
- Use a secure password manager if you have trouble remembering your passwords

Why should you care about your password habits?
Well, as it turns out, it may be a boring problem, but the effects can cascade until your life is completely out of control, says Digital Guardian’s Dennis Fisher: “Attackers know that people use the same password over and over, so if they’re able to get a user’s credentials for one site or service, their next move is to see if the password works on email, Facebook, Twitter, a banking site, or other high-value targets. That can start a chain reaction that leads to the victim’s entire online life being compromised. These are all things that security researchers and professionals have known for a long time. Password reuse is a well-understood problem, but it’s still a problem, albeit a boring one. And the thing about boring problems is that they’re boring. People don’t get super excited to work on those.” There are several ways that this boring problem leads people to unknowingly put their digital lives in jeopardy. When people have difficulty remembering their passwords because of so many different accounts, in addition to reusing passwords they may write them down on paper, store them in plain text on their computer or mobile device, or even store them in a cloud-based dropbox that itself requires yet another password. The only secure way to manage your passwords is to use a secure password manager. If you’re not, you could be putting yourself and even your company in serious jeopardy. Even though the problem has been identified and awareness has been raised, at the end of the day many people just have too much on their plates to effectively manage multiple passwords across multiple accounts that need to be changed frequently. Let’s be honest here: most people aren’t going to remember lkj345$% and weorub$$3 and oewo09!!hf4, let alone strings of random characters for each of the 130 accounts they have.
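The memorability complaint has a quantitative footnote: a handful of random words carries about as much guessing entropy as a short random character string, while being far easier to recall. A quick comparison, assuming a 95-character printable-ASCII pool and a 7,776-word diceware-style list:

```python
import math

# Rough entropy comparison: an 8-character random string vs. a 4-word passphrase.
# Pool sizes are illustrative assumptions.
def entropy_bits(pool_size: int, length: int) -> float:
    """Bits of entropy for `length` independent uniform picks from a pool."""
    return length * math.log2(pool_size)

print(f"8 random chars: {entropy_bits(95, 8):.0f} bits")    # 53 bits
print(f"4 random words: {entropy_bits(7776, 4):.0f} bits")  # 52 bits
```

The two are nearly equivalent to an attacker, but only one of them is something a person might actually remember for 130 accounts.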
Most people will do away with things that add what they consider unnecessary complication to their lives, so passwords are often the first concession they make in the pursuit of a less complicated lifestyle. It’s hard to convince people to complicate their lives with crazy passwords rather than decompress with a little afternoon yoga. Another problem most people face is that they just don’t change their passwords often enough.
- 11 per cent of people never change their passwords
- 31 per cent of people change their passwords once or twice a year
- 17 per cent of people change their passwords three to four times a year
- 22 per cent of people change their passwords five or more times a year
- 18.5 per cent of people only change their passwords when they are notified of an issue

While it is encouraging that 70 per cent of people report changing their passwords at least once a year, it’s also important to remember that that figure is self-reported and that 29 per cent of people report having more password-protected accounts than they can remember. It is more likely that people regularly change the passwords of the accounts they remember and use frequently rather than of every single account they have ever opened, which can still leave them vulnerable if they have reused even one password. Stopping hackers can be challenging for a multitude of reasons, but since user error is the single biggest factor in hacking threats, making security user-friendly for even the least trained person can bridge a huge security gap. Unfortunately, it is easier to get an information security person to work on a new type of encryption or on detecting the latest phishing campaigns than it is to get them to come up with a way to get non-technical users to understand the need for better password hygiene and to practice it. In spite of decades of advances in computer and information security, the biggest problem is still with the fundamentals: the end user.
If you don’t have end users who follow good password hygiene practices, the base of your security pyramid will crumble. Fortunately it doesn’t have to be this way: advances in security technology have produced a multitude of solutions. Learn more about password habits by generation, as well as the threats associated with improper password handling, from this infographic from Digital Guardian. Could your organisation use a refresher course on password hygiene? Information security starts at the bottom. Brian Wallace is the Founder and President of NowSourcing. Image Credit: Rawpixel.com / Shutterstock
Description: Space-Based Quantum Key Distribution (QKD) (Medium)

This explanation of space-based QKD is from Craft Prospect, a NewSpace R&D company concentrating on neural networks and quantum encryption. QKD is a cryptography method which uses the fundamental quantum states of particles such as photons to distribute encryption keys securely. At the core of QKD is the principle that, generally, quantum states cannot be copied exactly and become altered in the process of attempting to copy them. This makes QKD more attractive and advantageous than current classical key services: unlike the latter, which can be compromised by quantum computers, QKD is designed to remain secure against any future attacks, making it highly desirable. In theory, this is all well and good, but in practice, the transmission of photons over large distances across the Earth faces numerous challenges. Optical fibres, as well as the terrestrial atmosphere, lead to loss of photons during transmission. This reduces the long-distance feasibility of QKD and limits it to a few hundred kilometres (roughly 400-500 km at most). That is too low a limit to be useful if we want to establish a worldwide QKD network. But what if we take the quantum source into space, say onboard a satellite in low Earth orbit (LEO), typically around 500 km above the Earth? Photons then travel mostly through empty space before they enter the last few hundred kilometres of atmosphere. The nature of space allows negligible photon loss and decoherence, which prevents loss of transmission. And voilà! Quantum mechanics and space exploration have combined to take us forward into a new era of space-enabled quantum technology to improve our lives.

Craft Prospect is participating in ROKS (Responsive Operations and Key Services): ROKS will be a proof-of-concept mission, targeted for launch by 2021 by Craft Prospect.
It will demonstrate satellite-to-Earth Quantum Key Distribution (QKD) to augment future encryption services. ROKS will also carry a responsive operations payload that will demonstrate neural networks working in orbit for cloud detection and decision making, without always relying on a ground station to send commands to the satellite. ROKS will pave the way for groundbreaking changes in data protection, encryption and onboard automation.
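The key-sifting idea at the heart of QKD protocols such as BB84 can be illustrated with a purely classical toy simulation: sender and receiver keep only the bits where their randomly chosen measurement bases happen to match. No real photons or quantum states are involved here; the basis labels are illustrative:

```python
import secrets

# Toy BB84-style sketch: keep only bits where sender and receiver bases match.
# This is a classical simulation for intuition only.
def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

n = 16
alice_bits = random_bits(n)    # the raw key material Alice encodes
alice_bases = random_bits(n)   # 0 = rectilinear, 1 = diagonal (labels are illustrative)
bob_bases = random_bits(n)     # Bob guesses a basis for each photon

# After a public comparison of bases, mismatched-basis bits are discarded.
sifted_key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
print(f"sifted {len(sifted_key)} key bits from {n} photons")
```

On average about half the bases match, so roughly half the transmitted photons contribute key bits; an eavesdropper measuring in the wrong basis disturbs the states, which is what the real protocol detects.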
CGI and eCGI

The CGI, or Cell Global Identification, is used in the GSM/UMTS standards and is defined as the concatenation of the MCC (Mobile Country Code), MNC (Mobile Network Code), LAC (Location Area Code), and CI (Cell Identity). The Cell Identity shall be unique within a location area. The eCGI, or extended CGI, is used in the LTE/LTE-A standards and is defined as the concatenation of the MCC, MNC and eCI. The MCC and MNC are the same as those included in the CGI, while the eCI is built by concatenating the eNodeB ID and the CI. Both are 15-decimal-digit codes, and for 2G, 3G and 4G networks the first 5 digits are always the MCC (Mobile Country Code) and the MNC (Mobile Network Code). The CGI structure is shown in the following figure. For 2G and 3G networks, the next 5 digits are the LAC (Location Area Code) and the last five the Cell ID within the LAC. Note that the LAC/CI can be represented by 4 hexadecimal bytes (eventually BCD converted), but in some systems they are separated out into two 5-digit decimal numbers. This gives a very different result depending on which method is used, so any user should first know exactly which presentation is being used. For 4G networks, the CGI is named eCGI and, while the first 5 digits are still the MCC and MNC, the remaining digits are split into the eNB-ID (6 digits) and the CI (max 3 digits). The binary size of the eNB-ID is 20 bits, while that of the CI is 8 bits. The point is that some systems or applications work with the whole CGI or eCGI while others work with the separate blocks: MCC-MNC-LAC-CID or MCC-MNC-eNB. In order to split a received 15-digit code into blocks, one needs to know the cell technology to divide the digits correctly.
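The technology-dependent split described above can be sketched as follows. This sketch assumes a 3-digit MCC and a 2-digit MNC (real MNCs may be 2 or 3 digits, which is exactly why the presentation must be known in advance); the function names and sample codes are illustrative:

```python
# Sketch of splitting an identifier by technology, per the layout described above:
# 2G/3G CGI:  MCC (3) + MNC (2) + LAC (5) + CI (5)
# 4G eCGI:    MCC (3) + MNC (2) + eNB-ID (6) + CI (up to 3)
# Assumes a 2-digit MNC; a 3-digit MNC shifts every subsequent field.
def split_cgi(code: str) -> dict:
    return {"mcc": code[:3], "mnc": code[3:5], "lac": code[5:10], "ci": code[10:15]}

def split_ecgi(code: str) -> dict:
    return {"mcc": code[:3], "mnc": code[3:5], "enb": code[5:11], "ci": code[11:]}

print(split_cgi("310260123450067"))  # {'mcc': '310', 'mnc': '26', 'lac': '01234', 'ci': '50067'}
print(split_ecgi("31026012345067"))
```

Feeding a 4G eCGI to the 2G/3G splitter (or vice versa) produces digit groups that look plausible but identify the wrong cell, which is the practical point of the closing paragraph above.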
In 2020, Ryuk ransomware operators shut down Universal Health Services by exploiting the Zerologon vulnerability to gain control of domain controllers. In mid-2021, cybercriminals exploited an old, unpatched memory corruption vulnerability in Microsoft Office that allowed them to remotely execute code on vulnerable devices. This vulnerability was disclosed in 2017 and found to be one of the most exploited by nation-state hackers. These cases illustrate the importance of patching software vulnerabilities immediately, especially those that have already been exploited. In this blog, we’ll discuss patch management policy best practices and explain how they contribute to a better patching environment for large and small organizations alike.

What is a patch management policy?

Patch management involves identifying, sourcing, testing, deploying and installing patches for all systems and applications in an organization. Patches are applied to improve the efficiency and functionality of a system as well as to mitigate security vulnerabilities. Since unpatched vulnerabilities create weak links in a company’s IT infrastructure, cybercriminals target them frequently. Modern IT environments are intricately structured, which makes patching a far more complex and time-consuming task than in the past. It takes about 200 days to patch a regular vulnerability and 256 days to fix a severe one. That’s not all: even a vulnerability being used in active attacks takes 15 days on average to patch, according to data collected by Google’s Project Zero. The challenge is even more daunting for smaller companies, which are always strapped for resources and talent. The result is that hackers manage to discover and exploit vulnerabilities before they can be patched. This is where patch management policies come into play.
The policies define the steps, procedures and best practices to follow, especially when patching vulnerabilities that pose a security risk. The goal is a standardized patching process that lets technicians make informed decisions at any stage, including when correcting mistakes and handling contingencies. Without a patch management policy, businesses may have difficulty identifying critical patches. Moreover, without a process to follow, patches can be installed incorrectly, taking down applications and devices and disrupting the business.

What is the importance of a patch management policy?

Unpatched vulnerabilities are the cause of one in three breaches around the world. An effective patch management policy helps minimize the risk of cyberthreats and of business downtime caused by improper patching practices. The Australian Cyber Security Centre (ACSC) describes patching as one of its eight essential strategies to mitigate cyber incidents and ensure security. Let's look at the benefits of having a patch management policy.

- A patch management policy ensures risks are managed promptly so companies can avoid falling prey to cyberattacks.
- Managing patches can be a colossal task that often hinders work and leads to clashes between departments over patch timing. When resolving a crisis, time is of the essence. An effective patch management policy anticipates scheduling conflicts and gives guidance on how to resolve them so that downtime is kept to a minimum.
- A good patch management policy helps ensure that all patching work is completed on time and that the process is well documented. Patching is one of many compliance requirements, and failing to meet it can lead to audits, fines and even denial of insurance claims in the case of a breach.
- A company that sells technology should provide timely patches for its solutions in order to manage vulnerabilities.
Addressing software bugs quickly helps maintain serviceability and boosts customer satisfaction.
- Patching plays a vital role in enhancing company revenue and reputation by driving product innovation and upgrades.

What should a patch management policy include?

A patch management policy is unique to every company and its systems and processes, but at its heart it must include the following components to be effective.

Asset tracking and inventory

The security of any device, be it a laptop, a server or a network endpoint, can be compromised if it is left unpatched. To keep tabs on endpoints that connect to an organization's network, the IT department should use an automated IT asset discovery tool. The first step in developing a successful patch management policy is to take inventory of your IT assets. This becomes even more important in remote and hybrid environments, where employees connect to the corporate network from a variety of devices and locations. As the line between personal and business devices blurs, corporate networks are exposed to correspondingly graver threats.

Teams, roles and responsibilities

Patching is a multistage process that should flow smoothly, so all stakeholders' roles and responsibilities should be clearly defined. Ideally, each step of the process, from identifying vulnerabilities to applying patches, is handled by a dedicated team. It is also important for management to be actively involved in the patching process and to escalate issues when patches aren't applied on schedule. Even though patching may seem simple, it should not be left to end users; it should be handled by IT experts who follow set guidelines.

Risk classification and prioritization

Besides the routine patches, IT technicians must also identify patches for critical software vulnerabilities on a regular basis.
Since patches must be applied to several applications and systems, technicians should learn to classify and prioritize patches according to their vulnerability risk and their impact on business continuity. Take the example of a company whose servers are vulnerable to cross-site scripting: servers that host business-critical data must be patched before servers that host internal websites and less critical business applications. Classifying and prioritizing assets and patches helps technicians approach patch management systematically and ensures that critical assets remain operational.

Patching process and schedule

The previous sections provide the framework for establishing an enterprise-wide patch management policy; the patching process and schedule describe how to execute it. Patching is a multistep procedure that includes:

- Monitoring for new patches and vulnerabilities: Monitoring applications, software and devices that require patching or are at risk because of software vulnerabilities. Patch management policies should specify when and how often this task should be performed.
- Patch sourcing: Once a patch is released, you need to obtain it from the vendor. There should be a dedicated person or team for this task, since a delay in obtaining patches that fix critical vulnerabilities can spell big security problems for the company.
- Patch testing: The patch should be tested in an environment very similar to the company's production IT infrastructure. There are times when patches will not work in certain IT environments, and a test environment lets you study the impact of a patch before applying it everywhere. It is crucial that IT managers take backups of their systems prior to applying patches so the old system can be rolled back in case of a problem.
- Configuration management: The goal of this step is to document every change that will occur when the patch is applied.
This documentation helps identify devices that don't respond correctly to the patch or show an anomaly.
- Patch rollout, monitoring and auditing: After a patch is applied to the entire IT infrastructure, its results are monitored to ensure that everything works as expected. Audit your patching process to identify any failed or pending patches, and keep an eye out for unexpected performance issues or incompatibilities.
- Reporting: Update all relevant documentation after a patch is applied. There should be a detailed, in-depth report of every patching session and step. This report can be used for compliance audits, insurance claims and even to demonstrate value to clients.

What are the benefits of a patch management policy?

A defined and documented patch management policy lets you improve the process, ensure that it gives the desired results and identify best practices along the way. Check out some of the advantages of implementing one.

Accountability

A clearly defined chain of accountability helps mitigate problems faster if there is a breach due to a software vulnerability or a problem during the patching process. A common theme that emerged in the wake of Equifax's 2017 data breach, which was the result of a security flaw the company should have patched weeks earlier, was a lack of accountability, which also contributed to the company's lax security posture.

Documented processes and expectations

When the patching process is well documented, it is easier for new and long-time employees alike to follow it carefully. The absence of a written process can cause confusion about how to proceed, and too many competing ideas can make matters worse.

Ensures security and compliance

As cyberattacks become more common, government agencies are cracking down on companies to ensure that they comply with all security requirements.
Integrating security and compliance standards into your patch management policy will help you stay compliant with the rulebook and keep you on the good side of everyone from the government to the cyber insurers.

Supports uptime and SLAs

Following the wrong patching process can wreak havoc on your operations, cause system downtime and jeopardize your SLAs with clients. Patch policies detail the steps to follow even when a patching session goes awry. A sound patching policy translates into a more accurate and efficient patching operation, more uptime and happier customers.

Provides a framework to build upon

A documented patch management process reduces ambiguity and makes day-to-day operations easier to follow. It is also an effective way to identify best practices while ensuring that employees are not left in the dark when they assume responsibility for various patching tasks.

Patch management policy best practices

Each company will have its own patch management policies, and the process will change as technology and the business change. However, the following are considered best practices within the industry and should be taken into account when creating a policy.

Update systems regularly

A company's IT systems and assets need to be updated regularly for them to function smoothly; any disruption can severely impact revenue, profitability or customer service. With a sound, up-to-date IT infrastructure, a company is better positioned to capture opportunities and growth while remaining safe from regulatory fines and cyberattacks.

Track common vulnerabilities

Being proactive is the key to keeping your IT environment secure. Documenting your patching process means you will have a record of all vulnerabilities your company encounters. This information can be used to plan security setups, strengthen your IT infrastructure and capture lessons for the future.
Document security configurations

A configuration management record should document all the details about patches, tests and configuration changes. Using these documents, one can determine whether immediate action is necessary to mitigate a vulnerability.

Stay current with third-party vendors

Every company, no matter how large or small, uses a variety of third-party software. As the name implies, third-party patching consists of applying patches to third-party applications that are installed on one or more of your endpoints, such as a server, desktop or laptop. Many organizations are proactive about patching their OS software but aren't as diligent when it comes to patching and updating their third-party software. As a result, third-party applications have emerged as a popular attack vector for a variety of cyberattacks, including malware. According to IBM's Cost of a Data Breach Report 2021, it takes 210 days to identify a breach caused by a vulnerability in third-party software, and 76 days to contain it. It is therefore imperative for businesses to embrace third-party patching to minimize the attack surface available to cybercriminals.

Take a comprehensive approach

Your patch management policy should cover all aspects of your IT infrastructure, not just software and operating systems. Take an inventory of all of your software and hardware, including servers, applications and network devices, as well as operating systems, databases and security systems.

Monitor and assess continuously

Patching is a continuous process, and with each patch you will learn something new. By documenting each step, you will be better able to identify trends, challenges and opportunities that can further refine your policy. The result is streamlined business operations and enhanced security.

Automate when possible

The old-fashioned method of manual patching gives you a slim chance of identifying and installing all the patches you need.
It is simpler and more efficient to automate every step of the patch process. The asset inventory process should be easy to repeat regularly, so automating it helps ensure that every new device and piece of software is quickly discovered and patched. The automation tool should gather all required patches and install them based on the specified policies and priorities. To avoid software conflicts, you may want to test each patch before deploying it, and this too should be automated through acceptance testing and the ability to roll back.

Build a strong patch management policy with Kaseya

You can easily address the difficulties associated with patch management by automating the entire process using Kaseya VSA. The tool gives you the ability to review and override patches and see patch history. What's more, this scalable, secure and highly customizable policy-driven approach is location-independent and bandwidth-friendly. With VSA, you can also automate the deployment and installation of software and patches for both on- and off-network devices.

Patching your software and devices is, without question, necessary. We've put together a checklist that will help you optimize your patch management policy and build a robust security stance for your IT environment. Ready to automate your patching? Request a VSA demo today!
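As a toy illustration of the risk classification and prioritization step discussed earlier, the sketch below orders pending patches by vulnerability severity weighted by asset criticality. The field names and the scoring weights are illustrative assumptions, not part of any real product schema:

```python
# Hypothetical sketch: order pending patches by risk.
# "cvss" and "asset_criticality" are assumed fields; the multiplicative
# weighting is an illustrative choice, not an industry standard.

def prioritize(patches):
    """Return patches sorted so the riskiest are deployed first.

    Risk = CVSS score (0-10) weighted by asset criticality (1-3), so a
    medium-severity CVE on a business-critical server can outrank a
    critical CVE on a low-value internal host.
    """
    return sorted(patches,
                  key=lambda p: p["cvss"] * p["asset_criticality"],
                  reverse=True)

pending = [
    {"id": "KB-1", "cvss": 9.8, "asset_criticality": 1},  # internal website
    {"id": "KB-2", "cvss": 6.5, "asset_criticality": 3},  # customer DB server
    {"id": "KB-3", "cvss": 4.0, "asset_criticality": 2},
]

for p in prioritize(pending):
    print(p["id"], round(p["cvss"] * p["asset_criticality"], 1))
```

Note how KB-2 (a moderate CVE on a critical server) jumps ahead of KB-1 (a near-maximum CVE on a low-value host), which is exactly the judgment the cross-site scripting example above describes.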
HTTPS has been around for a while, but is it the right time to make the switch from HTTP?

Considering that an online presence is a "must" for all serious businesses, the performance of their online assets has become increasingly important. With that in mind, one of the main concerns about web performance is how to make a user's online experience pleasant and safe at the same time. Merely being present online is no longer enough; a business web presence needs to add value for its customers. Besides quality content and user experience, online security is a key factor, along with delivery speed. A great and safe online experience requires trusted third parties and good encryption, which is the main reason HyperText Transfer Protocol Secure (HTTPS) was introduced. For quite some time HTTPS was considered the "slow but safe" way of delivering online services. Times have changed, and to fully understand the new advantages of HTTPS it's important to know the differences between HTTP and HTTPS.

HTTP: The Protocol

At the start of the Internet era, network administrators wanted a simple way to share the information they put online. They agreed on a procedure for exchanging information called HyperText Transfer Protocol (HTTP). HTTP is an "application layer protocol": it focuses on presenting information to the user and doesn't really care how the data gets from sender to receiver. It is also "stateless", meaning it doesn't attempt to remember anything about previous web sessions. The main benefit of being stateless is that there is less data to send, which results in increased speed. HTTP is most commonly used to access HTML pages, but other resources can be served over it too. Websites that don't handle confidential information are often set up this way.

HTTPS: The Safe Protocol

If you visit an online merchant you'll notice the address bar says HTTPS instead of HTTP.
This means the website is using HyperText Transfer Protocol Secure (HTTPS) instead of plain HTTP. "Secure HTTP" was developed to allow authorization and secured transactions. Exchanging confidential information requires safe procedures to prevent unauthorized access, and this is where HTTPS comes in. It works in conjunction with another protocol, Secure Sockets Layer (SSL), to transport data safely. The two computers agree on a "code" between them and then encrypt their messages using that "code" so nobody in between can read them. They exchange information over a Secure Sockets Layer (SSL), nowadays referred to as Transport Layer Security (TLS), which keeps it safe from intruders.

In many ways HTTPS is identical to HTTP, as it follows the same basic protocols. However, when a client (e.g., a web browser) establishes a connection to a server on the standard port, HTTPS offers an extra layer of security because it uses SSL. In more detail, data sent using HTTPS and secured via TLS enjoys three key layers of protection:

- Encryption of data to keep it secure
- Data integrity, as data cannot be modified or corrupted during transfer without being detected
- Authentication, which makes sure users communicate with the intended website

Neither HTTP nor HTTPS really cares how the data gets delivered, while SSL doesn't care what the data looks like. That is why HTTPS combines the best of both: it handles what the user visually accesses while adding a layer of security when moving data from sender to receiver.

Is HTTPS Faster Than Ever?

A recent tweet and a follow-up blog post claimed a massive HTTPS speed advantage over classic HTTP. They used the httpvshttps.com site to compare the two protocols and pointed out that HTTPS was actually 80% faster than its "non-safe" counterpart (we got 90% when we ran the test on the site).
Although the results are staggering, further reading shows that it's mostly HTTP/2, the updated version of HTTP over which HTTPS operates, that allows such performance (the site compared the old HTTP protocol to HTTPS running over HTTP/2). It's true that HTTPS will only be faster when using HTTP/2, but on the other hand you cannot use HTTP/2 without using HTTPS.

In the past there may have been some added latency because HTTPS requires more steps to be executed. However, with best practices in place like early termination, cache-control and HTTP/2, factors such as the latency of the TLS handshake and additional round trips are becoming things of the past. Newer protocols, better hardware and faster connections make up for the delays and enable high-speed website performance over HTTPS, making the "slow but safe" description completely obsolete. To break it down: in the past, HTTPS' security component was a traffic "bottleneck" that caused delays; today the TLS layer can keep up with speed requirements and lets HTTPS deliver high-end performance. It's safe to say HTTPS is faster than ever, but mainly because of the underlying HTTP/2 and the notable TLS performance improvements of recent years.

Improving HTTPS Performance

There are a few things you can do to cancel out delays and improve HTTPS performance, chiefly implementing early termination, caching and HTTP/2. Here's a list of possible HTTPS improvements:

- HTTP Strict Transport Security
- Early Termination
- OCSP Stapling
- HPACK Compression

This article by KeyCDN can also give you deeper insights into HTTPS performance improvements. The earlier-mentioned blog post described a way to reach HTTP/2 speed even if the origin website runs an older protocol (e.g., HTTP/1.1) by "wrapping" the site with a content delivery network.
Then, even if requests are directed to a domain whose origin can't talk HTTP/2, the response can still be served as "h2" (the identifier for HTTP/2 over TLS), because all requests get routed through a CDN that can talk "h2". Of course, if the CDN needs to pull content from an origin that doesn't talk "h2" there might still be a minor bottleneck on that connection, but many requests won't hit the origin anyway, as most traffic gets served directly from cache. A CDN's ability to deliver over "h2" makes a significant impact on speed even when the origin is deployed with older protocols.

HTTPS is here, and it's here to stay. The SSL performance impact is not as significant as it used to be. The web is moving in a new direction, and TLS handshakes and certificates are no longer slowing down web performance. There are lots of methods to further improve your HTTPS performance and reduce overhead. Until recently, price was also a big factor when considering migration to HTTPS, but the costs associated with purchasing SSL certificates have dropped significantly. For instance, KeyCDN's integration with Let's Encrypt allows customers to deploy HTTPS with a custom zone alias for free (as they pointed out in a recent blog post about HTTPS performance). It's safe to say that this trend is taking the price factor out of the equation for most customers. Transitioning to HTTPS is also recommended as an SEO practice, since Google gives HTTPS sites a slight ranking boost.

To sum up: if you want to go fast, serve content over HTTPS using HTTP/2. As always, we recommend adequate testing for your needs, since setups and environments vary. For any further questions on the topic, feel free to contact our experts at GlobalDots, who can help you boost the performance of your web assets.
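As a small, concrete look at one item from the improvements list above: HTTP Strict Transport Security is delivered as a response header whose format is defined by RFC 6797. The helper below is a hypothetical sketch of ours (not part of any KeyCDN or GlobalDots tooling) that parses such a header to see how long browsers are told to insist on HTTPS:

```python
# Hypothetical sketch: parse a Strict-Transport-Security response header.
# The header syntax is standard (RFC 6797); this parser is illustrative.

def parse_hsts(header_value):
    """Return (max_age_seconds, include_subdomains) from an HSTS header."""
    max_age = 0
    include_sub = False
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            include_sub = True
    return max_age, include_sub

# A typical header seen on HTTPS-only sites: one year, covering subdomains.
print(parse_hsts("max-age=31536000; includeSubDomains"))  # (31536000, True)
```

A long `max-age` with `includeSubDomains` means returning visitors skip the insecure HTTP round trip entirely, which is one of the small wins that add up to the HTTPS speed story told above.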
Every insidious and pervasive menace plaguing society has to begin somewhere. As more and more devices are connected to networks and information is shunted to the cloud, industrial cyberattacks continue to rise. Sophos' recent State of Ransomware 2022 report showed that incidents of ransomware were up 78% from 2020 to 2021. But ransomware is far from the only cyber threat to industrial systems. A recent article from endpoint protection company CrowdStrike stated that the fourth most common type of cyberattack is the denial-of-service (DoS) or distributed denial-of-service (DDoS) attack.

DDoS attacks have impacted everyone from private companies, such as tech giant Google (2020) and Amazon Web Services (2020), to critical infrastructure and sovereign nations. A massive DDoS strike hit Israel earlier this year, taking down several key government websites. But while these types of attacks have made headlines in recent years, they've actually been around for a surprisingly long time. Some date the initial DDoS attack as far back as 1974, but the first major strike came in August 1999, courtesy of a tool called Trinoo. Both of these early attacks went after prominent Big Ten universities. Since then, they have grown to become some of the most persistent and damaging intrusions in the cybersecurity universe.

What are DDoS attacks?

According to Dr. James Stanger, chief technology evangelist at CompTIA, a DDoS attack "is a malicious attempt to sabotage a network by overwhelming its ability to process legitimate traffic and requests. In turn, this activity denies the victim of a service, while causing downtime and costly setbacks. A DDoS attack is a network-based attack; it exploits network-based internet services like routers, domain name service (DNS) and network time protocol (NTP), and is aimed at disrupting network devices that connect your organization to the internet.
Such devices include routers (traditional WAN, as well as ISP edge routers), load balancers and firewalls."

The difference between a DoS attack and a DDoS attack comes down to that first "D," for distributed. A DoS attack targets a particular resource, such as an industrial control system (ICS), whereas a DDoS attack goes after the devices that provide access and connectivity. DoS attacks tend to come from a single source, while DDoS attacks come from a large network of devices, or botnet. According to an article from the Info Security Group, DDoS attacks surged in 2020, in large part due to the COVID-19 pandemic, which accelerated digital transformation and saw more people — including those in manufacturing — moving to work from home. DDoS attacks are not particularly difficult to execute and can cause massive disruptions. Though the most high-profile attacks have targeted private industry, ICSs are also vulnerable, especially with the emergence of the Industrial Internet of Things (IIoT).

The earliest DDoS attack

What is widely considered the first-ever DDoS attack came from an unlikely source at an unlikely time. It happened in 1974, well before the modern computer era, and was perpetrated by a 13-year-old Illinois resident named David Dennis. The young teenager was a student at University High School, located across the street from the Computer-Based Education Research Laboratory (CERL) at the University of Illinois Urbana-Champaign. According to an article in Radware, "David recently learned about a new command that could be run on CERL's PLATO terminals. PLATO was one of the first computerized shared learning systems, and a forerunner of many future multi-user computing systems. Called 'external' or 'ext,' the command was meant to allow for interaction with external devices connected to the terminals.
However, when run on a terminal with no external devices attached it would cause the terminal to lock up — requiring a shutdown and power-on to regain functionality." Dennis wondered if he could cause a room full of users to be locked out simultaneously, so he created a program that would send the "ext" command to several PLATO terminals at once. When he tested his program at CERL, it forced 31 users to power off at the same time.

The next evolution of DDoS

DDoS attacks didn't go mainstream or prove the extent of the damage they could unleash for another 25 years, until August 1999. The first well-known DDoS attack used a tool called Trinoo, also known as trin00, to wreak havoc at the University of Minnesota. The attack lasted two days, involved at least 227 systems and managed to disable the university's computer network. The same Radware article laid out how Trinoo operated: "Trinoo consisted of a network of compromised machines called 'Masters' and 'Daemons,' allowing an attacker to send a DoS instruction to a few Masters, which then forwarded instructions to the hundreds of Daemons to commence a UDP flood against the target IP address. The tool made no effort to hide the Daemons' IP addresses, so the owners of the attacking systems were contacted and had no idea that their systems had been compromised and were being used in a DDoS attack."

The problem grows

As the new millennium dawned, DDoS attacks became much more pervasive. By 2000, the technique had been used to infiltrate businesses, financial institutions and government agencies, shining a very public light on DDoS and DoS attacks. As previously mentioned, DDoS attacks are on the rise again, thanks to the explosion of connected devices and the expansion of the Internet of Things. Unfortunately for manufacturers and businesses, many IoT and IIoT devices are older and were not designed with security in mind, making them extremely susceptible to being drafted into a botnet.
These days, the attack surface is larger than ever, leaving a ripe environment for DDoS and DoS threat actors.
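The single-source versus many-source distinction drawn above can be made concrete with a toy log-analysis sketch. The log shape and the threshold are purely illustrative assumptions, not a real detection method: a flood arriving from one address looks DoS-like, while the same volume spread across a botnet's worth of addresses looks DDoS-like.

```python
# Hypothetical sketch: classify a traffic flood by how many distinct
# source addresses it comes from. The threshold is an illustrative choice.

def classify_flood(source_ips, distinct_threshold=10):
    """Label a burst of requests as 'DoS-like' (few sources) or
    'DDoS-like' (many sources, as with a botnet)."""
    distinct = len(set(source_ips))
    return "DDoS-like" if distinct >= distinct_threshold else "DoS-like"

single_source = ["203.0.113.7"] * 1000            # one attacker, many requests
botnet = [f"198.51.100.{i}" for i in range(200)]  # many compromised hosts

print(classify_flood(single_source))  # DoS-like
print(classify_flood(botnet))         # DDoS-like
```

Real mitigation is far harder than this, precisely because a distributed attack offers no single address to block.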
Back to the basics with Cyber Essentials

Four years is a long time in cyber security; a lot can change in that time. But surprisingly, a lot also stays the same. Back in 2016, the National Cyber Security Centre released a white paper, Common Cyber Attacks: Reducing the Impact. The paper described what a common cyber-attack looks like and advised organisations to implement basic security controls to protect themselves. The NCSC's report directed businesses to schemes such as Cyber Essentials, which had been introduced to allow businesses of any size to implement a standard set of technical controls providing protection against commodity attacks.

Another report released in the same year by the UK Government's Department for Digital, Culture, Media & Sport (DCMS), the Cyber Security Breaches Survey 2016, listed the top three breaches experienced by businesses as:

- Viruses, spyware and malware.
- Others impersonating the organisation in emails or online.
- Denial of service attacks.

Fast forward to the present day and attacks still follow the same pattern. The 2020 edition of the Cyber Security Breaches Survey lists the top three attacks as:

- Fraudulent emails or being directed to fraudulent websites.
- Others impersonating organisations in emails or online.
- Viruses, spyware and malware.

So, over a four-year period, two of the top three attacks remained the same. Why might this be? Attackers will only continue to use methods that are effective and get the required results. The report also provides another concerning statistic: in 2016, 24% of businesses reported a cyber breach in the previous 12-month period; by 2020, that number had almost doubled to 46%.

What has changed?

Every business is susceptible to a cyber-attack. It is no longer a question of if, but when. Attacks are not made solely against large corporations; businesses of any size can be a target.
All businesses have something of value to an attacker, whether that be confidential company data, financial information or intellectual property – information assets of any kind are of use to someone. If your business fails to protect its information assets by implementing even the simplest of controls, it is only a matter of time before you too fall victim to a cyber-attack.

So, what can we do?

The odds don't have to be stacked in the attacker's favour. There are simple and cost-effective measures that businesses of any size can implement. The advice provided by the NCSC in 2016 still stands: by implementing the controls in the Cyber Essentials standard, businesses can mitigate a significant percentage of commodity attacks.

What does it involve?

Cyber Essentials covers five areas of controls that should be implemented:

- Firewalls – ensure you have adequate protection at your network perimeter. Make sure your firewall policies are effective and only allow the network traffic your business requires.
- Malware protection – ensure all your devices have malware protection installed and kept up to date on a regular basis.
- Patch management – patching your software to the latest version will prevent cyber attackers from exploiting known vulnerabilities to gain access to your information assets.
- Secure configuration – ensure your devices have any unused functionality removed, including unused accounts and software.
- Access control – ensure that all the user accounts on your network operate on the principle of "least privilege," meaning your users have only the permissions they need to carry out their assigned duties.

These controls apply to the areas of your business that you determine to be in scope of the assessment.
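To make the least-privilege control above concrete, here is a toy access-review sketch. The role-to-permission mappings and permission names are hypothetical, and the script is our illustration rather than part of the Cyber Essentials scheme itself; it flags any permission an account holds beyond what its role requires:

```python
# Hypothetical sketch: flag permissions that exceed a user's role.
# Roles and permission names are illustrative assumptions.

ROLE_PERMISSIONS = {
    "accounts_clerk": {"read_invoices", "create_invoices"},
    "developer": {"read_code", "write_code"},
}

def excess_privileges(username, role, granted):
    """Return the set of granted permissions the role does not justify."""
    return set(granted) - ROLE_PERMISSIONS.get(role, set())

# An accounts clerk who has somehow accumulated domain admin rights:
print(excess_privileges("jsmith", "accounts_clerk",
                        {"read_invoices", "create_invoices", "domain_admin"}))
# → {'domain_admin'}
```

Even a review this simple, run regularly, catches the slow accumulation of rights that turns an ordinary compromised account into a serious breach.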
One consideration, especially with the current increase in remote working in the UK, is that home workers extend the boundaries of your network. It is therefore imperative that the controls you implement as part of Cyber Essentials cover the equipment your users have at home.

Surely I need to do more to protect my business?

Of course, there are many further controls you may wish to implement to protect your business. However, it is of utmost importance that you get the basics right, and starting with Cyber Essentials is certainly a step in the right direction. Cyber Essentials can be applied to any business size, from micro-businesses up to global corporations.

And this is an example of what can happen when the basics are missed: Cathay Pacific was fined £500,000 by the UK's Information Commissioner's Office for security lapses that exposed around 9.4 million customers' details. The ICO carried out a full investigation into the breach and discovered a catalogue of errors, with serious deficiencies in the expected processes. The ICO's statement on the breach included: "At its most basic, the airline failed to satisfy four out of the five basic Cyber Essentials guidance".

How Nexor can help

Nexor assists our clients by providing a full range of cyber security services, including the implementation of standards ranging from Cyber Essentials to ISO 27001. Our approach starts and ends with people. Our People, Process and Technology methodology ensures business outcomes support the organisational objectives in a way that builds a trust culture and provides a technological environment that helps people succeed. Our consultants will work with you to ascertain the risks to your business and determine the best course of action, offering a tailored, bespoke service to protect your key information assets.

Author Bio – Sarah Knowles

Sarah Knowles is Nexor's Senior Security Consultant.
She is an NCSC-certified Security and Information Risk Adviser, an ISC2 Certified Information Systems Security Professional (CISSP), an ISO 27001 Lead Auditor, and a Cyber Essentials and IASME Governance Assessor. She has a technical background primarily in Microsoft technologies and has provided security governance, compliance and risk management services to both HMG and private sector clients.
How Artificial Intelligence is Impacting Aerospace Industry
Fremont, CA: Artificial intelligence plays a significant role in reducing costs, shortening the design process, and reproducing, prototyping, enhancing, supporting, manufacturing, and updating items, and it is poised to drive numerous improvements in the aerospace industry over the next 15 years. AI advancements could aid aerospace companies in improving their manufacturing processes. Nevertheless, there is limited adoption of AI methods in the aerospace industry; the primary reasons for this are a lack of access to high-quality data, a greater reliance on simple models when compared to complex models, and a lack of skilled workforce and partners to successfully deploy it. However, with the right partner, AI has the potential to be a disruptive innovation that affects the productivity, efficiency, speed, and development of aerospace organizations. Let's take a look at some of the areas where artificial intelligence is proving to be disruptive in the aerospace industry.
Efficient Supply Chain Management
By incorporating AI into the supply chain, activities in the aeronautics industry are becoming increasingly streamlined. Improved supply chain proficiency makes maintaining equipment and its routine repairs much easier than doing it manually, and it also saves money and reduces downtime because it is known in advance when to perform the fixing tasks. The use of automated data collection makes it simple to improve supply chain management proficiency. Artificial intelligence can also be used to improve pilot training.
Artificial intelligence simulators in conjunction with virtual reality frameworks can be used to provide pilots with a more realistic simulation experience. Artificial intelligence-enabled simulators can also be used to collect and analyze training data, such as biometrics, to create customized training patterns based on a student's performance. The next significant application of AI will be to assist pilots during flights. Artificial intelligence-enabled cockpit arrangements can gradually improve a flight path by evaluating and alerting about the fuel level, framework status, weather conditions, and other critical parameters. Later, aircraft could be outfitted with smart cameras powered by computer vision algorithms, extending pilots' visual fields and thus supporting their safety performance. In the aerospace industry, lightweight and strong parts are always best for an aircraft. Manufacturers can use generative design in conjunction with AI algorithms to create such parts. Generative design is an iterative process in which engineers or architects use design objectives as input alongside constraints and parameters such as materials, available assets, and assigned budget to create an ideal product design. When combined with AI, generative design programming can enable product designers to evaluate multiple design alternatives in a short period of time. Designers can use this innovation to create new lightweight and cost-effective products. Artificial intelligence-enabled generative design combined with 3D printing can be used to deliver various aircraft parts such as turbines and wings. As a result, the use of AI in aerospace organizations can help to streamline design and manufacturing processes.
As the weather gets colder, that means more time stuck inside, and more screen time for the kids. Today's parents face a difficult challenge: keeping their kids safe while they're online, and teaching them how to behave appropriately on the internet. Follow these tips to help your kids surf safely.
Internet Safety Tips:
Make sure your child knows the rules when it comes to the internet – how much time they can spend on it, what sites are allowed, and what they should and should not share. Consider signing a contract within your family that regulates internet use. The Family Online Safety Institute offers a sample on its website, and you can customize it as you see fit. Make sure you supervise children under age 10 while they are using the internet. Make sure the computer, tablet, or phone can only be used in a common area where it's easy to check in on them. Pay attention to everything they're doing online. As they get older, you don't need to be staring at the screen with them constantly, but always check in.
Be Open to Communication
On the internet, it can be easy even for seasoned users to accidentally end up on the wrong site. Make sure your kids know they should tell you as soon as they find something that makes them uncomfortable. They should know it is not their fault, and that you won't be angry with them if they end up on those sites. When they feel comfortable talking to you about what they might find, or what might be happening to them online, they'll turn to you when they need help.
Add Parental Controls
Programs like Net Nanny provide fairly comprehensive parental controls across Android, iOS, Mac and Windows devices. Set up filters, block inappropriate websites, and set timers that keep your kids safe. You can even view usage reports to see where your kids are visiting. As your kids get older, it's important to tell them why you're blocking certain sites.
Warn Children of Predators
Make sure your children understand that not everyone tells the truth online. Just because someone says they're a child the same age as them doesn't mean that's the case. It is essential that they tell you about any new people they have met or who have contacted them online. Hopefully you've set the standard and opened communication in a way that lets your child know you're there to protect them online.
Internet Etiquette Tips:
Setting Up Accounts
Allow your child to set up their own email or other online account, and coach them through aspects like creating a strong password and keeping their identity safe. Instead of adding their own picture, help them find a favorite cartoon or avatar to use. Their username can be fun, but should not reveal their identity, such as their school, age or full name.
What Not to Share
Your child likely will not understand the consequences of sharing their personal information online. But they should know:
- Never to give their name, phone number, email address, password, postal address, school, or picture without your permission
- Not to open e-mail or messages from people they don't know
- Not to respond to hurtful or disturbing messages
- Not to get together with anyone they "meet" online
Keeping Behavior Appropriate
Remind your kids that whatever they post online doesn't necessarily ever go away. Make sure they know that comments or messages should be appropriate, and they should double-check what they're sending. Anything can be screen-shot and sent to someone else. As kids get older and start using social media sites, they need to know the importance of being kind to others online.
Set A Good Example
If you spend all your time on your phone or computer, your kids will want to do the same. Teach them proper etiquette by leaving your phone elsewhere when you sit down to eat dinner, or putting it down 30 minutes before bed.
These devices are addicting, and you can help them (and yourself) by limiting time and focusing on the loved ones around you.
Home security systems put people at risk, according to a recent F.B.I. announcement. The U.S. Federal Bureau of Investigation mentioned that attackers hack these systems and set them up to place fake emergency calls. The method is called "swatting."
What is swatting and how does it work with smart devices?
According to the F.B.I., devices that have cameras and voice capabilities need very strong, unique passwords and multi-factor authentication. So, the F.B.I. advised users to change their passwords in order to protect themselves from "swatting" attacks. According to security experts, swatting is made possible by using stolen credentials of the victims. Swatting describes a hoax call made to emergency services. It basically tries to make S.W.A.T. teams react as if there were a threat to human life. The method is usually a form of harassment or a prank, although it is a serious crime.
Home smart devices are to blame
According to the investigation, offenders use victims' home security smart devices, including the video and audio ones, in order to execute the attacks. Before the attack, hackers steal email passwords. Thus, they are able to log into the smart security devices and hijack their features. Then, they call emergency services and report crimes in the victims' house. Some of the threat actors also choose to live stream such events and share them on online community platforms.
How to avoid swatting?
The F.B.I. experts offered some protective measures users should take in order to avoid such attacks:
- Users should make sure they have a complex, strong password set for the online accounts to which their home security systems are linked.
- They should never use the same password for different online accounts. Moreover, users should change their passwords regularly.
- Two-factor authentication completes the system, providing a better barrier against the villains.
Still, the F.B.I.
is investigating further in order to discover any other breaches and also encourages users to immediately make a police report if they feel they have become victims.
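The multi-factor authentication recommended above is commonly implemented as time-based one-time passwords (TOTP, RFC 6238), which is why a stolen password alone is not enough to log in. The following Python sketch is illustrative only; the secret and parameters are generic, not taken from any particular home-security product:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP: derive a short-lived one-time code from a
    shared base32 secret, the kind a 2FA-protected account verifies."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((t if t is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a secret that never leaves the device, credentials stolen in a data breach cannot be replayed on their own.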
by Ethan Shackelford, Associate Security Consultant at IOActive
Fault injection, also known as glitching, is a technique where some form of interference or invalid state is intentionally introduced into a system in order to alter the behavior of that system. In the context of embedded hardware and electronics generally, there are a number of forms this interference might take. Common methods for fault injection in electronics include:
- Clock glitching (errant clock edges are forced onto the input clock line of an IC)
- Voltage fault injection (applying voltages higher or lower than the expected voltage to IC power lines)
- Electromagnetic glitching (introducing EM interference)
This article will focus on voltage fault injection: specifically, the introduction of momentary voltages outside of normal operating conditions on the target device's power rails. These momentary pulses or drops in input voltage (glitches) can affect device operation, and are directed with the intention of achieving a particular effect. Commonly desired effects include "corrupting" instructions or memory in the processor and skipping instructions. Previous research has shown that these effects can be predictably achieved, and has provided some explanation as to the EM effects (caused by the glitch) which might be responsible for the various behaviors. However, a gap in published research exists in correlating glitches (and associated EM effects) with concrete changes in state at the processor level (i.e. what exactly occurs in the processor at the moment of a glitch that causes an instruction to be corrupted or skipped, an incorrect branch to be taken, etc.). This article seeks to quantify and qualify the state of a processor before, during, and after an injected fault, and describe discrete changes in markers such as registers (including general registers as well as control registers such as $pc and $lr), memory, and others.
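Before getting into the hardware, it helps to pin down what a voltage glitch is as data: a pulse described by its delay from some trigger event and its width, during which the core is fed a voltage below (or above) nominal. A toy model of that waveform follows; the 3.3V/0.4V levels are illustrative assumptions, not values from this article:

```python
from dataclasses import dataclass

@dataclass
class GlitchSpec:
    delay_ns: int   # time from trigger edge to the start of the glitch
    width_ns: int   # how long the glitch voltage is applied

def mux_level(t_ns, spec, vdd=3.3, vglitch=0.4):
    """Voltage fed to the core at time t_ns after the trigger: nominal
    Vdd everywhere except inside the [delay, delay+width) glitch window."""
    if spec.delay_ns <= t_ns < spec.delay_ns + spec.width_ns:
        return vglitch
    return vdd
```

A parameter search over faults then amounts to sweeping `delay_ns` and `width_ns` and observing the target's behavior for each pair.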
Past Research and Thanks
Special thanks to the folks at Toothless Consulting, whose excellent series of blog posts were my introduction to fault injection, and the inspiration for this project. Additional thanks to Chris Gerlinsky, whose research into embedded device security and in particular his talk on breaking CRP on the LPC family of chips was an invaluable resource during this project.
The target device chosen for testing was the NXP LPC1343, an ARM Cortex-M3 microcontroller. In order to control the input target voltage and coordinate glitches, the Digilent Arty A7 development board was used, built around the Xilinx Artix-7 FPGA. Custom gateware was developed for the Arty board in order to facilitate control and triggering of glitches based on a variety of factors. For the purposes of this article, the two main triggers used are a GPIO line which goes high/low synchronized to certain device operations, and SWD signals corresponding to a "step" event. The source code for the FPGA gateware is available here.
In order to switch between the standard voltage level (Vdd) and the glitch voltage level (Vglitch), a Maxim MAX4617 multiplexer IC was used. It is capable of switching between inputs in as little as 10ns, and is thus suitable for producing a glitch waveform on the LPC1343 power rails with sufficient accuracy and timing.
As illustrated in the image above, the Arty A7 monitors a "trigger" line: either a GPIO output from the target or the SWD lines between the target and the debugger, depending on the mode of operation. When the expected condition is met, the A7 will drive the "glitch out" line according to a provided waveform specifier, triggering a switch between Vdd and Vglitch via the power mux circuit and feeding that to the target Vcore voltage line. A Segger J-Link was used to provide debug access to the target, and the SWD lines are also fed to the A7 for triggering.
In order to facilitate triggering on arbitrary SWD commands, a barebones SWD receiver was implemented on the A7. The receiver parses SWD transactions sniffed from the bus, and outputs the deserialized header and transaction data, values which can then be compared with a pre-configured target value. This allows for triggering of the glitchOut line based on any SWD data – for example, the STEP and RESUME transactions, providing a means of timing glitches relative to single-stepped instructions.
Prior to any direct testing of glitches performed while single-stepping instructions, observing glitches during normal operation and the effects they cause is helpful to provide a base understanding, as well as to provide a platform for making assumptions which can be tested later on. To provide an environment for observing the results of glitches of varied form and duration, program execution consists of a simple loop, incrementing and decrementing two variables. At each iteration, the value of each variable is checked against a known target value, and execution will break out of the loop when either one of the conditions is met. Outside of the loop, the values are checked against expected values, and those values are transmitted via UART to the attacking PC if they differ. Binary Ninja reverse engineering software was used to provide a visual representation of the compiled C. Because the assembly presented represents the machine code produced after compiling and linking, we can be sure that it matches the behavior of the processor exactly (ignoring concepts like parallel execution, pipelining, etc. for now), and lean on that information when making assumptions about timing and processor behavior with regard to injecting faults. Though simple, this environment provides a number of interesting targets for fault injection. Contained in the loop are memory access instructions (LDR, STR), arithmetic operations (ADDS, SUBS), comparisons, and branching operations.
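The behavior of this test loop can be modeled in a few lines of Python. The real target is C compiled for the Cortex-M3, so this is only a behavioral sketch; the targets 0x10 and 0 match the values discussed later in the article:

```python
def target_loop(a=0, b=0x10, a_target=0x10, b_target=0, skip_iteration=None):
    """Behavioral model of the glitch-target loop: A counts up, B counts
    down, and execution breaks out when either variable hits its target.
    skip_iteration simulates a glitch that skips one increment of A."""
    i = 0
    while True:
        if i != skip_iteration:
            a += 1
        b -= 1
        if a == a_target or b == b_target:
            break
        i += 1
    return a, b
```

A clean run returns `(0x10, 0)`; simulating a single skipped increment produces exactly the off-by-one signature reported over UART in the results below.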
Additionally, the pulse of PIO2_6 provides a trigger for the glitchOut signal from the FPGA – depending on the delay applied to that signal, different areas/instructions in the overall loop may be targeted. By tracing the power consumption of the ARM core with a shunt resistor and transmission line probe, execution can be visualized. The following waveform shows the GPIO trigger line (blue), and the power trace coming from the LPC (purple). The GPIO line goes high for one cycle then low, signaling the start of the loop. What follows is a pattern which repeats 16 times, representing the 16 iterations of the loop. This is bounded on either side by the power trace corresponding to the code responsible for writing data to the UART and branching back to the start of the main loop, which is fairly uniform. We now have:
- A reference of the actual instructions being executed by the processor (the disassembly via Binary Ninja)
- A visual representation of that execution, viewable in real time as the processor executes (via the power trace)
- A means of taking action within the system under test which can be calibrated based on the behavior of the processor (the FPGA glitcher)
Using the above information, it is possible to vary the offset of the glitch from the trigger, and (roughly) correlate that timing to a given instruction or group of instructions being executed. For example, by triggering a glitch sometime during the sixth repetition of the pattern on the power trace, we can observe that that portion of the power trace appears to be cut off early, and the values reported over UART by the target reflect some kind of misbehavior or corruption during the sixth iteration of the loop. So far, the methodology employed has been in line with traditional fault injection parameter search techniques – optimize for visibility into a system to determine the most effective timing and glitch duration, using some behavior baked into device operation (here, a GPIO line pulsing).
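The iteration-to-offset correlation just described is simple arithmetic once the trigger-to-loop latency and per-iteration period have been read off the power trace. The numbers used below are placeholders, not measurements from this article:

```python
def glitch_delay_ns(iteration, trigger_to_loop_ns, loop_period_ns,
                    offset_in_iteration_ns=0):
    """Delay from the GPIO trigger edge to the glitch pulse, chosen so
    the pulse lands inside the given (0-indexed) loop iteration."""
    return trigger_to_loop_ns + iteration * loop_period_ns + offset_in_iteration_ns
```

For example, `glitch_delay_ns(5, ...)` targets the sixth repetition of the repeating pattern in the power trace; sweeping `offset_in_iteration_ns` then walks the glitch across the instructions inside that iteration.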
While this provides coarse insight into the effects of a successfully injected fault (for the above example, we can assume that an operation at some point during the sixth iteration of the loop was altered; any more specificity is just speculation), the fault may have been a skipped load instruction, a corrupted store, or a flipped compare, among many other possibilities. To illustrate this point, the following is the parsed, sorted, and counted output of the UART traffic from the target device, after running the glitch for a few thousand iterations of the outer loop. The glitch delay and duration remained constant, but resulted in a fairly wide spread of discrete effects on the state of the variables at the end of the loop. Some entries are easy to reason about, such as the first and most common result: B is the expected value after six iterations (16 - 6 = 10), but A is 16, and thus a skipped LDR or STR instruction may have left the value 16 in the register placed there by previous operations. However, other results are harder to reason about, such as the entries containing ASCII text, or entries where the variable with the incorrect value doesn't appear to correlate to the iteration number of the loop. This level of vagueness is acceptable in some applications of fault injection, such as breaking out of an infinite loop, as is sometimes seen in secure boot bypass techniques. However, for more complex attacks where a particular operation needs to be corrupted in just the right way, greater specificity, and thus a more granular understanding, is a necessity. And so what follows is the novel portion of the research conducted for this article: creating a methodology for targeting fault injection attacks to single instructions, leveraging debug interfaces such as SWD/JTAG for instruction isolation and timing.
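The parse/sort/count step used to produce that summary can be sketched with a `Counter`. The `A=<hex> B=<hex>` line format is an assumption about the UART output, not the article's exact wire format:

```python
from collections import Counter

def classify_results(uart_lines, expected_a=0x10, expected_b=0):
    """Tally the (A, B) value pairs reported over UART after each
    glitched run, so the spread of distinct fault effects is visible."""
    tally = Counter()
    for line in uart_lines:
        try:
            fields = dict(item.split("=") for item in line.split())
            a, b = int(fields["A"], 16), int(fields["B"], 16)
        except (ValueError, KeyError):
            tally[("garbled", line)] += 1   # corrupted/ASCII UART traffic
            continue
        if (a, b) != (expected_a, expected_b):
            tally[(a, b)] += 1
    return tally.most_common()
```

Sorting by frequency surfaces the dominant fault mode (the off-by-one results) while still recording the rarer, harder-to-explain entries.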
In addition to the research value offered by this work, the developed methodology may also have practical applications under certain, not uncommon circumstances regarding devices in the wild as well, which will be discussed in a later section.
A (Very) Quick Rundown of the SWD Protocol
SWD is a debugging protocol developed by ARM and used for debugging many devices, including the Cortex-M3 core in the LPC1343 target board. From the ARM Debug Interface Architecture Specification (ADIv5.0 to ADIv5.2): "The Arm SWD interface uses a single bidirectional data connection and a separate clock to transfer data synchronously. An operation on the wire consists of two or three phases: packet request, acknowledgement response, and data transfer." Of course, there's more to it than that, but for the purposes of this article all we're really interested in is the data transfer, thanks to a quirk of Cortex-M3 debugging registers: halting, stepping, and continuing execution are all managed by writes to the Debug Halting Control and Status Register (DHCSR). Additionally, writes to this register are always prefixed with 0xA05F, and only the low 4 bits are used to control the debug state -- [MASKINTS, STEP, HALT, DEBUGEN] from high to low. So we can track STEP and RESUME actions by looking for an SWD write transaction with the data 0xA05F0001 (RESUME) or 0xA05F000D (STEP). Because of the aforementioned bidirectionality of the protocol, it isn't as easy as just matching a bit pattern: based on whether a read or write transaction is taking place, and which phase is currently underway, data may be valid on either clock edge. Beyond that, there are also turnaround periods that may or may not be inserted between phases, depending on the transaction. The simplest solution turned out to be just implementing half of the protocol, discarding the irrelevant portions and keeping only the data for comparison.
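The 0xA05F-prefixed DHCSR writes described above are straightforward to match in software. A minimal classifier mirroring what the FPGA comparator does with the deserialized write data (the bit values come from the description above) might look like:

```python
DBGKEY = 0xA05F  # required in DHCSR[31:16] for the write to take effect

def classify_dhcsr_write(data):
    """Classify a sniffed 32-bit SWD write to DHCSR by its low control
    bits [MASKINTS, STEP, HALT, DEBUGEN], high to low."""
    if (data >> 16) != DBGKEY:
        return None                     # not a keyed DHCSR write
    bits = data & 0xF
    if bits == 0b1101:                  # MASKINTS | STEP | DEBUGEN
        return "STEP"
    if bits == 0b0001:                  # DEBUGEN only -> core runs
        return "RESUME"
    if bits & 0b0010:                   # HALT bit set
        return "HALT"
    return "OTHER"
```

In the gateware the equivalent comparison is a fixed-pattern match against the deserialized transaction data, armed once per step.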
The following is a Vivado ILA trace of the-little-SWD-implementation-that-could successfully parsing the STEP transaction sniffed from the SWD lines. So, by single-stepping an instruction and sniffing the SWD lines from the A7, it is possible to trigger a glitch the instant (or very close to it, within 10ns of) the data is latched by the target board's debug machinery. Importantly, because the target requires a few trailing SWCLK cycles to complete whatever actions the debug probe requires of it, there is plenty of wiggle room between the data being latched and the actual execution of the instruction. And indeed, thanks to the power trace, there is a clear indication of the start of processor activity after the SWD transaction completes. As can be seen above, there is a delay of somewhere in the neighborhood of 4us, an eternity at the 100MHz of the A7. By delaying the glitch to various offsets into the "bump" corresponding to instruction execution, we can finally do what we came here to do: glitch a single-stepping processor. In order to produce a result more interesting than "look, it works!", a simple script was written to manage the behavior of the debugger/processor via OpenOCD. The script has two modes: a "fast" mode, which single steps as fast as the debugger can keep up with, used for finding the correct timing and waveform for glitches; and a (painfully) "slow" mode, which inspects registers and the stack before and after each glitch event, highlighting any unexpected behavior for perusal. Almost immediately, we can see some interesting results glitching a load register instruction in the middle of the innermost loop -- in this case a LDR r3, [sp], which loads the previous value of the A variable into r3, to be incremented in the next instruction. We can see that nothing has changed, suggesting that the operations simply didn't occur or finish -- a skipped instruction.
This reliably leads to an off-by-one discrepancy in the UART output from the device: either A/B ends up 1 less/greater than it should be at the end of the loop, because one of the inc/dec operations was acting on data which is not actually associated with the state of the A variable. Interestingly, this research shows that the effectiveness of fault injection is not limited only to instructions which access memory (LDR, STR, etc.), but can also be used to affect the execution of arithmetic operations, such as ADDS and CMP, or even branch instructions (though whether the instructions themselves are being corrupted or the corruption is occurring on the APSR by which branches are decided requires further study). In fact, no instruction tested for this article proved impervious to single-step glitching, though the rate of success did vary depending on the instruction. We see here the CMP instruction which determines whether or not A matches the expected 0x10 being targeted. We see that the xPSR is not updated (meaning the zero flag is not set), and as far as the processor is concerned the CMP'd values did not match, so the values of A and B are sent via UART. However, because it was the CMP instruction itself being glitched, the reported values are the correct 0x10 and 0. Interestingly, we see that r1 has been updated to 0x10, the same immediate value used in the original CMP. Referring to the ARMv7 Architecture Reference Manual, the machine code for CMP r3, 0x10 should be 0x102b. Considering possible explanations for the observed behavior, one might consider an instruction like LDR or MOVS, which could have moved the value into the r1 register. And as it turns out, the machine code for MOVS r1, 0x10 is 0x1021, not too many bits away from the original 0x102b! While that isn't the definitive answer as to the cause of the observed behavior, it's a guess well beyond the level of information available via power trace analysis and similar techniques alone.
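That hypothesis (the 0x102b encoding of CMP r3, 0x10 corrupted into the 0x1021 encoding of MOVS r1, 0x10) can be checked by XORing the two opcode words to see exactly which bits would have to flip:

```python
def flipped_bits(a, b, width=16):
    """Bit positions (LSB = 0) at which two opcode words differ."""
    diff = a ^ b
    return [i for i in range(width) if diff & (1 << i)]

# Byte-dumped encodings as quoted in the article.
CMP_R3_0x10 = 0x102B    # observed original instruction
MOVS_R1_0x10 = 0x1021   # hypothesized corrupted instruction
```

Only two low-order bits differ, which is at least consistent with a glitch flipping a couple of bits on the instruction fetch rather than replacing the word wholesale.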
And if it is correct, we not only know what generally occurred to cause this behavior, but can even see which bits specifically in the instruction were flipped for a given glitch delay/duration. Including all the script output for every instruction type in this article is a bit impractical, but for the curious, the logs detailing registers/stack before and after each successful glitch for each instruction type will be made available in the git repo hosting the glitcher code.
I know what you're thinking. "If you have access to a device via a JTAG/SWD debugger, why fuss with all the fault injection stuff? You can make the device do anything you want! In fact, I recently read a great blog post where I learned how to take advantage of an open JTAG interface!" However, there is a very common configuration for embedded devices in the wild to which the research presented here could prove useful. Many devices, including the STM32 series (such as the DUT for this article), implement a sort of "high but not the highest possible" security mode, which allows for limited debugging capabilities but prevents reads and writes to certain areas of memory, rendering the bulk of techniques for leveraging an open JTAG connection ineffective. This is chosen over the more secure option of disabling debugging entirely because the latter leaves no option for fixing or updating device firmware (without a custom bootloader), and many OEMs may choose to err towards serviceability rather than security. In most such implementations, though, single stepping is still permitted! In such a scenario, aided by a copy of the device firmware, a probing setup analogous to the one described here, or both, it may be possible to render an otherwise time-consuming and tedious attack nearly trivial, stripping away all the calibration and timing parameterization normally required for fault injection attacks. Need to bypass secure boot on a partially locked down device?
No problem, just break on the CMP that checks the return value of is_secureboot_enabled(). Further research is required to really categorize the applicability of this methodology during live testing, but the initial results do seem promising. Further testing will likely be performed on more realistic/practical device firmware, such as the previously mentioned secure boot scenario. Additionally and more immediately, part two of this series of blog posts will continue to focus on developing a better understanding of what happens within an integrated circuit, and in particular a complex IC such as a CPU, when subjected to fault injection attacks. I have been putting together an 8-bit CPU out of 74-series discrete components in my spare time over the last few months, and once complete it will make the perfect target for this research: the clock is controllable/steppable externally, and each individual module (the bus, ALU, registers, etc.) is accessible by standard oscilloscope probes and other equipment. This should allow for incredibly close examination of system state under a variety of conditions, and make transitory issues caused by faults which are otherwise difficult to observe (for example, an injected fault interfering with the input lines of the ALU but not the actual input registers) quite clear to see.
References
- J. Gratchoff, "Proving the wild jungle jump," University of Amsterdam, Jul. 2015
- Y. Lu, "Injecting Software Vulnerabilities with Voltage Glitching," Feb. 2019
- D. Nedospasov, "NXP LPC1343 Bootloader Bypass"
- C. Gerlinsky, "Breaking Code Read Protection on the NXP LPC-family Microcontrollers," Jan. 2017, https://recon.cx/2017/brussels/talks/breaking_crp_on_nxp.html
- A. Barenghi, G. Bertoni, E. Parrinello, G. Pelosi, "Low Voltage Fault Attacks on the RSA Cryptosystem," 2009
The standard hard disk drive (HDD) has been the predominant storage device for desktop computers and laptops for a long time. However, computers with solid-state drive (SSD) technology are quickly becoming the norm. If you're looking to upgrade your hardware, you need to know the difference between HDD and SSD. This blog post is intended to help you understand the differences between an HDD and an SSD. An HDD is basically a storage device in a computer that consists of metal platters with magnetic coating, a spindle, and various moving parts to process and store data. The common size for laptop hard drives is the 2.5" model, while a larger 3.5" model is usually found in desktop computers. An SSD is another type of data storage that performs the same job as an HDD. But instead of storing data in a magnetic coating on top of platters, an SSD uses flash memory chips and an embedded processor to store, retrieve, and cache data. It is roughly the same size as a typical HDD and resembles smartphone batteries. The differences in capabilities between the two storage devices can be grouped into six categories. Despite the high costs and low capacity, however, SSDs are the clear winner over HDDs in terms of performance. While you're paying more for less memory with an SSD, you're investing in a faster and far more durable data storage option in the long run. We recommend using an SSD as the primary storage for your operating system, applications, and most-used programs. Many laptops and computers also allow you to install additional SSDs to upgrade as required if your storage needs continue to grow. Implementing an HDD as a secondary storage unit is another great idea, especially if you need a place to store documents and pictures, because they don't need to leverage the incredible access times and speeds of an SSD. Looking to invest in some new hardware for your business? Talk with our experts before you make a decision.
We can provide sound advice and help guide you in the right direction.
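If you’re not sure which kind of drive a machine already has, Linux exposes a per-device “rotational” flag in sysfs that distinguishes spinning platters from flash. The following Python sketch reads it; this is Linux-specific, and the helper names are our own, not part of any standard tool:

```python
from pathlib import Path

def drive_type(rotational_flag: str) -> str:
    # Linux reports "1" for spinning platters (HDD) and "0" for no moving parts (SSD)
    return "HDD" if rotational_flag.strip() == "1" else "SSD"

def list_drive_types() -> dict:
    # Each block device exposes /sys/block/<dev>/queue/rotational on Linux
    drives = {}
    for flag_file in Path("/sys/block").glob("*/queue/rotational"):
        device = flag_file.parent.parent.name  # e.g. "sda" or "nvme0n1"
        drives[device] = drive_type(flag_file.read_text())
    return drives
```

On a typical laptop, `list_drive_types()` might return something like `{"nvme0n1": "SSD"}`.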
Source: https://www.dcsny.com/technology-blog/what-is-the-difference-between-HHD-SSD/
Busting 7 common cybersecurity myths

May 12, 2021

While cybercrime has skyrocketed over the last decade, many people still think they will never fall victim to hackers. But the internet is not safe by default; there are plenty of criminals looking for ways to scam you. In this article, we’ll bust some of the most frequent cybersecurity myths.

1: I never browse anything inappropriate, so I’m not at any risk.

It’s a common misconception that you need to wander into the more illicit corners of the internet to put yourself at risk. However, cybercriminals have a variety of tactics for malware delivery and data theft:

Email attacks: Using a strategy called phishing, hackers often send emails in which they pretend to be a legitimate organization, like a bank or payment service. They use social engineering to convince you that their emails are genuine, and urge you to click a link in the message. The link can then trigger a malware download.

Wi-Fi spying: If you use public Wi-Fi, you’re putting yourself at risk. Hackers can create fake hotspots masquerading as legitimate Wi-Fi providers (a nearby cafe, for example). Once you’ve logged on, they’ll be able to view all of your traffic and steal any personal information you disclose. Use a VPN to encrypt your traffic when using public Wi-Fi.

Malvertising: Criminals sometimes create online adverts to lure victims into their traps; this is called malvertising. It might look like a normal banner ad, but clicking on it could take you to a new page where malware can be quietly installed on your device. In recent years, these adverts have been smuggled onto reputable websites, including Spotify and The New York Times.

As long as you’re connected to the internet, you could be at risk. That’s why it’s always best to take precautions.

2: I’m safe because I only use my smartphone.

Any device that has an operating system can be hacked, be it your phone, laptop, router, or even your smart home system.
Surprisingly, there are dozens of malicious apps that reside in the official app stores. You might think that you’re downloading a new game to your smartphone or installing a harmless photo editor, but you could be infecting your device with malware. Hackers often piggyback on the success of famous apps, creating convincing copycats. These fake apps are designed to steal your personal information, credit card details, and passwords.

3: I use antivirus software, so I don’t need to worry.

It’s true that antivirus software protects your computer and smartphone from viruses. However, it’s not enough. Hackers are always trying to find new security flaws, and antivirus can fail to recognize evolving threats. Nor will antivirus protect you from subtler manipulations; many hackers, instead of using viruses, will try to trick you into volunteering private information and passwords.

Let’s say you’re searching for a new pair of sneakers. You find a nice deal online with a recognisable retailer and continue to the payment page. A hacker could have built a fake website that looks exactly like the original, just to steal your sensitive data. These scams are more common than you might think. You have to be cautious when shopping online, using banking services, and making payments. If you’re not careful, antivirus protection will only go so far.

4: It’s only a work laptop; I don’t keep anything important on there.

75% of corporate data breaches happen because of a careless company insider. If one person falls victim to a hacker’s ploy, they could expose the whole company’s network. There are several ways in which a work laptop hack could endanger the whole organization:

Spreading malware: If the hacker can access your work email, they can send infected links to other employees. In this way, they can spread malware and hack other devices where more sensitive data could be stored.

Grand Theft Autofill: Your laptop may have a number of browser passwords saved, ready to autofill.
Saved passwords make life easier for you, and for the hacker. Taking advantage of that, a criminal could use your device as a backdoor into private databases elsewhere in the network.

Gone Phishing: Perhaps your email is still secure, even after the hack. Many companies now use internal messaging services, and employees usually stay logged in on these apps. The criminal who’s broken into your laptop could use such a service to ask your coworkers for password information or privileges, operating under your name.

Imagine a company storing the data of millions of customers. Credit card details, names, purchase histories, emails, home addresses, phone numbers: this information would be highly valued on the dark web. If this data is leaked, it could put all those people in danger and destroy your company’s reputation. The average cost of a data breach for a company is $3.86 million, and depending on the size of the organization, that number could be even higher. As many people work from home now, it’s important to use a VPN and protect your digital identity, for your own sake and that of your employer.

5: I know my computer and would notice if it had a virus.

Some viruses can reside on a computer for months before a user unwittingly activates them, while others start doing their work in the background immediately. Modern viruses are hard to notice: your system might be running smoothly and everything could seem fine…until it’s too late. If you have downloaded a virus by accident, it might take only a couple of minutes to scrape your personal details. Imagine what it can do if left to its own devices for days, or even months.

6: I have nothing to hide. Why should I protect myself?

You probably wouldn’t hand your online banking and social media passwords to a stranger. Since much of our lives revolves around digital services, every account you have increases the chances of getting hacked. For example, almost everyone would be negatively impacted by a ransomware attack.
Hackers can infect you with a special piece of malware that allows them to encrypt your hard drive, essentially locking you out of your files and your system. Unless you pay the ransom, you won’t be able to access your computer anymore. And even if you pay, you can never be sure that the perpetrators will release your files from captivity. In 2017, a ransomware strain called WannaCry infected more than 200,000 computers across 150 countries and demanded that users make payments in Bitcoin. To this day, WannaCry is still active and spreading.

7: A strong password is all I need.

A strong password is important, but the odds of you coming up with a suitably secure one on your own are slim. Hackers use brute-force and dictionary-attack tools to cycle through all the words in the dictionary, along with common numerical sequences, until one matches your password. It could take milliseconds to crack simple combinations like “iloveyou” or “123456”, and hours or even days to crack something more complex.

However, even a strong password alone is not enough. We recommend using two-factor authentication (also known as 2FA) as an extra layer of security. After typing your password, you would also have to authenticate yourself via a separate app, SMS, or token. Even if wrongdoers have stolen your password, they won’t be able to bypass the 2FA.

Get ahead of the hackers

With a VPN enabled, your device can be protected from Wi-Fi spying and man-in-the-middle attacks. Combined with some common sense and a security-first approach, this technology goes a long way to lowering the risks that everyone now faces online. NordVPN enhances privacy and security, allowing you to combat hackers preemptively. It redirects your traffic through an encrypted tunnel and ensures that your data is for your eyes only.

You can find the original blog post here at NordVPN.
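To see why short, common passwords fall in moments while long random ones hold up, you can estimate the search space directly. Here is a back-of-envelope Python sketch; the ten-billion-guesses-per-second rate is an illustrative assumption, not a measured figure:

```python
def keyspace(length: int, alphabet_size: int) -> int:
    # Total number of candidate passwords of a given length
    return alphabet_size ** length

def avg_crack_seconds(length: int, alphabet_size: int,
                      guesses_per_sec: float = 1e10) -> float:
    # On average, an attacker succeeds after searching half the keyspace
    return keyspace(length, alphabet_size) / 2 / guesses_per_sec

weak = avg_crack_seconds(8, 26)     # 8 lowercase letters: about 10 seconds
strong = avg_crack_seconds(12, 94)  # 12 printable-ASCII characters
```

At that assumed rate, the eight-letter lowercase password falls in roughly ten seconds, while the twelve-character one would take on the order of hundreds of thousands of years, which is exactly why length and alphabet size matter more than cleverness.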
Source: https://nordsecurity.com/blog/cybersecurity-myths
Businesses use authentication and authorization solutions to positively identify users and control access to applications and IT systems.

Authentication refers to the process of validating a user’s identity. Usernames and passwords are the most basic and familiar forms of authentication. Authorization refers to the process of granting a user permission to access specific resources or capabilities once their identity is verified. For example, a system administrator might be granted root-level or superuser privileges to a resource, while an ordinary business user might be granted restricted access or no access at all to the same resource.

Most identity and access management (IAM) solutions provide both authentication and authorization functionality and can be used to tightly control access to on-premises and cloud-based applications, services, and IT infrastructure. Access management solutions help ensure the right users have access to the right resources at the right times for the right reasons.

Basic authentication methods that require only username and password combinations are inherently vulnerable. Threat actors can carry out phishing attacks or other schemes to harvest credentials and pose as legitimate users to steal data or perpetrate attacks. Most IAM solutions support multi-factor authentication (MFA) functionality to protect against credential theft and user impersonation. With MFA, a user must present multiple forms of evidence to gain access to an application or system, for example a password and a one-time, short-lived SMS code.

Authentication factors include:
- Knowledge factors – something the user knows, such as a password or an answer to a security question
- Possession factors – something the user has, such as a mobile device or proximity badge
- Inherence factors – something biologically unique to the user, such as a fingerprint or facial characteristics
- Location factors – the user’s geographic position

Many modern IAM solutions support adaptive authentication methods, using contextual information (location, time of day, IP address, device type, etc.) and business rules to determine which authentication factors to apply to a particular user in a particular situation. Adaptive authentication balances security with user experience.

Many IAM solutions support Single Sign-On (SSO) capabilities that allow users to access all their applications and services with a single set of credentials. SSO improves user experiences by eliminating password fatigue, and strengthens security by eliminating risky user behaviors like writing passwords on paper or using the same password for all applications. Many IAM solutions support standards-based identity management protocols such as SAML, OAuth, and OpenID Connect to enable SSO federation and peering.

Most IAM solutions provide administrative tools for onboarding employees and managing access privileges throughout the employee lifecycle, including separation and the offboarding process. Many of these solutions support role-based access controls (RBAC) to align a user’s privileges with their job duties. RBAC helps prevent privilege creep and simplifies administration when employees change jobs or leave an organization. Many IAM solutions also support self-service portals and automated approval workflows that let employees request access rights and update account information without help desk intervention.
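The authenticate-then-authorize flow described above can be sketched in a few lines of Python. This is a simplified illustration, not any particular IAM product’s API; the role names and permission sets are hypothetical:

```python
import hashlib
import hmac
import os

# Hypothetical role-to-permission mapping (role-based access control)
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "user": {"read"},
}

def hash_password(password: str, salt: bytes) -> bytes:
    # Slow, salted hash; plaintext passwords are never stored
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def authenticate(stored_hash: bytes, salt: bytes, attempt: str) -> bool:
    # Authentication: validate identity with a constant-time comparison
    return hmac.compare_digest(stored_hash, hash_password(attempt, salt))

def authorize(role: str, action: str) -> bool:
    # Authorization: check permissions only after identity is verified
    return action in ROLE_PERMISSIONS.get(role, set())

salt = os.urandom(16)
stored = hash_password("s3cret-passphrase", salt)
```

With this sketch, `authenticate(stored, salt, "s3cret-passphrase")` returns `True`, while `authorize("user", "delete")` returns `False`: the ordinary user is identified correctly but still denied the privileged action, which is the distinction the article draws.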
Source: https://www.cyberark.com/ja/what-is/authentication-authorization/
Virtual personal assistants (VPAs), also known as smart assistants, like Amazon’s Alexa and Google’s Assistant, are in the spotlight for vulnerabilities to attack. Take, for example, the incident in which an Oregon couple’s Echo smart speaker inadvertently recorded their conversation and sent it to a random contact. Or the time Alexa started laughing out of the blue. Indeed, something has to be done about these hacks, whether they’re by accident or not.

Earlier this month, researchers from Indiana University, the Chinese Academy of Sciences, and the University of Virginia found exploitable weaknesses in the VPAs above. The researchers dubbed the techniques they used to reveal these weaknesses voice squatting and voice masquerading. Both take advantage of the way smart assistants process voice commands. Unsurprisingly, they also exploit users’ misconceptions about how such devices work.

How smart assistants work

VPA services used in smart speakers can do what they’re created to do with the use of apps called “skills” (by Amazon) or “actions” (by Google). A skill or an action provides a VPA with additional features. Users can interact with a smart assistant via a voice user interface (VUI), allowing them to run a skill or action using their voice.

Entrepreneurs, with the help of developers, are already taking advantage of this, creating their own voice assistant (VA) apps to cater to client needs, make their services accessible on the voice platform, or simply introduce an enjoyable experience to users. As of this writing, the smart assistant app market is booming. Alexa alone already has tens of thousands of skills, thanks to the Alexa Skills Kit. Furthermore, Amazon has recently released Alexa Skill Blueprints, making skill creation easy for anyone with little to no knowledge of coding. Unfortunately, the availability of such a kit to the public has made abuse by potential threat actors possible, making the VPA realm an entirely new attack vector.
If an attack is successful, and the study the researchers conducted proved that it can be, a significant number of users could be affected. They concluded that remote, large-scale attacks are “indeed realistic.”

Squatters and masqueraders

Voice squatting is a method wherein a threat actor takes advantage of or abuses the way a skill or action is invoked. Let’s take an example from the researchers’ white paper. If a user says, “Alexa, open Capital One” to run the Capital One skill, a threat actor can create a malicious app with a similarly pronounced name, such as Capital Won. The command meant for the Capital One skill is then hijacked to run the malicious Capital Won skill instead. Also, as Amazon now rewards kids for saying “please” when commanding Alexa, a similar hijacking can occur if a threat actor registers a paraphrased name like Capital One Please or Capital One Police.

“Please” and “police” may mean two totally different things to us, but to current smart assistants these words sound alike, as the assistants cannot reliably distinguish one invocation name from another similar-sounding one. Suffice it to say, VPAs are not great at handling homophones.

Voice masquerading, on the other hand, is a method wherein a malicious skill impersonates a legitimate one to either trick users into giving out their personal information and account credentials or eavesdrop on conversations without user awareness. Researchers identified two ways this attack can be made: in-communication skill switch and faking termination. The former takes advantage of the false assumption that smart assistants readily switch from one skill to another once users invoke a new one.
Going back to our previous example, if Capital Won is already running and the user asks, “Alexa, what’ll the weather be like today?”, Capital Won pretends to hand over control to the Weather skill when, in fact, it is still running, this time impersonating the Weather skill. As for the latter, faking termination abuses voluntary skill termination, a feature wherein skills can self-terminate after delivering a voice response such as “Goodbye!” to users. A malicious skill can be programmed to say “Goodbye!” but remain running and listening in the background for a given length of time.

But…I like my smart assistant!

No need to box up your smart speakers and send them back if these vulnerabilities worry you. But it is essential for users to really get to know how their voice assistants work. We believe that doing so can make a significant difference in maintaining one’s privacy and protection from attack.

“Making devices, such as Alexa, responsible for important systems and controls around the house is concerning, especially when evidence emerges that it’s able to turn a simple mistake into a potentially serious consequence,” our very own malware intelligence analyst Chris Boyd said in an interview with Forbes.

Smart assistants, and IoT in general, are still fairly new tech, so we expect improvements in the AI, security, and privacy efforts within this sector. Both Amazon and Google have claimed they already have protections against voice squatting and voice masquerading. While the researchers have met with both firms to help them understand these threats further and offer mitigating steps, they remain skeptical about whether the protections put in place are indeed adequate. Only time will tell.
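The homophone problem can be demonstrated with a classic phonetic hashing algorithm. The sketch below implements Soundex, a deliberately crude stand-in for the far more sophisticated speech models real VPAs use (this is not what Alexa actually runs), and shows that “Capital One” and “Capital Won” collapse to the same phonetic code:

```python
def soundex(name: str) -> str:
    # Classic Soundex: one letter plus three digits describing how a word sounds
    digits = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            digits[ch] = digit
    word = "".join(ch for ch in name.lower() if ch.isalpha())
    if not word:
        return ""
    code = []
    prev = digits.get(word[0], "")
    for ch in word[1:]:
        if ch in "hw":            # h and w are transparent: they never reset prev
            continue
        cur = digits.get(ch, "")  # vowels map to "" and reset prev
        if cur and cur != prev:
            code.append(cur)
        prev = cur
    return (word[0].upper() + "".join(code) + "000")[:4]

# The two invocation names from the researchers' example hash identically
assert soundex("Capital One") == soundex("Capital Won")
```

Any skill-name screening based purely on how a phrase sounds has to contend with exactly this kind of collision, which is why the researchers argue that similar-sounding invocation names are so hard to police.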
Source: https://www.malwarebytes.com/blog/news/2018/05/security-vulnerabilities-smart-assistants
How often do you see smart technology in headlines? The term is thrown around a lot because there are countless products circulating in both the consumer and business environments. These connected devices range from smart forks to smart cars, so you can imagine that there are a fair number of useful (as well as useless) applications of this technology. How has it changed in recent months, and what will smart technology look like in the future?

For Your Person

Smart products are produced for a variety of reasons, one of the more practical being monitoring your physical wellbeing. One of the best and most recognizable devices for this purpose is Fitbit, which has ushered other companies dedicated to health monitoring devices through the door. Health bracelets, watches, and smart fabrics all contribute to this trend of connected devices encouraging people to care more about their health using technology. These devices contain microprocessors, sensors, and energy sources, along with the hopes and dreams of engineers who have worked tirelessly to create some of the most compact technology solutions on the market today. These devices are so popular that 125 million wearables were shipped in 2017 alone.

Here are some of the most popular smart technologies on the market today:
- Fitness trackers: The simplest fitness trackers can count your steps and estimate calorie loss, as well as distance traveled, sleep quality, and speed.
- Smart watches: Some smart watches can track fitness information, and offer other features that work with apps on your mobile device.
- Smart clothes: As you might imagine, most smart clothes are designed with fitness in mind. There are self-cooling shirts that react to your body temperature, as well as yoga outfits designed to help your posture.

Automobiles have also made great strides in smart technology.
You’ll see cars that offer everything from heads-up displays to screens that can showcase all kinds of content. Some cars are even capable of driving themselves, though this technology is still developing. The concept, however, is that connected technology is fueling future transportation initiatives, and it’s thought that this will become a major part of the automobile industry.

Technically, smart technology has been used in cars since 1996. Every car manufactured since then has a built-in on-board diagnostic system, which helps mechanics understand some of the intricacies of vehicles by accessing data stored by the computerized system. You may have even used some of these diagnostic tools yourself when your Check Engine light turns on.

Some smart enhancements go beyond the practicality of diagnostics and simply make the user experience better. There are navigation systems designed to help users make their way to their destination, as well as control interfaces for temperature, media, and gear status. There are cameras that activate while your vehicle is in reverse to help drivers back up safely, as well as side sensors that detect when something is a little too close for comfort.

Augmented reality could also make a move into smart cars. Windshields and rear-view mirrors with augmented reality can (and likely will) become standard features on new automobiles. Google and Apple have begun to design devices for integration with these smart car features, allowing for a consistent connection for use with music or other media.

For Your Home

Most smart devices are designed for use in the home. Some have no business being connected, but others are designed to save on energy costs and serve other practical uses. The most popular smart devices include Amazon Echo and Google Home; since they can perform many different roles and control other devices, they are quite helpful.
Other devices used in the smart home include locks, cameras, lights, thermostats, and anything else that can be controlled digitally through a smartphone app. Several of today’s most used appliances come with smart technology installed, including refrigerators, ranges, faucets, washers, dryers, dishwashers, vacuums, and more. Questions have arisen about the practicality of many of these smart appliances, but their major draw is that they get smarter and more efficient depending on how they are used. They can lead to lower energy costs in the long run and effectively offer value for longer than a normal device. Just about all smart technology has this principle in mind.

While smart technology generally comes with a higher price tag than usual, this is only because it hasn’t yet become mainstream. Once more of these devices flood the market, prices will drop considerably.

How will your organization leverage smart technology in the future? To find out, reach out to the IT professionals at COMPANYNAME.
Source: https://www.activeco.com/varieties-of-smart-tech-to-consider/
By: Andrew Fray

While technology has evolved relentlessly over the last 20 years, the advances that sit on the horizon are set to far outstrip this recent surge. As the pace of development accelerates, consequently increasing the complexity of technology, businesses will become ever more reliant on a robust, reliable infrastructure.

Twenty years ago, we were just about seeing dial-up internet enter UK homes, while fixed broadband has only been a luxury we’ve enjoyed over the past decade. But in those ten years the pace of technological change has picked up exponentially, starting with the spike in mobile broadband and the rise of GPS, through the boom in cloud computing, the growth of the Internet of Things, and the increasing use of artificial intelligence tools like Siri and Alexa in our everyday lives.

Technology innovation set to accelerate

The pace of innovation over the next couple of decades is likely to well surpass these recent advances. The IoT is only going to expand as the number of devices in circulation continuously rises. Furthermore, AI and AR are likely to be used far more intelligently as businesses implement techniques to better understand customer behaviour and enhance product development, while blockchain adoption is predicted to increase as organisations continue to innovate business models and processes.

Further afield, innovations like delivery drones and driverless cars, which have been speculated about for many years, are a realistic possibility in the very near future. More advanced technologies such as communications satellites are more forward-looking but have the potential to revolutionise our daily lives by providing a viable alternative to fibre broadband. The likes of Starlink and Samsung are leading the way here, with around 16,000 communications satellite launches planned between them by 2025.
However, like a car without an engine, this level of digital innovation will be rendered useless without the right supporting infrastructure. Driverless cars, and any AI- or IoT-powered object for that matter, will burn up vast amounts of processing power and create massive amounts of data, which will necessitate a robust backbone.

Data centres encourage innovation

Data centres are the lifeblood of this emerging technology revolution. No matter how exciting or innovative an AI or IoT application is, if it offers a poor user experience it will have little chance of success. The key to ensuring that these technology trends flourish will be housing servers and growing volumes of data within data centres at the heart of the world’s biggest cities and business markets. This offers close proximity to both consumers and other businesses, guaranteeing the fastest, most secure access to data, the highest levels of connectivity and bandwidth, and minimal latency.

While high real-estate prices make it hugely expensive for businesses to build their own data centres in these locations, colocation offers a more attractive alternative that can reduce infrastructure costs, increase availability, and reduce latency. Carrier- and cloud-neutral facilities provide highly connected, secure, scalable infrastructure with minimal overheads and maximum convenience.

We’re seeing businesses increasingly choose to colocate in central London to reap the benefits of direct access to Europe and beyond. With eight major CDNs present, as well as more than 90 connectivity providers, our London data centre campus ensures the perfect home for enterprises’ data. All this is backed by a resilient and secure environment with strict SLAs for performance and availability, ensuring the best possible user experiences for customers. With the right data centre strategy, businesses can bring tomorrow’s technologies ever closer to reality.
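The physical argument for proximity is easy to quantify: light in optical fibre travels at roughly two-thirds of its vacuum speed, or about 200 km per millisecond. This back-of-envelope Python sketch uses straight-line distances and ignores routing hops and processing delays, so it gives a best case only; the example distances are approximate assumptions:

```python
# Light in optical fibre covers roughly 200 km per millisecond (about 2/3 c)
FIBER_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    # Best-case round-trip time: out and back, no routing or queuing delays
    return 2.0 * distance_km / FIBER_KM_PER_MS

metro = min_rtt_ms(20)            # within a metro area: ~0.2 ms
transatlantic = min_rtt_ms(5500)  # roughly London to New York: ~55 ms
```

Even in this idealised model, a transatlantic round trip costs tens of milliseconds before a single packet is processed, which is why latency-sensitive services colocate close to their users.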
Find out how Interxion’s expanding London data centre campus can support your technology goals with unrivalled connectivity in the heart of the capital.
Source: https://www.interxion.com/ie/blogs/2018/11/how-data-centres-fuel-technology-innovation
There is much talk about the moral obligation of protecting our planet amidst the climate crisis, particularly in light of the recent United Nations Intergovernmental Panel on Climate Change (IPCC) report finding that climate change is harming the planet faster than humans and nature can adapt. That moral obligation is typically framed as being good stewards for future generations: reduce, reuse, recycle. While all of this is true, we need to think beyond these parameters.

The morality of growth

One element to add to this discussion is the moral imperative for economic growth. You don’t hear that argument made very often, but you should. Why? Because nearly 10 percent of the world’s population is living in extreme poverty. According to the World Bank, “Global extreme poverty rose in 2020 for the first time in over 20 years as the disruption of the Covid-19 pandemic compounded the forces of conflict and climate change, which were already slowing poverty reduction progress.”

Those living in extreme poverty are the people most directly harmed by the effects of climate change. Climate change will drive 68 million to 132 million people into poverty by 2030, predominantly in the regions most susceptible to environmental change and where the majority of the global poor are concentrated: Sub-Saharan Africa and South Asia. This population also depends disproportionately on resources provided by nature for its survival. As the International Monetary Fund (IMF) noted, “it is the populations of these economies most vulnerable to climate change who contribute the least to the accumulation of greenhouse gases.”

IBM founder Thomas Watson Sr. is famously quoted as saying “world peace through world trade” as part of his hopeful belief in the power of free trade. I am a firm believer in the idea that the more we trade, the more opportunities we create, the healthier we are, and the fairer the world becomes. Capitalism can be the tool of change.
Behaviors are altered by both the power of the dollar and the threat of withholding it. Don’t get me wrong: unfettered capitalism is bad. We need rules and laws to ensure that those who steal a company’s pension fund, for example, are punished. We also need taxes as a form of contribution, but taxes alone can’t solve this problem.

If we recognize that we are our brother’s keeper and embed that into our thinking, the moral imperative for a dream starts to matter. The people of Africa deserve to have houses made of stone and concrete that don’t get washed away once a year in the monsoons. Children should be able to get an education without having to worry about hunger after long walks to school, or without sitting under a streetlight to do homework because that is the only place in the village with electricity. Do we continue to make choices that consign them to the margins forever, or do we help them find a path to becoming middle class one day?

We can each make a difference

It is important to remember that we each have the power to make a difference. It’s easy to look at the incredible work an organization like the Bill & Melinda Gates Foundation does and admire it, but feel disconnected from our own ability to make that same kind of change. While we certainly need more billionaires making these kinds of investments in humanity, we also need to stop assuming that we as individuals cannot make a meaningful difference. We need to recognize the privilege we carry and the obligation we have to do something more.

There is a rising generation of young people who care passionately about both of these issues. They are willing to speak out, and if we can provide the right tools, they will also be empowered to take action. The heroes in this effort will come from unexpected places, and the work required will not be easy. The competitive ability to survive and to keep our planet whole requires progress to continue.
We have to encourage economic growth because that is what stops people - and our planet - from dying.
Source: https://direct.datacenterdynamics.com/en/opinions/the-moral-imperative-for-economic-growth-for-sustainability/
Novel Quantum Sensor Provides New Approach to Early Diagnosis Via Imaging

(ImagingTechnologyNews.Online) Oxygen is essential for human life, but within the body, certain biological conditions can transform oxygen into aggressively reactive molecules called reactive oxygen species (ROS), which can damage DNA, RNA, and proteins. Since oxidative stress is associated with various serious diseases, its detection within living organs offers a route to early diagnosis and preventive treatment, and is thus a matter of considerable interest to scientists working in the field of biomedicine.

A recent international collaboration between the Japanese National Institutes for Quantum and Radiological Science and Technology (QST), the Bulgarian Academy of Sciences, and Sofia University St. Kliment Ohridski in Bulgaria has led to a promising technology for this purpose: a novel quantum sensor. According to lead scientist Rumiana Bakalova, M.D., and her colleague Ichio Aoki, M.D., of QST, “the new sensor is appropriate for the early diagnosis of pathologies accompanied by inflammation, such as infectious diseases, cancer, neurodegeneration, atherosclerosis, diabetes, and kidney dysfunction.”

This work is in its initial stages, and much research is required before these sensors can be ready for medical use. But these findings reveal the potential of such technology. Bakalova noted: “Our sensor is suitable for analyzing even small redox imbalances associated with the overproduction of ROS, via MRI. And while MRI and CT by themselves have been able to diagnose advanced-stage kidney damage, they have not yet been able to visualize early stages of dysfunction. The use of our probe could help clinicians identify patients in the early stage of renal damage before they need hemodialysis or kidney transplantation.
With further research, our sensor could be the next generation of redox-sensitive contrast probes for early diagnosis of kidney dysfunction, and perhaps, a number of other diseases that are accompanied by inflammation.”
The Berkeley Marvell NanoLab: Powered by Actian X
When it comes to pure science excitement, not many places can top the Marvell Nanofabrication Laboratory (NanoLab) at the University of California, Berkeley. It's a 15,000-square-foot state-of-the-art cleanroom facility designed to facilitate the development of next-generation microelectronics, optoelectronics, superconducting devices, micro- and nano-electromechanical systems, miniaturized integrated sensor platforms, and more. It's available 24/7 to some 450 researchers from academia, national laboratories, and local industry. And access it they do: During the 2018-2019 academic year, researchers logged more than 60,000 hours in the lab. But therein lies a challenge, too: The technologies available to researchers are in high demand—e-beam nanolithography writers (50kV and 130 kV), step-and-repeat reduction cameras (365nm i-line and 248nm deep-UV), a 375nm direct-write laser patterning system, a 4″/6″ silicon wafer processing area with LPCVD and atmospheric furnaces, process-specific plasma etchers, wet processing stations, and much more—but are not present in high quantities. Two of these, one of those, maybe three of another. Time on each system must be scheduled in advance; calibration and maintenance efforts have to be conducted routinely and at times when the equipment is least in demand. Parts and supplies must be stocked; accounts must be kept; usage must be billed. Management of this sophisticated lab was not going to be left to chance: Early in the development of the NanoLab, the lab leadership team decided to build its own laboratory management system (LMS). Developers wanted to build the LMS around a database, and the one they ultimately chose provided a combination of flexibility, reliability, and high performance befitting the sophistication of the laboratory it was intended to support: the database now known as Actian X.
The LMS powering the NanoLab can trace its history to the early ’80s when researchers at Berkeley developed the Berkeley Computer Integrated Manufacturing System (BCIMS). This was an integrated set of more than 200 distinct programs all running on a UNIX operating system with a terminal-based interface called The Wand. All these programs stored their activities in the underlying Ingres relational database—a well-regarded database platform also developed at Berkeley. The Wand and Ingres powered the predecessor of the NanoLab, the Berkeley Microlab, for more than 20 years. In the late ‘90s, it was time to plan for the next generation laboratory and it would need a next generation LMS. Berkeley developers started collaborating on a system with their counterparts in labs run by Stanford and MIT. Over time, though, the developers at Berkeley chose to move in a different direction. “Berkeley elected not to continue with this group effort,” says Dr. Bill Flounders, Executive Director of the NanoLab. “At the time, there were no turnkey laboratory management software solutions available, but we considered the target product that Stanford and MIT were developing to be too generic for best management of the Berkeley facility.” Utilizing the best tools and techniques available, the developers at Berkeley went on to create their own LMS, Mercury. Given the developers’ familiarity with the power and flexibility of the Ingres relational database, Ingres was the natural choice for use in this powerful new laboratory management system. While Ingres changed hands and names over the course of time—today it is known as Actian X—its critical role at the heart of Mercury has not changed. The database has been updated regularly—the NanoLab is now running Actian X v11— but there has never been any need or desire to supplant it as it continues to provide the performance, flexibility, and reliability that the NanoLab requires. 
And that’s been true even as the needs of the NanoLab have themselves evolved. “Actian X is the backbone of Mercury,” says Dr. Flounders. “It controls all aspects of the NanoLab: accounting, chemicals and parts inventories, purchasing, member administration, tracking cleanroom and equipment use, equipment and utility problem reports, online tests/qualifications, cleanroom environment (temperature, humidity, air flow), specialized utilities (deionized water system, specialty gases), monitoring and alerts, even machine shop job assignment, tracking, and billing.” Indeed, what started as a (relatively straightforward) laboratory management system has evolved over time. NanoLab developers extended the original LMS functionality of Mercury to incorporate the functionality required for machine shop management and subsequently extended it again to accommodate the functionality required to support broader utility management. “New modules are being developed and added regularly,” says Dr. Flounders. “This year we added the gas cylinder management program. Next year, we release Mercury X which will be a full overhaul of the graphical user interface, but it will still be Actian X providing the foundation.” The Mercury system itself is written in Java, with a web interface (which researchers can access from anywhere in the world) written using the JavaServer Faces (JSF) application framework. The underlying Actian X database, running on Red Hat Linux, includes two live databases containing more than 300 tables. One table holds more than 11 million rows of data by itself. Mercury also maintains a third database in Actian X for historical data. It is not actively updated but is often consulted when historical insights are needed. Because of Mercury’s central role in the delivery of services within the NanoLab, the Actian X database is crucial to smooth operations. 
Indeed, every functional aspect of Mercury—from accounting and purchasing to inventories and equipment use to utility systems monitoring, problem reporting and tracking, and job tracking—involves one or more interactions with Actian X, so the database must deliver ongoing reliability and high performance. And that it does. “Actian X provides us with a stable, well-supported relational database management system,” says Computer Systems Manager Olek Proskurowski. “It is powerful, dependable, and easy to administer, so we can essentially start it and forget about it. It just doesn’t need ongoing attention. Even patch management and upgrades to new versions of Actian X do not require a lot of effort.” The benefits of Actian X extend well beyond transactional matters such as scheduling equipment and charging for services. Actian X also enables NanoLab planners to analyze tool usage and utility dependencies throughout the lab. By monitoring for utility consumption spikes, NanoLab engineers have been able to spot malfunctioning equipment quickly. The accounting team can also access detailed historical billing records from the Actian X database which they can use to model the impact of new lab recharge rates. “We can access all the usage data from a previous fiscal year to estimate revenue gain or loss based upon changes to multiple lab recharge rate categories,” says Dr. Flounders. Indeed, as an LMS, the combined power of Mercury and Actian X has proven its value time and again at the NanoLab—so much so that other labs on campus have been making inquiries about the possibility of using the system to manage their operations. The opportunities are not lost on Dr. Flounders and the computer team. Says Dr. Flounders, “we have analyzed Mercury to determine which aspects of the solution are unique to the NanoLab and which are common to other environments—and separated our system into group functionalities accordingly. 
The core functions would be valuable to a wide range of facilities, and we’re looking at offering a version of Mercury that delivers that functionality. There are specialized functions that are unique to our facility and unlikely to be offered in a separate module, but there are still other functions that we’re planning to incorporate in future versions of Mercury that may well be of general interest and could be added to the module we offered to others. Mercury and Actian X certainly work well for us; we imagine they would work quite well for others, too.”
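The rate-modeling exercise Dr. Flounders describes, estimating revenue gain or loss from changes to recharge rates, reduces to a small calculation over historical usage. The category names and figures below are hypothetical illustrations, not Mercury's actual schema or Berkeley's rates:

```python
# Sketch of recharge-rate modeling from a prior year's billed usage.
# Rate categories and numbers are invented for illustration; the real
# data would come from the historical billing tables in Actian X.

def estimate_revenue_delta(usage_hours, old_rates, new_rates):
    """Estimate the revenue change if new recharge rates had applied
    to last year's billed hours in each rate category."""
    old_revenue = sum(h * old_rates[cat] for cat, h in usage_hours.items())
    new_revenue = sum(h * new_rates[cat] for cat, h in usage_hours.items())
    return new_revenue - old_revenue

usage = {"academic": 40000, "industry": 15000, "national_lab": 5000}
old = {"academic": 50.0, "industry": 120.0, "national_lab": 80.0}
new = {"academic": 55.0, "industry": 120.0, "national_lab": 90.0}

delta = estimate_revenue_delta(usage, old, new)
```

Running the same calculation across several candidate rate tables is exactly the "estimate revenue gain or loss" analysis the accounting team performs against the historical database.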
CogHMS Supports RFID
CogHMS provides an identification facility through radio-frequency identification (RFID). RFID is a technology that electronically records the presence of an object using radio signals, and it is widely used for identifying and tracking persons, objects, and more. While fingerprinting and voice recognition have been around for a while, so have the hackers who have time and again proven those systems inadequate, and they are also costly. RFID tags are relatively cheap, easy to use, and secure enough (a 32-bit identifier allows over four billion possible combinations, making it very difficult to crack). They can be used on doors with a sensor to grant or deny entry, or as authentication for computer systems. And that's not all: RFID can be used elsewhere too! Research on the use of RFID for inventories in hospital supply chains – medicines, materials, devices, and office supplies – reveals that RFID technology can help hospitals cut as much as 18 percent of the labor costs associated with resupplying. On average, supplies and inventory account for 30 to 40 percent of a typical hospital's budget, according to the research. New RFID technology makes continuous replenishment easier: when an item runs low, a signal is sent to the storeroom indicating that replenishment should be considered for that item. RFID tags can form part of a hospital wristband, a blood product label, a biomedical implant, or any medical device. They can be tiny or large, rigid or flexible. Unlike barcodes, tags can also be read from meters away, for example by an interrogator mounted on the ceiling or beside a door. On perishable items like blood bags, the tags shortcut the search process by storing all information – including a record of ambient temperature over time – on each bag's re-recordable RFID tag. Staff can quickly find blood bags by scanning up to 400 bags per second and drilling down to see all the information associated with any bag. Because finding the right bag in subarctic storage temperatures used to take so long, staff might have cut a search short by sending a 28-day-old bag of blood of the correct type. Now the optimal bag – the one closest to expiry – is quickly found and put to use, maximizing a precious resource.
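The "closest to expiry" selection described above is simple to express once each bag's tag data can be read in bulk. The sketch below assumes hypothetical tag fields (blood type and expiry date); it illustrates the selection rule, not CogHMS itself:

```python
from datetime import date

# Each dict stands in for the data read off one bag's RFID tag.
# Field names are invented for illustration; real tag layouts vary.

def select_optimal_bag(bags, blood_type, today):
    """Pick the non-expired bag of the requested type closest to expiry."""
    candidates = [b for b in bags
                  if b["type"] == blood_type and b["expiry"] >= today]
    if not candidates:
        return None
    return min(candidates, key=lambda b: b["expiry"])

bags = [
    {"id": "B1", "type": "O+", "expiry": date(2024, 3, 20)},
    {"id": "B2", "type": "O+", "expiry": date(2024, 3, 5)},   # closest to expiry
    {"id": "B3", "type": "O+", "expiry": date(2024, 2, 1)},   # already expired
    {"id": "B4", "type": "A-", "expiry": date(2024, 3, 2)},   # wrong type
]
best = select_optimal_bag(bags, "O+", today=date(2024, 3, 1))
```

The rule maximizes use of the supply: the bag nearest expiry goes out first, while expired bags and wrong-type bags are filtered out before selection.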
Machine Identity Management Explained
We spend considerable time and focus on securing identities used by individuals and groups within our environment. While these are essential activities, we sometimes lose sight of a whole other set of identities, often highly privileged, that are just beneath the surface: the machine identities in our environment. Read on to understand what machine identities are, the implications of security risks to these identities and accounts, and best practices for machine identity management across your environment.
What are Machine Identities & How are they Used?
Let's get a couple of important definitions out of the way. In computing, a "machine" can refer to any non-human entity. So a machine, and thus a machine identity, can encompass applications, software robots (such as those found in robotic process automation (RPA) workflows), endpoints (servers, desktops, IoT, etc.), websites, containers, service accounts, and much more. A machine identity can be defined as a mechanism that allows people and other machines, applications, systems, and processes to have confidence that the machine with which they are communicating is the one they expect it to be. These identities are used within systems, over LAN/MAN/WAN, and via Bluetooth, Wi-Fi, and the internet, to name a few. Just about every communication between two 'machines' is identified at some level, from simply allowing certain network addresses to communicate through a firewall, up to multifactor authentication (MFA) involving certificates, keys, IP address, and location services. Every website you visit should be using the secure HTTP protocol (HTTPS) and applying TLS 1.2 or 1.3 to encrypt that connection. This ensures the machine identity of the website is correct.
Common approaches to authenticating machine identities include:
- Secrets – something that the machine has and can present as part of an authentication
- Digital certificates – an electronic document proving ownership of a public key (often used with websites to ensure the web server is really in the domain referenced)
- Username and password – just as you would expect to use yourself
- IP address – where addresses are assigned by system administrators and are rarely changed
Challenges with Managing and Protecting Machine Identities
The biggest challenge with machine identities is that the 'machine' needs access to the identity to use it. It seems self-evident, but it's worth highlighting: humans have a couple of unique capabilities that we simply can't give to a machine. The first is that humans can remember something. You might argue that machines can also 'remember' things – they have RAM and disks where data can be stored. While this is true, unlike with a machine, removing your 'storage device' and installing it in another 'system' does not, currently, yield access to your passwords. And while passwords can be coaxed, or bought, from a human, it is highly likely that the person will be aware it has happened. Stealing a machine identity can be done in complete stealth, unless some important cybersecurity controls have been properly implemented. We can try to further secure machine identities by removing them from the machine itself and storing them elsewhere, e.g., in a Hardware Security Module (HSM). This does increase security where theft of the machine itself is the concern, which is valuable in virtualized environments, where a machine can be 'stolen' through cloning: the machine remains in place, but a copy is removed for analysis. Unfortunately, this approach does not help when a malicious user is already within your network.
The HSM stays accessible to them, and the machine identity used to authenticate the machine to the HSM is, necessarily, on the machine itself – and on the cloned machine.
How can we improve security of machine identities?
There are many ways identity management and security pros try to secure machine identities. Let's now look at five of the most important security fundamentals to strengthen protection around machine credentials and identities.
1. Vulnerability Management
Most attackers will arrive in your environment through a laptop or workstation. These are the endpoints that access external systems across the public internet. They are also the devices most likely to have USB sticks plugged into them, and they are subject to the lowest levels of control. The users have non-privileged accounts so there is less risk – right? This is true, assuming your users do indeed all have non-privileged accounts and abide by the principle of least privilege (PoLP). However, most successful attacks exploit vulnerabilities in the system and its software to gain access to privileged accounts – user and machine. The first of the essential cybersecurity controls for protecting machine accounts and identities is an effective vulnerability management system (VMS). Most organizations run a VMS, but analysis from cyber breach reports tells us, year after year, that well-known and entirely preventable vulnerabilities continue to be a primary route to privilege. Vulnerabilities are classified in a variety of ways to help us assess the risk associated with each. This assessment started with a simple high/medium/low/informational classification and has since added the Common Vulnerability Scoring System (CVSS), now across a number of iterations, in which various parameters are scored to deliver a 0-10 rating. Perhaps the most valuable piece of information your VMS can give you is the number of known exploits for each vulnerability.
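Acting on that information is largely a sorting problem: known-exploited vulnerabilities first, then by CVSS score within each group. A minimal triage sketch follows; the finding fields are illustrative, not any particular VMS's output format:

```python
# Triage sketch: order findings so vulnerabilities with known exploits
# come first, highest CVSS first within each group.
# Field names ("cve", "cvss", "known_exploits") are invented examples.

def triage(findings):
    # Sort key: (has no known exploits?, negative CVSS) -- exploited
    # findings sort before unexploited ones, then severity descends.
    return sorted(findings,
                  key=lambda f: (f["known_exploits"] == 0, -f["cvss"]))

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "known_exploits": 0},
    {"cve": "CVE-B", "cvss": 7.5, "known_exploits": 3},
    {"cve": "CVE-C", "cvss": 5.0, "known_exploits": 1},
]
order = [f["cve"] for f in triage(findings)]
```

Note how a critical-severity finding with no known exploit still ranks below a moderate one that attackers are actively exploiting.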
Known exploits are documented attacks that can be launched against the vulnerability. They are also commonly packaged in attack 'kits', making them extremely easy to use. Always address the known exploits first—this advice cannot be stressed enough. To use an office building analogy, this is closing and locking the doors and windows.
2. Endpoint Privilege Management
The next three entries—Endpoint Privilege Management, Privileged Password Management, and Secure Remote Access—are three core privileged access management (PAM) solution areas and can be deployed independently, or as a combined solution, depending on the vendor platform. Let's start with Endpoint Privilege Management. We've locked the doors and windows, but some people have keys, bunches of keys, or even all-access passes. If an attacker can find a user with direct access to privilege, they won't need a vulnerability. Removing direct privileged access from users is the second essential piece of securing machine identities. This is achieved through Endpoint Privilege Management tools, which offer the ability to elevate privilege for specific applications and processes at run-time via tightly controlled policies. The privilege is granted to the process, not the user, and is the least privilege needed to allow the application or process to run appropriately, further reducing the risk introduced into the environment. The ability to require multi-factor authentication prior to elevation further constrains the opportunity for privilege misuse, without adding significant friction to the user's experience. After just these first two foundational elements of a successful cybersecurity approach, our attacker is left as an unprivileged user with no unlocked access points. A significant step forward.
3. Privileged Password Management
The next area the attacker will typically look to exploit is standard accounts and shared privileged accounts.
These include default superuser accounts, and the support team accounts that exist to manage and support the environment. A standard user who occasionally logs into a remote system using a privileged account offers the perfect opportunity for an attacker to harvest those credentials and move on to another system – one potentially holding critical data or a critical machine account that will deliver value. Privileged Password Management (PPM) enables you to automatically take control of privileged accounts for both humans and non-humans/machines, and to secure them in a system that controls the user's access to them. All embedded/hard-coded credentials should be replaced with code that uses API calls to the PPM solution. Privileged Password Management solutions periodically change the passwords associated with all types of privileged accounts – or even change them after every use of the most sensitive accounts. Password management practices such as this eliminate password re-use and defeat brute-force attacks, because the password changes frequently. An attacker is unlikely to try to access the target while the user is active, as that will normally kick them off and raise suspicion. They will wait until the user completes their session, only to find the credentials have changed. Access into the privileged password management solution should itself be secured using MFA. One of the benefits of being human is that we can use something we 'have', like a mobile phone, as a secondary authentication mechanism that an attacker cannot access. Automatic log-out on inactivity ensures we do not leave that valuable avenue open on a laptop or workstation left unattended.
4. Secure Remote Access
Don't forget about users outside of your network. Most organizations have many people working for third-party companies connecting directly to our networks to support elements of our environment using privileged accounts.
Use PAM/PIM to remove their direct access to privileged accounts, and Secure Remote Access tools to remove direct network connectivity into the environment. This can be extended to internal support teams as well. With no direct route to the target systems, we increase the level of difficulty for the attacker.
5. Simplify Security
The last point I want to emphasize regarding the fundamentals of protecting machine identities is to keep the model simple. Each of the elements mentioned above can be simple. It takes a little more thought, and the right solution or toolset, but it is absolutely worth it. Simple means easier to design, maintain, manage, update, and, most importantly, respond to when something bad is happening.
Important Next Steps for Machine Identity Security
Many machine accounts and identities will be effectively secured with the controls above. Storing privileged credentials, keys, and secrets away from the machines will reduce the risk from the theft of the machines themselves – whether physical or virtual. We can use aspects of the fundamental machine identity, such as IP address and/or certificates, to secure access to the privileged accounts that the machine needs. Changing the certificate, key, secret, or password, just as we have indicated for privileged identities above, prevents cached information from being exploitable through lateral movement across the network and/or additional privilege elevation. Just as with privileged user identities, privileged machine identities benefit from layered security. By not relying on a single control point, you also improve the ability to prevent unauthorized access. Access to the HSM is delivered through an identity managed by the privileged password management solution (or, in high-volume, highly automated environments, a DevOps tool). Access to the solution is controlled using aspects of the machine identity that are impossible to change or fake without significant privileged user access.
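The vaulting pattern described under Privileged Password Management, where an embedded credential is replaced with a runtime API call, can be sketched as follows. `VaultClient` and its methods are hypothetical stand-ins, not any real PAM vendor's API:

```python
# Sketch of replacing a hard-coded credential with a runtime vault lookup.
# VaultClient is a toy, in-memory stand-in for a PPM solution's API client.

class VaultClient:
    """A real PPM solution would authenticate the caller (MFA, machine
    identity), audit every checkout, and rotate secrets after use."""
    def __init__(self):
        self._secrets = {}

    def store(self, account, secret):
        self._secrets[account] = secret

    def checkout(self, account):
        # Real systems log this access and may change the password
        # as soon as the session ends, defeating harvested credentials.
        return self._secrets[account]

# Before: password = "s3cr3t!"  # hard-coded, visible to anyone with the file
# After: fetched at runtime, never shipped with the application.
vault = VaultClient()
vault.store("svc-backup", "s3cr3t!")
password = vault.checkout("svc-backup")
```

The point of the pattern is that the application's source and configuration no longer contain anything worth stealing; the secret exists only transiently, at the moment it is needed.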
The layers work together, even if not directly integrated, to deliver an opaque, and thus harder to penetrate, view to the unauthorized. As much as possible, you want to prevent machines from locally storing the identities used in their operation. The layers, each with simple requirements, quickly add up to a significant obstacle for an attacker. The harder it is for them to move forward, the greater the chance you will see events indicating their presence. And, with this simple model built on a strong foundation of security controls, you'll be able to respond with speed and confidence.
Related Reading on Machine Identity Management
Brian Chappell, Chief Security Strategist
Brian has more than 30 years of IT and cybersecurity experience in a career that has spanned system integrators, PC and software vendors, and high-tech multinationals. He has held senior roles in both the vendor and the enterprise space in companies such as Amstrad plc, BBC Television, GlaxoSmithKline, and BeyondTrust. At BeyondTrust, Brian has led Sales Engineering across EMEA and APAC, Product Management globally for Privileged Password Management, and now focuses on security strategy both internally and externally. Brian can also be found speaking at conferences, authoring articles and blog posts, and providing expert commentary for the world press.
There are two types of security attacks: passive and active. In an active attack, an attacker tries to modify the content of the messages; in a passive attack, an attacker observes the messages and copies them. The first type is the passive attack. A passive attack can monitor, observe, or make use of the system's data for certain purposes, but it has no impact on the system resources, and the data remains unchanged. Passive attacks are difficult for the victim to notice, as this sort of attack is conducted in secret. A passive attack aims to obtain data or to scan the network for open ports and vulnerabilities. An eavesdropping attack is considered a kind of passive attack. An eavesdropping attack steals data transmitted between two devices that are connected to the internet, and traffic analysis is included in eavesdropping. An eavesdropping attack happens when the attackers insert a software package into the network path to capture and later study the network traffic. The attackers must get into the network path between the endpoint and the UC system to capture the traffic; the more network paths there are, and the longer those paths are, the easier it is for the attacker to insert a software package into one of them. The release of message contents is another kind of passive attack. The attackers install a package on the device, using a virus or malware, to watch the device's activities, such as conversations, emails, or any transferred files that contain personal information, and they can use that information to compromise the device or network.
Other attacks have emerged with the exponential interconnection of insecure devices such as IoT infrastructure, including attacks that are protocol-specific as well as attacks based on wireless sensor networks. For example, in IoT-based smart-home systems, the communication protocol used may be RPL (Routing Protocol for Low-power and Lossy networks). This protocol is used because of its compatibility with resource-constrained IoT devices that cannot use traditional protocols. An active attack is a network exploit in which the attackers modify or alter content and affect the system resources, causing damage to the victims. The attackers may perform passive attacks to gather information before they begin performing an active attack. The attackers try to disrupt and break into the system, and the victims will generally become aware of an active attack. This sort of attack threatens both integrity and availability. An active attack is harder to perform than a passive attack. Denial-of-Service (DoS) attacks are one example of an active attack. A denial-of-service attack happens when the attackers take action to shut down a device or network, leaving the legitimate user unable to access that device or network. The attackers flood the target device or network with traffic until it stops responding or crashes. The services that can be affected include email, websites, and online banking accounts. DoS attacks can be performed from virtually any location. As mentioned above, a DoS attack involves flooding or crashing the device or network. The buffer overflow attack is one of the common DoS attacks: this sort of flooding attack sends more and more traffic to the network, exceeding the limit that a buffer can handle and eventually crashing the system. The ICMP flood, also called a ping flood, is another kind of flooding attack.
The attacker sends spoofed packets, flooding the target with ICMP echo requests. The network is forced to reply to all of these requests, which can leave the device inaccessible to normal traffic. The SYN flood is a further kind of flooding attack. The attackers keep generating SYN packets toward all of the ports of the server, usually from fake IP addresses. The server, unaware of the attack, replies with SYN-ACK packets, fails to serve its clients, and eventually crashes. Statistical approaches can be applied to develop detection techniques for attacks like the SYN flood; one such technique is a proposed SYN flood detection scheme based on a Bayes estimator for ad hoc mobile networks. Trojan horse attacks are another example of a network attack, the most common sort of which is the backdoor trojan. A backdoor trojan allows attackers who lack authorization to gain access to a computer system, network, or software application. For example, the attackers might hide malware behind a particular link: once a user clicks the link, a backdoor is downloaded onto the device, and the attackers then have basic access to it. Apart from that, a rootkit is another example of a trojan attack. A rootkit is usually used to gain hidden, privileged access to a system; it gives root access to the attackers, who can then manage the system without the user's knowledge. They can change any settings on the computer, access any files or photos, and monitor the user's activities. Some well-known rootkit examples are the Lane Davis and Steven Dake rootkit, NTRootKit, Machiavelli, Zeus, Stuxnet, and Flame. Flame, malware identified in 2012, is designed to attack the Windows OS and can record audio, take screenshots, and monitor network traffic.
A replay attack is another example of an active attack. The attackers snoop on a particular user before they begin performing the replay attack; then they send the victim an identical message from an authorized user, properly encrypted. Replay attacks allow the attacker to access the data and information stored on the compromised device. They can also gain financial profit, as they are able to duplicate the victim's transactions: the attackers can listen to the frames of the session and use the same information to perform the attack without limit on the number of repetitions. There is another, similar attack called the cut-and-paste attack. In a cut-and-paste attack, the attacker combines different ciphertext elements and sends them to the victim, then extracts the information they need and uses it to compromise the system. Cybersecurity is a big part of our lives today, and it is crucial to protect our devices from these malicious activities. Active and passive attacks are challenging issues for any organization. An Advanced Persistent Threat (APT) always chooses a passive attack first, to gain information about the infrastructure and the network, which can then be used to fabricate a targeted active attack against that infrastructure, one that is often hard to block or that can cause catastrophe for the organization.
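The statistical detection idea mentioned for SYN floods can be reduced to its simplest form: count half-open connections per source address and flag sources that exceed a threshold. The sketch below is a toy threshold detector, not the Bayes-estimator scheme referenced above; the event format and threshold are illustrative:

```python
from collections import Counter

# Toy SYN flood indicator: count SYNs that never completed the handshake
# (half-open connections) per source address, and flag heavy offenders.
# Real detectors use time windows and statistical models, not raw counts.

def flag_syn_flood_sources(events, threshold):
    """events: iterable of (src_ip, kind) where kind is 'syn' or 'ack'."""
    half_open = Counter()
    for src, kind in events:
        if kind == "syn":
            half_open[src] += 1
        elif kind == "ack" and half_open[src] > 0:
            half_open[src] -= 1          # handshake completed
    return {src for src, n in half_open.items() if n >= threshold}

# One source leaves 100 connections half-open; another completes its handshake.
events = [("10.0.0.5", "syn")] * 100 + [("10.0.0.9", "syn"), ("10.0.0.9", "ack")]
suspects = flag_syn_flood_sources(events, threshold=50)
```

Because SYN flood sources usually spoof their addresses, real defenses combine such counting with techniques like SYN cookies rather than simply blocking the apparent source.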
The digital world is becoming ever more visual. From webcams and drones to closed-circuit television and high-resolution satellites, the number of images created on a daily basis is increasing and in many cases, these images need to be processed in real- or near-real-time. This is a computationally-demanding task on multiple axes: both computation and memory. Single-machine environments often lack sufficient memory for processing large, high-resolution streams in real time. Multi-machine environments add communication and coordination overhead. Essentially, the issue is that hardware configurations are often optimized on a single axis. This could be computation (enhanced with accelerators like GPGPUs or coprocessors like the Intel Xeon Phi), memory, or storage bandwidth. Real-time processing of streaming video data requires both fast computation and large amounts of in-memory storage. Meeting both needs in a single box can be prohibitively expensive for all but the most well-funded operations. Where hardware cannot meet the need, software steps up. This is the premise of decades of distributed computing projects. At the Seventh International Conference on Computer Science and Information Technology held in Zurich, Switzerland last month, two researchers from the Department of Electrical Engineering at Korea University presented a paper outlining their software-based approach to solving this problem. Yoon-Ki Kim and Chang-Sung Jeong developed a distributed system based on Apache Kafka to process video streams in real time. Image frames are first published to Kafka by the camera or a remote receiver. Processing nodes subscribe to the Kafka broker, pulling data to process as capacity is available. This pull model is key to real-time processing, since it allows the processing nodes to keep themselves full instead of relying on communication back and forth with a push-based model. 
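Kim and Jeong's pull model can be simulated without a broker: in the sketch below, a thread-safe in-memory queue stands in for the Kafka topic, and worker threads pull frames whenever they have capacity. This only illustrates the coordination pattern; the real system uses Kafka topics, consumer groups, and persistent log storage.

```python
import queue, threading

NUM_WORKERS = 4
frames = queue.Queue()   # stand-in for the Kafka topic

# Publish: the camera (or a remote receiver) pushes raw frames to the topic.
for i in range(20):
    frames.put(f"frame-{i}")
for _ in range(NUM_WORKERS):
    frames.put(None)             # one shutdown sentinel per worker

processed = []
lock = threading.Lock()

def worker():
    # Pull model: each node takes the next frame as soon as it has
    # capacity, instead of waiting for a scheduler to push work to it.
    while True:
        frame = frames.get()
        if frame is None:
            break
        result = frame.upper()   # stand-in for real image processing
        with lock:
            processed.append(result)

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(processed))  # 20: every frame handled exactly once
```

Adding workers increases throughput without any change to the producer, which is the property the paper exploits by scaling node count per topic.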
Breaking out multiple channels into their own Kafka topics improves throughput when accompanied by an increase in node count. “Although the GPU is well suited for high-speed processing of images, it still has limited memory capacity. Using Kafka to distributed environment allows overcoming of the memory capacity that cannot be accommodated by one node. In particular, since image data can be stored in the file system, it is advantageous to handle large-scale images without data loss.” Although the focus here is on computation and memory, disk performance plays a role as well. The Hadoop Image Processing Interface (HIPI) is a similar project using Hadoop MapReduce to distribute computation tasks. Kim and Jeong considered HIPI to be unsuitable for real-time processing in part due to the I/O patterns of the Hadoop Distributed File System (HDFS). HDFS uses a random-access pattern, which can lead to disk I/O being a bottleneck. Kafka avoids this issue by using sequential access. “Apache Kafka is a distributed messaging system for log processing. Kafka uses Zookeeper to organize several distributed nodes for storing data in real time stably. It also stores data in several messaging topics. It provides a messaging queue that allows nodes that need a message to access and process it.” Using a software system to efficiently parallelize the computation allows real-time processing of large image streams on “normal” high-performance computing hardware. This makes it a viable option for many use cases, from the frivolous addition of visual effects on social media and video chat applications, to serious applications like object recognition for defense. But as the problem was created by hardware, so might it be addressed by hardware in the future. Increasing the amount of memory per core (or GPU) or perhaps the future broad availability of processing-in-memory could address the constraints imposed by current hardware.
At least until the size and resolution of the images to be processed leapfrogs ahead again.
Tripwire Log Center A decade or more ago, logs of events recorded by firewalls, intrusion detection systems and other network devices were considered more of a nuisance than a help. There were too many of them, they weren’t easily collected, and there was no easy way to make sense of which were important. When network administrators had log recording turned on, they were lost in a sea of data, and would have to sift through it all in an attempt at analyzing suspicious activities. Some organizations deployed early Security Information and Event Management (SIEM) systems to help filter out the noise. The problem, however, is that the industry and government auditors found a gap in what was collected. There was no way to capture the events that those early SIEM solutions weren’t aware of. The auditors said that everything needed to be captured and stored.
GSM, GPRS, 3G, 4G (LTE), and so on are built around signalling and state/entity functions. As a result of a request, an entity moves from one state to another in a well-defined sequence. For this to happen, timers must be used and supported. 3GPP specifies a lot of timers that are required for the system to work in a well-coordinated manner. Some examples of GSM timers:
- Call setup: T3120, T3126, T3101, T3230, T310, T11, T10, T313, T3107
- Call clearing: T13, T3111, T3109, T3110
- Handover: T3103, T8, T7, TQHO, T3121
- GPRS: T3182, T3164, T3198, T3180, T3190, T3192, T3312, T3314, T3321
- Miscellaneous: T3210, T3128, T3211, T3212, T3220
Look at the extensive signalling that GSM and its successors support. It works because of the timers defined for the supported messages. VBR/ Wallis Dudhnath
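The role these timers play can be illustrated with a toy model: an entity starts a guard timer when it issues a request and falls back to a safe state if the timer expires before a response arrives. The state names and timeout value below are illustrative only, not taken from the 3GPP specifications.

```python
class SignallingEntity:
    """Toy model of a timer-guarded state transition
    (the pattern behind guard timers such as the call-setup T3xxx timers)."""

    def __init__(self, timeout_s=5.0):
        self.state = "IDLE"
        self.timeout_s = timeout_s
        self.timer_started_at = None

    def send_request(self, now):
        self.state = "WAITING_FOR_RESPONSE"
        self.timer_started_at = now        # start the guard timer

    def on_response(self, now):
        if self.state == "WAITING_FOR_RESPONSE":
            self.timer_started_at = None   # stop the timer
            self.state = "CONNECTED"

    def tick(self, now):
        # Timer expiry: give up and return to a well-defined state,
        # which is exactly what the signalling timers guarantee.
        if (self.state == "WAITING_FOR_RESPONSE"
                and now - self.timer_started_at >= self.timeout_s):
            self.state = "IDLE"
            self.timer_started_at = None

entity = SignallingEntity(timeout_s=5.0)
entity.send_request(now=0.0)
entity.tick(now=4.0)          # timer still running
print(entity.state)           # WAITING_FOR_RESPONSE
entity.tick(now=5.0)          # timer expires: fall back to IDLE
print(entity.state)           # IDLE
```

Without such timers, a lost response would leave the entity stuck forever in an intermediate state, which is why every signalling message in these systems has a timer backing it.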
Mobile apps in both ecosystems (Apple and Google) are mostly insecure and could see their users lose valuable personal information if targeted. This is according to the Vulnerabilities and Threats in Mobile Applications 2019 report, recently issued by Positive Technologies. It claims that the most common vulnerability is insecure data storage, a flaw that was found in more than three quarters of mobile apps. This could allow hackers to steal passwords, as well as financial or personal data. Almost all (89 per cent) of the vulnerabilities could be exploited by malware, it was said. Even though the vulnerabilities are almost equally spread across both ecosystems, Apple's ecosystem appears to be slightly more secure. Leigh-Anne Galloway, Cyber Security Resilience Lead at Positive Technologies, said: "In 2018, mobile apps were downloaded onto user devices over 205 billion times. Developers pay painstaking attention to software design in order to give us a smooth and convenient experience and people gladly install mobile apps and provide personal information. However, an alarming number of apps are critically insecure, and far less developer attention is spent on solving that issue. Stealing data from a smartphone usually doesn’t even require physical access to the device. “We recommend that users take a close look when applications request access to phone functions or data. If you doubt that an application needs access to perform its job correctly, decline the request. Users can also protect themselves by being vigilant on not opening unknown links in SMS and chat apps, and not downloading apps from third party app stores. It's better to be safe than sorry." Image Credit: Syda Productions / Shutterstock
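To make the "insecure data storage" flaw concrete: the sketch below contrasts writing a password to disk in plaintext with storing only a salted PBKDF2 hash, using Python's standard library. This illustrates the general principle only; a real mobile app would normally rely on the platform keystore (Android Keystore, iOS Keychain) rather than hand-rolled storage, and the iteration count here is an arbitrary example.

```python
import hashlib, hmac, os

# Insecure pattern: anything that can read the file recovers the secret directly.
def store_insecure(path, password):
    with open(path, "w") as f:
        f.write(password)          # plaintext at rest: the reported flaw

# Safer pattern: store only a salted hash, so the file never contains the secret.
def store_hashed(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_hashed("hunter2")
print(verify("hunter2", salt, digest))   # True
print(verify("wrong", salt, digest))     # False
```

Data that must be recoverable (tokens, documents) needs encryption rather than hashing, but the same rule applies: the key must not sit next to the data.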
What is Application Modernisation? Application Modernisation is the practice of updating older software to newer computing approaches. It maximises the benefits of modern cloud environments and moves organisations away from out-of-date legacy systems. Applications developed on outdated platforms can be inflexible. Many of them are built as monoliths with their services and features packaged together. As these apps lack modularity, adding new functionality to one component can create regressions in others. Testing can quickly become complicated, maintenance is expensive and time-consuming, and scaling is inefficient and often leads to waste. Application Modernisation takes advantage of new infrastructure and tools like microservices, cloud, and DevOps, modernising systems and removing the burden of legacy applications. How does Application Modernisation work? Application Modernisation is made up of three different transformational processes:
- Going from monolithic or service-oriented architecture to microservices
- Moving from physical, on-premise infrastructure to cloud-based solutions
- Modernising IT workflows through DevOps
Each of these steps can be completed in isolation. However, organisations can run into trouble if they decide to stop after a single process. That’s because microservices, cloud and DevOps complement each other. For example, microservices run best in containers that allow them to scale dynamically – containerisation happens to be an important feature of cloud computing. Additionally, cloud and microservices are built for speed and resilience, making them ideal tools for agile IT teams looking to automate deployment and testing – much like in DevOps. When a business has completed all three transformational processes, it will be able to run its apps as microservices, in a cloud environment, and deploy new features using an efficient development pipeline – reducing costs and providing a better service for end users.
Here's a breakdown of each process: Monolithic to Microservices While monolithic applications are generally easy to develop, deploy, and manage, they are difficult to update and scale. This is an issue for businesses racing to meet changing customer demands in a cost-effective manner. The components within legacy apps all ship together. So, updates or faults in one component might impact another. As a result, deployments are more time-consuming and cumbersome, diverting resources away from other projects and slowing time to market. Scaling introduces similar issues. If even one component is facing load and performance challenges, it might be necessary to scale up the entire app – leading to considerable wasted compute and cost implications. The solution is to break monolithic applications up into a collection of small, loosely coupled microservices. With microservices, every application function is its own service that runs in a container. These containers communicate with each other through APIs. As each container and subsequent app is independent, the development team responsible for a particular component can choose a language and framework that works best for the task at hand. These independent components make it easier to commit smaller changes that make it through testing and deployment faster. It also means that if the component they’re working on fails, it won’t jeopardise the others – reducing the risk of downtime. When it comes to scaling, if the load on a particular service is higher than usual, additional capacity can be added quickly. When demand subsides, capacity can be scaled back to normal. Application Modernisation requires organisations to transition their applications, processes, and data management to a cloud-first environment. The main benefits of cloud migration include greater agility, scalability, and access to more compute resources. However, moving to cloud is just the first step. 
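The monolith-to-microservices split described above can be illustrated with a minimal sketch: one application function running as its own service behind an HTTP API, callable by any other service. The service name, endpoint, and response fields are invented for illustration, and a production microservice would typically run in a container behind a proper framework; this version uses only Python's standard library.

```python
import json, threading, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# One tiny service = one app function behind an HTTP API,
# deployable and scalable independently of every other component.
class PricingService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/price/"):
            item = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"item": item, "price_gbp": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PricingService)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or an API gateway) calls it over HTTP:
url = f"http://127.0.0.1:{server.server_port}/price/widget"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
print(data)   # {'item': 'widget', 'price_gbp': 9.99}

server.shutdown()
```

Because the only contract between services is the API, this pricing component could be rewritten in another language or scaled to many instances without touching its callers.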
Adopters need to maximise the value of cloud services by capitalising on the innovation potential they offer. It’s easier to realise that value with cloud-native apps, whereas it can be a bit trickier with legacy applications. As the functionality of applications increases, so does the complexity of development. IT teams that were once only concerned with building, testing, and deploying now must expand their expertise to security, data, and project management. However, more cooks in the kitchen can disrupt workflows, leading to specialisation silos, bottlenecks, delays, and waste. DevOps promotes collaboration across IT teams. It makes the department more efficient by streamlining the process of code deployment. When done properly, work moves smoothly between teams enabling businesses to design and launch high-quality apps at speed. DevOps transformation aims to build a delivery pipeline that encourages teams to communicate and operate more effectively.

| Action | Result |
| --- | --- |
| Smaller updates | Rapid deployment of apps |
| Automated testing & deployment | Higher-quality products |
| Greater collaboration | More responsive to customers & market conditions |
| Lean mentality | Easier to deliver business value |

What are the benefits of Application Modernisation? The major benefits of Application Modernisation are improved organisational performance, accelerated time to market, and enhanced customer experiences - but that’s not all. Modernising your apps will also improve security, productivity, and reduce the total cost of ownership (TCO). Security: Organisations can rely on the robust security services offered by cloud vendors. The security capabilities of providers such as AWS and Azure are industry-leading and will safeguard workloads. Total cost of ownership (TCO): Legacy apps that struggle to meet user demands require a lot of maintenance. So do the on-premise servers they’re hosted on.
Modernising architecture to microservices and infrastructure to cloud can reduce costs and prevent organisations from paying for under-utilised servers and databases. Adaptability: Compared to legacy systems, additional functionality can be added to modernised apps far faster and more easily, allowing businesses to respond quickly to market demands. Improving productivity: Microservice architecture allows independent development, scaling, and automation of each application component. When this flexibility is combined with DevOps, IT teams can communicate and operate more effectively. Time to market: Decoupled microservices and high-performing IT teams make it easier to commit smaller app changes that make it through testing and deployment faster, vastly improving time to market and customer experience. Scale: Unlike monolithic apps, additional capacity can be added quickly when the load on a particular component is high. When demand returns to normal, so does capacity. Sustainability: App modernisation can help you reduce the carbon footprint of your IT infrastructure since services are only spun up when they’re in use and scaled back when demand subsides. Strategies for modernising applications The first step in modernisation planning is determining the return on investment. Organisations typically begin with applications that have the highest business value and are the easiest to modernise.
There are several basic approaches to modernisation; the right method largely depends on your business objectives:

| Approach | Description |
| --- | --- |
| Encapsulate | The existing functionalities of the app are broken down into microservices and provided to the end user as services via an API. |
| Rehost (lift & shift) | The existing application is lifted from one infrastructure environment and rehosted in a more powerful one, without any redesign or restructuring of the architecture. |
| Replatform | The core application is not restructured, but the overall functionalities and user experience are optimised. |
| Refactor | Restructure and optimise the existing code (although not its external behaviour) to remove technical debt and improve non-functional attributes. |
| Rearchitect | A complete overhaul of the existing application framework, re-imagining the way architecture is conceptualised and developed. Cloud-native tools and software are often deployed in this method, providing more scalability, agility, and enhanced capabilities. |
| Rebuild | Rewriting and re-developing the entire app in a way that preserves the original scope and specifications. |
| Replace | Switch the existing app with a new one that has brand-new functionalities, structure, and scope. |

*Markets and Markets
Muscles and bones help move the air you inhale into and out of your lungs. Some of the bones and muscles in the respiratory system include your: Diaphragm: Muscle that helps your lungs pull in air and push it out. Ribs: Bones that surround and protect your lungs and heart. How does the skeleton protect the lungs? Vertebrae surround and protect the spinal cord and bones of the rib cage help protect the heart and lungs of the thorax. Bones work together with muscles as simple mechanical lever systems to produce body movement. What part of the skeletal system protects the lungs? The bones of the chest — namely the rib cage and spine — protect vital organs from injury, and also provide structural support for the body. The rib cage is one of the body’s best defenses against injury from impact. Flexible yet strong, the rib cage protects major vital organs such as the heart, lungs, and liver. Does the skeleton help with breathing? The skeletal system is responsible for supporting the body and helping it to move, as well as providing attachment points for muscles and ligaments and protection for certain organs such as the brain. The human respiratory system includes the organs that are used for breathing, such as the nose, throat and lungs. How does the skeletal system work together with the circulatory and respiratory systems? Your circulatory system delivers oxygen-rich blood to your bones. Meanwhile, your bones are busy making new blood cells. Working together, these systems maintain internal stability and balance, otherwise known as homeostasis. How does the skeleton protect? Protection – the bones of the skeleton protect the internal organs and reduce the risk of injury on impact. For example, the cranium protects the brain, the ribs offer protection to the heart and lungs, the vertebrae protect the spinal cord and the pelvis offers protection to the sensitive reproductive organs. How do bones protect the body? 
Although they’re very light, bones are strong enough to support our entire weight. Bones also protect the body’s organs. The skull protects the brain and forms the shape of the face. The spinal cord, a pathway for messages between the brain and the body, is protected by the backbone, or spinal column. What does the skeletal system protect? Protects and supports organs: Your skull shields your brain, your ribs protect your heart and lungs, and your backbone protects your spine. Stores minerals: Bones hold your body’s supply of minerals like calcium and vitamin D. How does the skeletal system work with the immune system? The regulation of bone by hematopoietic and immune cells. The immune system produces cytokines which are involved in the regulation of bone homeostasis. Cells related to osteoblasts, a type of bone cell, have been shown to regulate the production of blood cells – which are primary components of the immune system. How does the musculoskeletal system work with the respiratory system? Explanation: It assists in breathing by pulling and pushing the lungs up and down to expand or contract. For your muscles to function they need oxygen, which is received from the respiratory system. Working muscles produce gaseous wastes which are carried by the blood back to the respiratory system and expelled. How does the skeleton protect against infection? Movement – muscles are attached to bones, which are jointed. When the muscles contract the bones move. Blood production – red blood cells (to carry oxygen) and white blood cells (to protect against infection) are produced in the bone marrow of some bones. Does the skeletal system need oxygen? Like the other organs in our body, many bone cells group together to form the bone tissue. Thus, all bone tissue is living tissue that needs food and oxygen. The nutrients allow the bone tissue to break down old tissue and regrow new tissue. What bones are used for protection?
The function of flat bones is to protect internal organs such as the brain, heart, and pelvic organs. Flat bones are somewhat flattened, and can provide protection, like a shield; flat bones can also provide large areas of attachment for muscles. How does the diaphragm help us breathe? Upon inhalation, the diaphragm contracts and flattens and the chest cavity enlarges. This contraction creates a vacuum, which pulls air into the lungs. Upon exhalation, the diaphragm relaxes and returns to its domelike shape, and air is forced out of the lungs. What does the skeletal system create that benefits the circulatory and respiratory systems? The skeletal system is responsible for protection, shape, and support. For example the skeletal system protects all some of your most vital organs, such as the brain, lungs, and heart. The skeletal system creates red blood cells which the circulatory system transports. What system brings air into the lungs? Lung Health & Diseases Your lungs are part of the respiratory system, a group of organs and tissues that work together to help you breathe. The respiratory system’s main job is to move fresh air into your body while removing waste gases.
Are you an (Ethical) Hacker? | Cyber Security Cyber security is a growing industry as more and more businesses are waking up to the reality of their responsibility to ensure that their clients’ information is kept safe and secure. To be a successful ethical hacker requires you to be able to think like one. It is the role of the ethical hacker to attempt to break into the company network and access their personal data, which is locked away carefully behind as many levels of security as necessary. Companies need to ensure that their defences can keep malicious cyber criminals out. This is not an easy task when increasingly sophisticated technology is constantly being developed. Hackers will need to stay ahead of the game so as to retain their clients’ trust whilst maintaining their own credibility in protecting sensitive information in this fast-paced race. Hackers most commonly simulate an attack against a network to discover weaknesses in an organisation’s security posture and ensure its security team is battle-tested. Links with the CyberEPQ Modules:
- 3 & 4: Vulnerability Assessment and Pen Testing
- 5: Information Security Vulnerability Concepts
What do (Ethical) Hackers do? Ethical hackers simulate how a cyber-criminal may attack the system, helping the entire organisation become more secure against future attacks while ensuring that everything an attacker may think of is covered. Ethical hackers attempt to breach an organisation’s systems in a controlled manner in what are known as red team/blue team exercises, in which each team is pitted against the other. The objectives may include... Read more
The increase in the yearning for a healthy lifestyle, development of information communication technology (ICT), and general awareness of individuals is accelerating the evolution of the technical and medical field. In addition, technological advances have pushed the capabilities of the medical field; for example, paper medical records are replaced by electronic medical records systems, robotics is aiding in surgeries, artificial intelligence systems augment clinical decision-making, and the miniaturization of medical devices has made them compact, less invasive and sophisticated. Role of USB in Telehealth and Telemedicine: The progressive next-gen medical environment provides a platform for customized medical services using Telehealth and Telemedicine that support the collection, transmission, processing, diagnosis, and storage of medical information without space-time restrictions, which aids prevention and early diagnosis of diseases. Telehealth and Telemedicine are the indispensable medical care systems of the future that enable the medical industry to cope with the skyrocketing healthcare cost, insufficient medical workforce, and provision of remote medical services. The necessary appliances for telehealth and telemedicine include an electronic health recorder, sensory devices for gathering physiological information relevant to the particular illness, and optimized communication devices between the patients and doctors. USB has been influential in communication in telehealth and telemedicine. The Continua Health Alliance approved using USB in Continua certified telehealth systems due to its fast data transfer rate and storage capacity. USB facilitates integrated and comprehensive sharing of medical information, including information from doctors such as diagnosis and test results, transferring patients’ symptoms and medication use to the specialist, and prescriptions from pharmacies and insurance companies.
USB acts as a tool that a patient can use to collect, track and share past and current information about their health or the health of someone in their care. Keeping a record of routine medical tests on the USB assists the medical personnel in retrieving vital patient information and saves the patient from spending money and the inconvenience of repeating medical tests. Why undertake ROM USB for the Medical industry? The Health Insurance Portability and Accountability Act (HIPAA) necessitates all medical organizations handling patients’ medical records to undertake measures to ensure the integrity and confidentiality of stored information. The ROM USB ensures secure transmission of medical data to healthcare professionals or vice versa. A built-in module is used for data transmission, and the saved medical data is encrypted. In addition, the password-protected authentication process of ROM USB guarantees the confidentiality and integrity of stored medical information for telehealth and telemedicine systems. Telehealthcare technology solutions company Medtronic has a data sharing platform, CareLink, which connects to all MiniMed series insulin pumps, compatible blood glucose meters, and Guardian continuous blood glucose monitors. Data is uploaded from the pump to CareLink via the CareLink USB. Werfen is another company that provides patient care using acute care diagnostics, which delivers fast, actionable medical results in minutes, informing patient-management decisions and helping improve workflow. Use of ROM USB in Medical facilities: Hospitals and medical centers use ROM USB to store medical information. The advantage of ROM USB lies in the fact that it connects hospitals and patients while minimizing the dangers of information leakage by keeping the data secured and supporting sustained health management by acting as a convenient plug-and-play viewer.
The medical facilities use different data acquisition peripherals, such as blood pressure meters, glucometers, pulse oximeters, digital scales, x-ray machines, CT scan machines, ultrasound machines, etc., to take patients’ vitals. Once the data is gathered using one of these acquisition devices, it is transmitted using a ROM USB. ROM USB has become so prevalent in the medical sector because it has become a communication standard, fulfilling the criteria approved by the Continua Health Alliance, a consortium of more than 200 member companies from the technology and medical device markets. Interoperability of health-related devices and sharing of medical information between patient and doctor, individual and fitness coach, or older person and a remote caregiver is expedited using a ROM USB. It empowers individuals to manage their health information by themselves anytime, anywhere, online or offline, by providing a tool for portable individual-centered lifetime medical information management. ROM USB extends high-speed data transfer rates, speeds up the medical information sharing process, allows the making of backup copies of medical data, and ensures operability across multiple devices. The massive storage capabilities of ROM USB assist in storing the high-definition X-ray, MRI, and CT-scan records of patients for assessment, decreasing the costs for such transfers. In addition, ROM USB permits physicians and patients to have immediate access to medical records, mitigating the hassle of online network file transfer mechanisms. ROM USB - A medium to store medical user manuals and software updates: Medical facilities employ ROM USB to store the software updates for medical devices. The ROM USB stores the necessary continuous software updates for medical diagnostic machines such as medical imaging or X-ray machines. The authorized person can use the password-protected ROM USB with software updates to update the device.
Unauthorized users who don’t have the password cannot use the ROM USB for software updates. The ROM USB can also be used for storing user manuals of various medical devices, guiding the necessary steps to be followed for using a device. ROM USB helps in the portability of user manuals of medical devices, making them readily available at the time of need. FLEXXON aids the Medical Sector with its versatile ROM USB storage solution: The medical industry is obligated to safeguard pivotal medical records. Flexxon understands the gravity of ensuring medical data’s integrity, confidentiality, and reliability, so it puts forward a portable, inviolable, tamper-proof, and intelligent ROM USB storage solution for storing, transferring, and sharing crucial medical information. Flexxon’s ROM USB, with its compact and compatible design, is one of the most precise and secure mechanisms for sharing medical data. A flexible workflow for the medical industry and patients is ensured by utilizing Flexxon’s ROM USB with the freedom of enabling or disabling the Read-Only mode, which permits or prohibits alterations in the medical records as required. Flexxon’s ROM USB is a versatile medical information storage alternative that can maintain the authenticity of data or authorize desired modifications in the records. Furthermore, the advanced security function of Flexxon’s ROM USB defends the sanctity of stored sensitive medical records by offering encrypted access. Features of Flexxon’s ROM USB beneficial for the Medical Sector: Flexxon’s ROM USB finds its utilization in telehealth, telemedicine, medical facilities, and medical devices for software updates and user manual storage. Flexxon’s ROM USB is a sleek physical storage device available in storage capacities of 8GB, 16GB, and 32GB. Thanks to its quick data read and write speeds, it is perfect for storing high-definition medical records and large files.
It is lightweight and durable and can handle significant workloads of medical data, so crucial medical data remains secure with Flexxon's ROM USB. Significant features of Flexxon's ROM USB include the following:
- Read-Only Mode: once Read-Only mode is switched on, the ROM USB prohibits altering or deleting the stored medical data, keeping it safe.
- Safeguard Option: this feature guards the stored medical data by enabling or disabling the Read-Only option, permitting modification of the data only when the mode is disabled.
- Security Function: the password-protection feature of the ROM USB protects the integrity of stored medical data.
Flexxon's ground-breaking ROM USB is an intelligent, portable, tamper-proof, and high-speed storage solution that safeguards sensitive medical information while allowing modifications as required. The password-protected access of Flexxon's ROM USB defends confidential medical data with robust encryption. Furthermore, the ROM USB's massive storage capacity accommodates high-definition medical records, and the data can be shared with a doctor or a healthcare monitoring system. As a result, Flexxon's ROM USB has emerged as a dependable medical data storage device, serving various medical fields and providing immediate access to patient records.
https://www.flexxon.com/rom-usb-a-facilitator-of-the-medical-sector/
The evolution of supercomputing over the last several decades has dramatically impacted how scientists, researchers and technologists tackle global issues. And today it's never been more important to model our world using supercomputers because of the profound challenges we face globally in the areas of energy, climate change and health. Simulating how our planet works on a granular level and how our bodies function at the cellular level requires the most powerful analytical and computational tools available. Supercomputers are that tool. Can Wind Power Fuel the Entire World? Wind power is an unbeatable renewable energy source that, if harnessed effectively, could meet all of the world's energy demands. In fact, according to GE, based on wind conditions at a typical site in the German North Sea, one turbine could produce as much as 67 gigawatt hours (GWh) annually, enough power for as many as 16,000 European households. That's why Dr. Lawrence Cheung and the team from GE Global Research are working with a supercomputer to understand the elements critical to the development and design of wind turbines and wind farms. Wind is invisible to the naked eye, and wind motions vary in scale as their movements change regularly: hourly, daily and seasonally. This makes wind extremely difficult to measure and track, and traditional methods such as radar and wind measurement devices don't provide an adequately comprehensive picture of what's going on around an entire array of wind turbines. Using a supercomputer, Dr. Cheung and the GE team are running predictive simulations of actual wind farms in just a couple of weeks. They've been successful in measuring and reproducing what they've seen in the atmosphere and what's happening in the wind field around the turbines. This data is now being used to build new, more efficient turbines. With supercomputing capabilities, Cheung says, "We're getting things that can't ever be reasonably measured in the field.
We're getting information that would be impossible to measure by other means." Earthquake Simulations Help Cities Plan for Safer Infrastructure: While science can't yet predict earthquakes, the power of supercomputing is helping to work on the next best thing: predicting the effects on infrastructure so cities like San Francisco and Mexico City can plan their buildings, roads and utilities for when the "Big Ones" strike. Specifically, the team of scientists at the Southern California Earthquake Center (SCEC) is using a supercomputer to process enormous calculations and obtain seismic intelligence on how the ground would move during an earthquake. SCEC recently ran a simulation that generated two sets of seismic maps expanding from the original Los Angeles basin area into central California and covering 438 sites, including public utility stations, historic sites and key cities. Ground motion is notoriously difficult to model, and achieving an accurate view of ground motion requires more than standard practical techniques. CyberShake, one of SCEC's major research efforts, has taken those standard techniques and built on them, using advanced computing, integrating advanced physics, and aggregating vast amounts of data and wave motions to produce complete and accurate earthquake models. For example, SCEC's supercomputer used for earthquake simulations has almost 300,000 CPUs and 19,000 GPUs. This means it can solve trillions of equations in a matter of moments, faster than you can scroll down to the bottom of this article. Advanced Weather Forecasts Save Lives: Every year, extreme weather conditions present a threat to lives and livelihoods around the globe. Advanced weather forecasting requires an incredible amount of compute power, something the Met Office — the U.K.'s national weather service — knows all too well. The Met Office is estimated to save as many as 74 lives and £260 million a year.
These life-saving forecasts, however, demand immense compute power and, most importantly, forecasting accuracy. Without accuracy, the public’s attention to severe weather warnings would wane. The Met Office uses more than 10 million daily weather observations in the UK alone, an advanced atmospheric model and a supercomputer to create 3,000 tailored forecasts each day. The weather center is able to turn out such a vast number of forecasts because its supercomputer is capable of processing 16,000 trillion calculations each second. These accurate forecasts help save lives every day. The Met Office’s supercomputer allows it to take in 215 billion weather observations from all over the world every day, which it then uses as a starting point for running an atmospheric model containing more than one million lines of code. In addition, it’s expected that the supercomputer will enable £2 billion of socio-economic benefits across the UK through enhanced resilience to severe weather and related hazards. Getting Oil to the Pump Faster The oil and gas industry is in an aggressive race toward efficiency. Studies show that energy consumption is on pace to double by 2040, and as a result, the demand for oil and gas is soaring. At the same time, new oil and gas reserves are becoming harder to locate and the industry as a whole is encountering cutbacks. Exploration and production (E&P) companies are faced with a stark reality – finding oil and gas resources is no guessing game. PGS, a marine seismic company that creates high-resolution 3D seismic images for E&P companies, is flipping this reality on its head with the immense computing power of a supercomputer to produce increasingly accurate and clear images that create a better, faster and smarter game for E&Ps. In 2014, PGS conducted a survey of the resource-rich area of the Gulf of Mexico that covered 10,000 square miles and took almost two years to plan, and almost a year to conduct. 
This was the largest-ever seismic survey to process and required supercomputing technology to crunch the data in the shortest time possible. The survey resulted in a 3D image with a pixel size of 30x30x10 meters that covered an area about 5 times the size of Texas, to a depth of 16 kilometers. Discovering Bone Fracture Treatments in One-Quarter of the Time: Treating bone fractures is actually an uncertain science. You can have two patients with exactly the "same" situation, but one implant may work and one may fail. Dr. Ralf Schneider from the High Performance Computing Center Stuttgart (HLRS) is trying to figure out why that is. To predict how a specific patient's bones would heal, Dr. Schneider is using HLRS's supercomputer to conduct micromechanical simulations of bone tissue. But achieving accurate calculations begins with having the correct material data. Dr. Schneider uses samples from patients of different ages and genders to build a representative database. "You have to calculate the local strain within the bone in the correct way," says Schneider. "If you don't have the right elasticity, you'll formulate the strain incorrectly, which will lead to an incorrect estimation of bone remodeling" and an incorrect calculation of the risk of implant failure. While micromechanical simulations aren't large in size, resolving each tissue sample requires 120,000 individual simulations. With a supercomputer, Dr. Schneider can accomplish these simulations in a day. Before, when he was using a PC cluster, one simulation required about four days to complete. Now, because the simulations are completed more quickly, Dr. Schneider has time to run more simulations and therefore has more confidence in the results. As the above examples demonstrate, supercomputing plays an important role in finding answers to issues that will ultimately better our lives. And it has played that key role for decades already.
With the coming emergence of exascale supercomputing systems that are even more powerful, we'll all get to see what these next-generation systems can accomplish, guided by the brightest minds of today and tomorrow. Fred Kohout, Senior Vice President Products and Chief Marketing Officer at Cray Inc. Image Credit: Scanrail1 / Shutterstock
https://www.itproportal.com/features/five-ways-supercomputing-is-solving-some-of-the-worlds-greatest-challenges/
Certificate key size is immensely important for the encryption that secures all types of network traffic and communications today. The safety of encrypted connections depends heavily on the key size used: keys of sufficient length provide better protection because they are more challenging to break. Below you can find an overview of the basics of certificate key size and how to tune your security properties for the best protection of your server certificates and overall systems. In particular, the article looks at TLS/SSL certificates and the RSA and ECDSA key types. TLS Key Size Security Assessment CVSS Vector: AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N How TLS/SSL Encryption Certificates Work: Certificates for encryption protocols such as Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are the building blocks of secure online interactions. They protect our credentials, such as usernames and passwords, financial data, and credit card numbers, along with other personal and sensitive details transmitted online. In 2014, the Internet Architecture Board (IAB) called for encryption across all internet traffic, intending to protect both personal and business-sensitive data and digital identities. TLS is the most prevalent cryptographic protocol used on the internet, securing web browsing. It is also employed for file transfers, email, messaging, VoIP, and DNS, among other uses. TLS is an evolved version of the SSL protocol, which was created in the early 1990s. TLS uses a mixture of symmetric and asymmetric cryptography to deliver the best available combination of security, performance, and efficiency. A symmetric encryption algorithm encrypts and decrypts data using a secret key that both parties know. The asymmetric approach uses a key pair in which the private key, known only to its owner, is used for decryption.
These certificates encrypt data to protect transfers and communication between parties from unwanted interference and eavesdropping. In particular, TLS encrypts the application layer for the HTTP, FTP, SMTP, and IMAP protocols. It can also be applied to UDP, DCCP, and SCTP. The encryption process relies on keys managed through a public key infrastructure (PKI). Websites with SSL/TLS certificates use two unique keys, a private key and a public key, following the asymmetric cryptography approach, which is considered to yield more robust algorithms. The public key is used for encryption and is publicly available; it can easily be inspected in the browser. The private key, by contrast, is secret and known only to the digital asset owner. It is used to decrypt session content encrypted with the public key. Because asymmetric cryptography offers a higher level of security, TLS uses this mode for creating and transmitting a session key. Different TLS/SSL Key Types and the Importance of Their Size: Certificate key size is of particular significance for the level of protection that the TLS protocol can provide. But first, let's review the most widely used TLS/SSL key types. There is a wide variety of key exchange approaches, including Diffie-Hellman Key Exchange (DH), Ephemeral Diffie-Hellman Key Exchange (DHE), Elliptic Curve Diffie-Hellman (ECDH), and Ephemeral Elliptic Curve Diffie-Hellman (ECDHE). However, there are two main key types: RSA (named after its three inventors) and ECDSA, the Elliptic Curve Digital Signature Algorithm. RSA and ECDSA are currently the best-known public key signing algorithms. The recognized industry-standard sizes are a 2048-bit RSA key with SHA-256 or a 256-bit ECDSA key with SHA-256 on the P-256 curve. The RSA key type, also referred to as a public-key cryptosystem, is more prevalent for securing data transmission.
Certificate Authorities (CAs) have set the industry standard at a minimum of 2,048 bits. RSA was invented in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman. The ECDSA key type is not as widely used but has its benefits and is being adopted by more and more organizations. For example, it performs TLS/SSL signing and handshakes faster than RSA. An ECDSA key obtains the same strength as an RSA key at a much smaller size, with 256 bits being the industry standard. The approximate equivalences are the following:
- 1024-bit RSA corresponds to a 160-bit ECDSA key
- 2048 (minimum for RSA) to 224
- 3072 to 256 (minimum for ECDSA)
- 7680 to 384
- 15360 to 512
This shows that key size is not the only factor to consider when estimating the security a certificate key provides. A shorter ECDSA key may offer the same level of protection as a longer RSA key, while requiring less computation power and time. Variations in the TLS/SSL Key Size: For RSA, the recommended industry minimum is 2048 bits. However, some organizations might opt to use 4096-bit RSA as an extra level of security. While this can be helpful in some ways and may keep you compliant with National Institute of Standards and Technology (NIST) recommendations on cryptographic algorithms for longer, the longer key is also computationally heavier. That's why it's preferable to stay with the current industry recommendation for faster performance and not increase the TLS/SSL key size from 2048 to 4096. To get a certificate with a larger key, you need to generate a new key. You can refer to our Configure Trusted Certificates guide for further information on that process. Don't forget that executing any and all security updates relevant to your systems is also necessary.
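The equivalences listed above can be captured in a small lookup table. The sketch below is illustrative only (the function names are ours, and the figures are the rough pairings quoted in this article, not an authoritative strength analysis):

```python
# Approximate RSA <-> ECDSA key-size equivalences from the list above.
# Illustrative only; consult NIST guidance for authoritative comparisons.
RSA_TO_ECDSA = {
    1024: 160,
    2048: 224,   # current industry minimum for RSA
    3072: 256,   # matches the current industry minimum for ECDSA
    7680: 384,
    15360: 512,
}

def ecdsa_equivalent(rsa_bits: int) -> int:
    """Return the ECDSA key length of roughly equal strength to an RSA key."""
    try:
        return RSA_TO_ECDSA[rsa_bits]
    except KeyError:
        raise ValueError(f"no tabulated equivalence for RSA-{rsa_bits}")

def meets_industry_minimum(key_type: str, bits: int) -> bool:
    """Check a key against the minimums cited in this article."""
    minimums = {"RSA": 2048, "ECDSA": 256}
    return bits >= minimums[key_type]

print(ecdsa_equivalent(2048))              # 224
print(meets_industry_minimum("RSA", 1024)) # False
```

A check like `meets_industry_minimum` makes it easy to flag legacy 1024-bit RSA keys during an audit before they become a compliance problem.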
Your Complete Cyber Security Plan with Crashtest Security: Ensuring your cyber safety can be a complicated process. However, it gets easier with Crashtest Security's Vulnerability Testing Software. Our platform allows you to run a security assessment of all your systems, so you can keep tabs on security risks, including the CRIME attack, FREAK attack, BEAST attack, and man-in-the-middle attack, among others. It protects you from the ever-increasing number and variety of cyber threats. What is a certificate key size? The term "certificate key size" refers to the length of the key used by a TLS/SSL certificate for data encryption. Why does key size matter in encryption? The size of a certificate key directly impacts the level of protection it can provide. Shorter keys (of the same key type) are usually easier for malevolent users to break (i.e., 1024 vs. 2048 bits). What is a 2048-bit RSA key? The 2,048-bit RSA key is the most widely used key type and size. How secure is the 1024-bit RSA key? The short answer: not secure enough. The current industry standard for RSA keys is 2048 bits.
https://crashtest-security.com/increase-tls-key-size/
An average of 1.385 million new, unique phishing sites are created each month, with a high of 2.3 million sites created in May. The data collected by Webroot shows today's phishing attacks are highly targeted, sophisticated, hard to detect, and difficult for users to avoid. The latest phishing sites employ realistic web pages that are hard to find using web crawlers, and they trick victims into providing personal and business information. [Chart: unique phishing URLs per month] Phishing attacks have grown at an unprecedented rate in 2017: Phishing continues to be one of the most common, widespread security threats faced by both businesses and consumers. It is the number one cause of breaches in the world, with an average of more than 46,000 new phishing sites created per day. The sheer volume of new sites makes phishing attacks difficult for businesses to defend against. Today's phishing attacks continue to be short-lived: The first half of 2017 highlights the continuing trend of very short-lived phishing sites, with the majority being online and active for only 4 to 8 hours. These short-lived sites are designed to evade detection by traditional anti-phishing strategies, such as block lists. Even if the lists are updated hourly, they are generally 3-5 days out of date by the time they're made available, by which time the sites in question may have already victimized users and disappeared. Attacks are increasingly sophisticated and more adept at fooling the victim: In the past, phishing attacks randomly targeted as many people as possible, in the hope that a substantial number would open an infected attachment or visit a malicious web page. Today's phishing is more sophisticated. Hackers do their research and use social engineering to uncover relevant personal information for individualized attacks. Phishing sites also hide behind benign domains and obfuscate true URLs, carrying more malignant payloads and fooling users with realistic impersonated websites.
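The arithmetic behind the block-list problem described above can be sketched in a few lines. The function name is ours, and the figures are the ones quoted here (sites live 4 to 8 hours; lists are effectively 3 to 5 days stale):

```python
# Sketch: can a block list ever flag a short-lived phishing site?
# A site is catchable only if it is still online when the list entry arrives.

def list_can_catch(site_lifetime_hours: float, list_lag_hours: float) -> bool:
    """True if the site is still up by the time the block list reflects it."""
    return site_lifetime_hours > list_lag_hours

# Typical 2017 figures from the Webroot data above:
site_lifetime = 8    # hours (upper end of the 4-8 hour range)
list_lag = 3 * 24    # hours (lower end of the 3-5 day staleness)

print(list_can_catch(site_lifetime, list_lag))  # False: the site is long gone
```

Even under the most favorable assumptions, the site disappears roughly two and a half days before the block-list entry could take effect, which is why real-time classification is needed instead.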
Mix of companies impersonated continues to evolve: Zero-day websites used for phishing may number in the millions each month, yet they tend to impersonate a small number of companies. Webroot categorized URLs by the type of website being impersonated and found that financial institutions and technology companies are the most phished categories. The top 10 companies being impersonated throughout the first six months of 2017 are:
- Google – 35%
- Chase – 15%
- Dropbox – 13%
- PayPal – 10%
- Facebook – 7%
- Apple – 6%
- Yahoo – 4%
- Wells Fargo – 4%
- Citi – 3%
- Adobe – 3%
https://www.helpnetsecurity.com/2017/09/22/46000-new-phishing-sites/
NTT DoCoMo is claiming a solution for one of the big problems that faces millimeter wave (mmWave) 5G -- transmitting the high-band signal through treated low-emissivity (low-E) glass. The operator tested a new window material which allows 28GHz signals to pass through, making it suitable for unobtrusive use in the windows of buildings and vehicles, DoCoMo claims. Millimeter wave 5G has amazing raw speed, delivering wireless downloads of 1 Gbit/s or more, but it has terrible coverage range (1,000 feet to 2,000 feet) and can't penetrate energy-saving low-E glass, or obstacles such as hedges and concrete walls. This means that early high-band 5G networks don't work inside buildings, a major failing for the newest high-band cellular standard. NTT DoCoMo has now tested a new window material that it calls a "prototype dynamic transparent metasurface." The material is manufactured by Japanese global glass manufacturer AGC, and can be used in the windows of buildings and cars. "The metasurface, an artificially engineered material, comprises a large number of sub-wavelength unit cells placed in a periodic arrangement on a two-dimensional surface covered with a glass substrate [underlying layer]," DoCoMo says in a statement. In the trial, 28GHz radio waves were beamed to measure penetration in two modes: full penetration, where the glass substrate and movable transparent surface were attached to each other; and full reflection, where the two were separated by more than 200 micrometers. DoCoMo says that the prototype material can be used with frequencies higher than the millimeter wave band (traditionally listed at 30GHz to 300GHz). This means it can be used in 6G communications that go up into the 120GHz and even the terahertz (THz) range. Why this matters: The prototype window material allows mmWave signals to pass through the new glass. This is technically important for 5G cellular. So far, DoCoMo has only tested this technology on a small piece of prototype glass.
The much larger issue is the cost of new and replacement glass that would be needed in millions of buildings around the world. It is way too early for DoCoMo to have any figures on how much it might cost to replace all existing windows with its new glass substrate, but it would doubtless be incredibly expensive! — Dan Jones, Mobile Editor, Light Reading
https://www.lightreading.com/mobile/5g/docomo-claims-it-can-smash-the-5g-glass-barrier/d/d-id/756914
Published May 22, 2013. How important is OSNR flatness in DWDM? To answer this question, you first have to distinguish two things: power flatness and OSNR flatness. Power flatness (sometimes called balancing channels) refers to the maximum power difference between channels. This value is important because it indicates whether all channels are amplified by roughly the same gain at each optical amplifier. If one channel is much more powerful than the others, the gain might be uneven. A typical figure for power flatness is a maximum of 2 dB. OSNR flatness is a different story. Even when the OSNR is very different from one channel to the next, the system will work just fine. For instance, if channel 1 has a 30 dB OSNR and channel 2 has a 20 dB OSNR, the system will work well as long as the power flatness is good. What matters is that each channel has an OSNR value above a specific threshold, which is usually greater than 15 to 18 dB depending on the system. Moreover, using the right OSNR measurement method is essential. The IEC method works for 10G without ROADMs. However, as soon as 40G noncoherent signals or ROADMs are introduced, an in-band OSNR method must be used. For more information on in-band OSNR, be sure to read the Optical Spectrum Analyzers in Next-Gen Networks white paper.
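The two checks described above can be illustrated with a short sketch. This is our own example (not from any EXFO tool), using the figures quoted in this post: a 2 dB power-flatness limit and an 18 dB per-channel OSNR floor:

```python
# Sketch: validate DWDM channel measurements.
# Power flatness: max power spread across channels <= 2 dB.
# OSNR: every channel individually above a system-dependent floor (15-18 dB);
# OSNR does NOT need to be flat across channels.

def check_power_flatness(powers_dbm, max_spread_db=2.0):
    """All channels should be amplified by roughly the same gain."""
    return max(powers_dbm) - min(powers_dbm) <= max_spread_db

def check_osnr(osnr_db, floor_db=18.0):
    """OSNR flatness does not matter; only the per-channel floor does."""
    return all(v >= floor_db for v in osnr_db)

powers = [-2.1, -1.4, -2.8, -1.9]   # dBm, spread = 1.4 dB -> flat enough
osnrs = [30.0, 20.0, 25.0, 19.2]    # dB, uneven but all above 18 dB

print(check_power_flatness(powers))  # True
print(check_osnr(osnrs))             # True
```

Note that the example OSNR values span a 10.8 dB range, yet the system passes: it is the floor, not the flatness, that counts.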
https://www.exfo.com/fr/ressources/blog/osnr-flatness-dwdm/
The Industrial Internet of Things (IIoT), a sub-category of the Internet of Things (IoT), has been transformative for many industries. A market sized at over $263 billion in 2021, IIoT encompasses sensor-embedded devices, cloud-based data, and interconnected machines which reduce downtime, improve performance, and lower costs. Manufacturing firms are leading the charge in IoT adoption; of the forecast 83 billion connected IoT devices by 2024, 70% of these are expected to be in the industrial sector. How does the manufacturing sector use IIoT?
1. Maintenance Prediction: As well as being able to pinpoint live issues, connected sensors can help manufacturers predict when a machine will likely break down. By gradually recognizing long-term patterns and identifying abnormal behavior earlier, predictive maintenance software helps limit downtime and improve safety.
2. Inventory Management: IIoT enables manufacturers to track the location of inventory items, their movements in the supply chain, and the volume of materials required for a specific manufacturing cycle. Finding equipment within inventories is so time-consuming that one manufacturer found itself saving $3 million per year on each of its production lines once location-tracking sensors were installed.
3. Quality Control: IIoT streamlines the quality control process with thermal and video sensors that can collect product data and test materials throughout different stages of the manufacturing cycle, catching and rectifying any flaws before the product reaches the market.
4. Worker Safety: IoT-enabled wearable devices can be used to monitor employees' health metrics while working on the factory floor. Collecting data on stress levels, heart rate, and fatigue can help business owners optimize the safety and wellbeing of their workers.
Cybersecurity and IIoT: Despite its clear benefits, IIoT has made manufacturing systems a perfect target for cybercriminals.
Hackers are better able to exploit the larger attack surface of these systems, with incidents targeting Operational Technology (OT) environments increasing by over 2,000% in 2021. And many security issues stem from gaps in protection such as exposed ports, inadequate authentication, and legacy applications that have become obsolete; 47% of attacks on manufacturing are caused by vulnerabilities that the organization had not or could not patch. These challenges have caused attacks like ransomware and server access to increase dramatically across the manufacturing industry. In 2021, manufacturing experienced more ransomware attacks than any other industry, overtaking financial services and insurance. In targeting the manufacturing industry, hackers aim to cause disruption throughout the supply chain, affecting partners and customers and pressuring the organization into paying the ransom. Even if victims refuse to pay a ransom, the financial impact can be crippling. Norsk Hydro, one of the world's largest producers of lightweight metals, was forced to halt production after falling victim to a ransomware attack, costing the organization $52 million in lost revenue. And these disruption-related costs are often passed down to consumers through supply and demand imbalance, for instance when wholesale meat prices spiked in 2021 after a ransomware attack on JBS Foods, the world's largest meat processing company. Securing IIoT for manufacturers: For manufacturers to secure their IIoT infrastructure and protect their client data, IP, and reputation within the industry, they need to understand the current and potential threats to their business. Centripetal's CleanINTERNET service works at massive scale and machine speed to aggregate over 3,500 cyber threat feeds, proactively shielding against 99% of attacks identified by the global threat intelligence community.
The Centripetal team then provides comprehensive findings on emerging threats via our team of threat analysts, enabling overburdened teams to focus on other business activities. By creating a Zero Trust environment, CleanINTERNET helps manufacturers comply with security standards such as the GDPR, ISO/IEC 27000, and ISO 15408. By operationalizing threat intelligence, CleanINTERNET provides manufacturers with customizable intelligence and superior protection against all known risks and zero-day threats.
https://staging.centripetal.ai/manufacturing/what-is-iiot-and-how-does-it-impact-cybersecurity-in-manufacturing/
The big data revolution is allowing large enterprises to leverage data to achieve competitive advantage and deliver products and services that would have been almost inconceivable just a few years ago. Governments and other noncommercial organizations are also using data assets to improve accountability, fine-tune service delivery, and (mostly) improve our communities. These capabilities do not come easily, however. While the new data stores and other software components are generally open source and incur little or no licensing costs, the architecture of the new stacks grows ever more complex, and this complexity is creating a barrier to adoption for more modestly sized organizations. Hadoop is arguably the most ubiquitous component in today's big data architecture. Hadoop's HDFS distributed file system remains the most economical form of storage for the massive amounts of fine-grained data underlying big data initiatives, and MapReduce remains the default batch processing paradigm. However, MapReduce is gradually being overtaken by more flexible and efficient approaches for complex distributed processing, including YARN-based approaches such as Tez. Meanwhile, HDFS is being supplemented, and in some cases even replaced, by the memory-based Spark framework. Spark runs MapReduce-style workloads many times faster than Hadoop, and provides higher-level programming abstractions than Hadoop MapReduce. At even higher levels of programming abstraction, there are many options for SQL processing, such as Hive, Impala, and SparkSQL, as well as non-SQL data processing languages such as Pig. Big data applications rarely work in complete isolation from more traditional data sources, and open source tools such as Flume, Sqoop, and Kafka provide options for ingesting data from files or relational databases, or for efficient processing of data streams. Apache Oozie provides a mechanism for tying these various participants into higher-level workflows.
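To make the MapReduce paradigm mentioned above concrete, here is a minimal single-process sketch of its map, shuffle, and reduce phases in plain Python. Real Hadoop or Spark jobs distribute these same steps across a cluster; the function names here are ours:

```python
from collections import defaultdict

# Minimal in-process sketch of the MapReduce word-count pattern.

def map_phase(lines):
    """Emit (key, 1) pairs, one per word."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    """Group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts for each key."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["Hadoop and Spark", "Spark runs MapReduce-style workloads"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["spark"])  # 2
```

The appeal of Spark's higher-level abstractions is that this whole pipeline collapses to roughly one line of RDD or DataFrame code, while the framework handles distribution and fault tolerance.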
Complex applications built on these foundations often need to manage datasets requiring more real-time updates or more predictable structure, so, you often see operational databases alongside Hadoop or Spark. These can be nonrelational systems such as MongoDB or Cassandra, but open source relational systems such as MySQL and PostgreSQL are commonplace. Finally, for our purposes, a big data project needs machine-learning and statistical analysis capabilities. Open source statistical and machine-learning frameworks such as R, Spark’s MLlib, RapidMiner, Weka, and Mahout provide a diverse set of options for deriving meaning from the data. The above represent the most common open source options for a modern big data project. In addition, virtually all major software and hardware vendors attempt to offer consolidated big data solutions, usually leveraging many of these technologies. For instance, Oracle, Microsoft, and IBM all offer solutions that combine Hadoop, Spark, R, and the other key open source components. Enterprise adopters are gradually shifting from reluctantly accepting open source software to actively demanding open source. Open source solutions—even when provided by a commercial vendor such as Cloudera—help prevent vendor lock-in and inhibit overall software licensing costs. However, the operational cost of implementing the open source solution can be orders of magnitude higher than that of an integrated commercial stack. The pioneers of big data and leading application vendors exploiting these new technologies generally hire the best and brightest to overcome the inherent complexities involved in integrating these sometimes disparate technologies. It’s not uncommon for them to employ the inventors and key contributors of the various technologies. This strategy obviously cannot scale down to the mid-market. So, how do smaller companies benefit from the increasingly complex big data stack? The answer may lie in another technology megatrend: the cloud. 
If vendors of Hadoop as a service and analytics as a service can hide the complexity of the underlying technologies on a cloud platform, the benefits of the complex big data stack might become available to all.
Programmers have to be on their A-game to fix bugs, especially once the software is live and people are actively using it. Depending on the type of bug, you'll have to decide the best way to debug it—with the least amount of impact to the user experience. Things get even more urgent when a security vulnerability is discovered: it's all hands on deck until a solution is implemented that prevents successful exploitation of the system's weakness, removing or mitigating the threat. Finding the problem is just the first step. Then, you have to decide how to fix it—and how to roll the fix out in a way that minimizes the impact to users. That remedy might be delivered via a patch, a hotfix, a coldfix, or a bugfix. Those terms are often used interchangeably, but there are differences in each one based on how a programmer incorporates their solution into the software.

What's a patch? In the early days of computing, a patch was, quite literally, a patch. Early computers used punched cards and paper tapes to input the programs the machines used for performing their calculations. These "decks" contained rows of holes and spaces that were a computer's software, and just like today, the software suppliers would need to make changes to the programming. These updates were distributed on smaller pieces of paper tape or punched cards, and the recipients were expected to cut the bad part of the code out of the deck and patch in the replacement segment—hence the name. Of course, patching has come a long, digital way. Patches for today's computers typically update an existing software version's code by modifying or replacing it using a publicly released executable program. Patches are often temporary fixes between full releases of software packages.
Patches are used to correct both large and small issues that may or may not require immediate attention, such as:
- Fixing a software bug
- Installing new drivers
- Addressing new security vulnerabilities
- Addressing software stability issues
- Upgrading the software

Generally, patches are automatic updates that self-install, in packages of various sizes, from a few kilobytes up to large patches like those for Windows that can be over 100 MB. As any Windows user can confirm, the installation of certain patches (on Patch Tuesday, of course) can cause interruptions and downtime while being installed, and may even require a system restart or two. Most patches are delivered on a set schedule. They can be included in the product's new version release with additional updates and fixes.

What's a hotfix? A hotfix can solve many of the same issues as a patch, but it is applied to a "hot" system—a live system—to fix an issue without creating system downtime or outages. Hotfixes are also known as QFE updates, short for quick-fix engineering updates, a name that illustrates the urgency. Normally, you'll create a hotfix quickly, as an urgent measure against issues that need to be fixed immediately and outside of your normal development flow. Unlike patches, hotfixes address very specific issues, like:
- Adding a new feature, bug fix, or security fix
- Changing a database schema

Importantly, hotfixes are not always publicly released, in contrast to patches. Here's an example: Let's say a bank learns that their banking app could be hacked, exposing user data like passwords, usernames, and account information. Even if the hack hadn't occurred yet, the risk is so significant that merely identifying its potential requires urgent action. The security team will likely drop everything, scrambling to deliver a hotfix that solves the vulnerability as soon as possible, with minimal disruption.

What's a coldfix?
Where a hotfix is executed quickly without restarting any systems or hardware, a coldfix is just the opposite. Implementing a coldfix requires users to log out of the software, and entire systems need to be rebooted for the fix to go into effect. These types of updates are common in online multiplayer games, for example, so they're normally announced several days ahead of time to give users advance notice that the service will be unavailable while the fix is completed. Notices generally include estimated times the service will be back online, since outages can last from a few minutes to several hours depending on the update.

What's a bugfix? We're all familiar with the term bug: a programming defect or glitch that creates errors within a system or software. Removing these bugs is a practice called debugging. Even though the cute name makes these errors sound small and only mildly irritating, like a gnat, developers and programmers can spend a lot of time searching for several common types of errors, such as:
- Syntax or type errors
- Typos and other simple errors
- Implementation errors
- Logical errors

Implementing a bugfix, also known as a program temporary fix (PTF), can be as simple as adding missing parentheses in a piece of code. But the fix can become quite challenging if the symptoms don't clearly point to a cause. For instance, the cause and the symptom may be far apart, with one located in the program code and the other in the execution of the program. Symptoms can also be difficult to reproduce, which makes the problem harder to understand. Once you've uncovered the root cause and issued a bugfix, however, it's not uncommon for your programmers to find that one bugfix actually introduces a new bug. A bugfix sounds a lot like a hotfix, but the difference lies in the timing and execution of the correction.
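To make the missing-parentheses case mentioned above concrete, here is a hypothetical Python snippet in which omitted call parentheses silently change behavior: the function object itself is truthy, so the buggy check passes for every user.

```python
def is_admin(user):
    """Return True only for users whose role is 'admin'."""
    return user.get("role") == "admin"

user = {"name": "alice", "role": "viewer"}

# Buggy: the parentheses are missing, so this tests the function object
# itself, which is always truthy. Every user would be treated as an admin.
buggy_result = bool(is_admin)

# The bugfix: actually call the function with the user record.
fixed_result = is_admin(user)

print(buggy_result, fixed_result)  # True False
```

Note how the symptom (every user gets admin access) appears far from the cause (one missing character in a condition), which is exactly why such bugs can be hard to track down.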
Bugfixes generally describe issues that are found and resolved during the production or testing phases, or even after deployment as part of the normal release cycle of a product. Hotfixes are applied only after the product has been released and is live.

What are bug bounties? As software increases in complexity, debugging before and after a product launches is vital for protecting your brand. Applications are increasingly complex, multi-threaded, and large, with a greater number of developers working on them. All this complexity makes tracking down bugs more difficult and unpredictable. Multithreaded programs:
- Lengthen the time elapsed between the root cause of the bug and its detection.
- Make bugs difficult to track down and reproduce.

Bugs are a risk too big for you to ignore. Programmers will spend weeks hunting them, or even offer bug bounties to get help finding the problems in their code before they can apply the right fix.

How to avoid bugs The only way to avoid bugs, and the time spent fixing them, is to write better code. And until everyone starts writing perfect code, we can expect a few more bugs to get into places they shouldn't be.
IT disasters can strike any business. There are many examples of cases when lack of attention to disaster recovery resulted in business failure, even for large companies, and the cost of recovery after a disaster can be enormous. One may think that serious protection against force majeure is only for big enterprises, while small projects and startups do not need it. In fact, no business is safe from cyber-attacks. Hackers are still targeting data and money: in the third quarter of 2020, the number of ransomware attacks increased by 40 percent over the previous year, with 199.7 million cases worldwide. Add other forms of cyberattack, intruders, and natural disasters, and it becomes clear that every business, no matter how big or small, should have a well-designed business continuity and disaster recovery plan in place.

What is disaster recovery? Disaster recovery (DR) is the capability of an IT infrastructure to recover from a disaster. DR should not be confused with backups – unlike backups, disaster recovery is not just the regular transfer of important data to a safe location, but the creation of a functional copy of the IT infrastructure at a backup site. Unlike backups, which typically run on a schedule (e.g., once every two weeks), data replication for disaster recovery runs non-stop. In the event of a natural disaster, an attack, or an internal failure, the company will be able to keep working, or at least quickly restore operation, without losing important data. With real-time replication, the data restored after a disaster will include changes made in the last minutes; the standard time for providing up-to-date information is 15 minutes. While a backup can take hours or even days to restore your data, disaster recovery allows you to continue working in just a few minutes.

How disaster recovery works There are two approaches to organizing disaster recovery – a backup data center or the cloud.
The first approach involves duplicating the infrastructure to a backup data center. In this case, a second data center is organized, which either completely duplicates the working one or creates and stores copies of business-critical data. Deploying and administering your own disaster recovery infrastructure entails significant expenses. You will have to buy servers and network equipment and allocate dedicated premises for their installation. In addition, it is necessary to take into account the cost of cooling and power, as well as salaries for IT staff. The main expenses include replication software and related costs: security, software licenses for servers, and storage.

The second option is to switch to DRaaS – a cloud service for rapid IT infrastructure recovery. In this case, the cloud provider offers DR as a service. It is cheaper and easier than creating a second IT infrastructure, and the time required to deploy your own disaster recovery system is nothing compared to a ready-made DRaaS solution: setup can take weeks or even months in-house, while with a service provider it can take hours or days, depending on the number of servers you plan to connect to the disaster recovery system in the cloud. You can connect any number of company servers to the DRaaS, from one to all of them, whereas with a self-deployed disaster recovery system it is usually feasible to protect only the most important servers.

Disaster Recovery-as-a-Service at Cloud4U The Cloud4U DRaaS solution is based on replicating the IT infrastructure into our fault-tolerant cloud. If a failure affects the services on the main site, they will be instantly restarted from our cloud. The service is based on VMware's vCloud Availability 3.5, a tool for migrating VMware-based virtual servers to the cloud and back. It enables both migration and disaster recovery scenarios.
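The backup-versus-replication distinction drawn above can be framed as a recovery point objective (RPO): the worst-case window of data you could lose. A minimal sketch, using the two-week backup interval and 15-minute replication lag cited earlier (purely illustrative figures):

```python
def worst_case_data_loss_minutes(copy_interval_minutes):
    """Worst case: the disaster strikes just before the next copy completes,
    so everything written since the previous copy is lost."""
    return copy_interval_minutes

backup_interval = 14 * 24 * 60   # a backup every two weeks, in minutes
replication_lag = 15             # near-real-time replication lag, in minutes

print(worst_case_data_loss_minutes(backup_interval))  # 20160 minutes (~14 days)
print(worst_case_data_loss_minutes(replication_lag))  # 15 minutes
```

The same framing applies to recovery time objective (RTO): hours or days to restore from backup versus minutes to fail over to a replicated site.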
Blockchain as a Tool for Cybersecurity Among cybercrimes, financial fraud in particular can be notoriously hard to detect and easy to cover up. But is blockchain technology, with its distributed digital ledger, a new tool to help organizations reduce risk and improve their overall cybersecurity posture? Or is it potentially a source of further cybersecurity concerns? This session describes:
- Blockchain and its uses beyond digital currencies;
- Whether blockchain is tamper-proof;
- Key cybersecurity risks, including tampering with data prior to storage, brute force attacks, DDoS and zero-day attacks.
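The tamper-proofing question can be illustrated with a minimal hash chain, the core structure behind a blockchain ledger. Each block stores the hash of its predecessor, so altering any earlier record invalidates every later link. This is a toy sketch, not a real consensus system:

```python
import hashlib

def block_hash(prev_hash, data):
    """Hash a block's data together with its predecessor's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    chain = []
    prev = "0" * 64  # genesis placeholder
    for data in records:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any altered record breaks all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = build_chain(["pay A 10", "pay B 5", "pay C 2"])
print(verify(ledger))             # True: the chain is intact
ledger[0]["data"] = "pay A 1000"  # tamper with an early record...
print(verify(ledger))             # False: the tampering is detected
```

Note that this demonstrates tamper evidence, not tamper proofing: an attacker who controls the whole ledger could simply recompute every hash, which is why real blockchains add distributed consensus on top. It also does nothing about data that is tampered with before it is stored, one of the risks listed above.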
Is there anything more annoying than a frozen screen? Imagine a teacher or student trying to utilize wireless technology on campus, only to be let down by slow or spotty wireless coverage. School districts cannot leverage a wireless network with performance issues. Hot spots are no longer good enough: today's classrooms require pervasive wireless access. Only with pervasive wireless access can technology be fully utilized to help innovate the classroom, whether through access to online teaching tools, real-time communication, or other student engagement vehicles. Cisco BYOD Solutions for K12 Education offer flexible options that make a pervasive wireless network an affordable reality.

A pervasive wireless network opens up anytime, anywhere access to enhanced teaching and learning resources. An overwhelming 94% of teachers say Google or other search engines top the list of sources their students use for research*. In that same study, only 18% of students in the country would consider using a textbook for research*. So how, exactly, are these technology-driven educational initiatives taking off inside and outside of the classroom?

Inside the classroom:
– Online standardized tests: Scantrons, pencils, and heavily used erasers are becoming a thing of the past. The forty-five states signed on to the Common Core national standards will make the transition to online testing by 2014-2015**. Online testing can result in a better understanding of a student's comprehension of a subject by allowing more complex questions. It can also lead to faster results and fewer opportunities for test tampering.
– Education apps and YouTube: Educators can now tap into what K12 students love best – apps and video – and use them to engage and connect students in educational discussions. Geography can be brought to life by using YouTube to show students wildlife on safari grounds in Africa.
Educational supplements such as Khan Academy or Starfall provide interactive tools that let children actually see what they are learning.
– E-textbook: The textbook has become outdated. Heavy, book-filled backpacks can be replaced with a single tablet that can hold several thousand textbooks. Read more about the new age (e)-textbook in the first installment of the Cartoon Catalyst Blog Series here.

Outside the classroom: Wireless access is not just required in classrooms, but everywhere on campus. Group projects, studying, and research are all happening both inside and outside the classroom, so hot spots in key areas of the school are no longer the optimal solution. What schools need is seamless, pervasive connectivity reaching outside the classroom as well.
– Anytime, anywhere learning opportunities: BYOD can bridge the gap between classroom and home learning. Mobile devices are a part of students' daily lives, so using them to keep students engaged after school hours makes sense. With BYOD, students are no longer confined to the computer lab or library. A teacher can provide study guides, hand-outs, and other material instantly to their students for use regardless of their location, allowing them to study anytime, anywhere***.
– Collaboration: One of the trademarks of the modern classroom is new-age collaboration. Study groups can easily be formed using social media, for example. Students can meet outside of the classroom, anywhere on campus, using mobile devices to research group projects or help each other with homework without needing to head to a computer lab. BYOD promotes more real-time collaboration, allowing students and teachers to continue discussions outside the classroom and to bring outside resources into a study group immediately, elevating a student's understanding of a given subject with just a quick search on a tablet.
Cisco BYOD Solutions for K12 Education offer flexible solutions for small, medium, and large school districts that make a pervasive wireless network an affordable reality while opening the door for innovation in education. The solutions provide a range of 802.11n access points enabling pervasive wireless access, along with physical or virtual controllers that consolidate management efforts.

Case study: See how Chapel Hill-Carrboro City Schools installed Cisco BYOD Solutions for K12 Education to update their network foundation to support new services and increase accessibility to wireless networks. This reduced operating costs by $60,000 to $80,000 annually and improved efficiency for teachers and students by providing access to wireless networks.

Webcast: Watch the on-demand, 3-part K-12 Education webcast series at Cisco Communities Webinars. Also be sure to visit the Cisco BYOD Solutions for K12 Education page to see how different school districts are innovating education and transforming their classrooms.
In the worst security breach in the history of the company, multiple high-profile Twitter accounts—including those of former President Barack Obama, Joe Biden, Elon Musk, Kanye West, and more—were hacked yesterday, with tweets promoting a bitcoin scam sent from all of the accounts. After Twitter initially removed many of the messages, in some cases similar tweets were sent out a second time. Struggling to regain control, Twitter took the drastic step of disabling the ability of verified accounts to send tweets. The company let its users know via tweet that they might not be able to post on the platform or reset their passwords while the situation was being investigated. The service was finally restored around 8:30pm on July 15th.

So what happened? What we do know is that individual accounts were not hacked. Twitter announced the source of the breach was a "coordinated social engineering attack" affecting several employees that allowed the hacker(s) to gain entry to the company's internal systems.

What are social engineering attacks? Social engineering is the practice of psychologically manipulating people into revealing confidential information. Some common forms are:

Phishing is a type of email or text scam that entices recipients to click on a malicious link or attachment. There are different types of phishing attacks. A phishing email might trick its recipient into logging in to a spoofed website in order to extract the victim's username and password, or ask the victim to download a fraudulent attachment, which is actually malware. These attacks are successful because the spoofed emails are often indistinguishable from legitimate emails, aside from small changes to the "from" field, the link URL, or the spoofed company's website.

In pretexting, a bad actor creates a fabricated scenario, or pretext, in order to gain sensitive information. The person may impersonate a coworker or other trusted party.
These situations often include some dialogue and back-and-forth in order to set up the false narrative, and they often target employees in finance or HR.

Scareware, posing as system messages or arriving via spam email offering false services, bombards users with fictitious threats. An example is a popup banner, designed to look legitimate, that reads, "Your computer may be infected with spyware. Install this tool now." The "tool" is actually malware that will infect the user's computer.

The massive bitcoin scam tweeted out by celebrities and politicians yesterday was actually a form of baiting. Baiting is similar to phishing, except that in these situations the scammer looks to leverage the victims' curiosity or greed through promises of money or items. While there is some speculation, it is still unclear what type of social engineering attack(s) allowed hackers to gain access to Twitter's internal systems.

Don't fall victim to an experience like Twitter's. A password manager can help prevent data breaches. Learn more and start a free 14-day trial of Dashlane today—no credit card required.

What does this mean for businesses vulnerable to a similar attack? According to Intel Security, 97% of people around the world are unable to identify a sophisticated phishing email, and attacks typically don't target seasoned security professionals—instead, criminals focus their efforts on the employee base, or on specific individuals within a business who tend to yield the most power or access. So step one is raising employee security awareness, which can help decrease your exposure to numerous cybersecurity threats. While education is important, however, having the right tools and processes in place is just as integral. According to the 2020 Verizon DBIR, 80% of data breaches could be traced to a weak or reused employee password. A password manager is the best first line of defense against a data breach.
By encouraging and enabling employees to change their poor security and password behavior, a password manager minimizes your organization's attack surface and shores up one of your biggest vulnerabilities. Read best practices and tips about cybersecurity in "A Practical Guide to Cybersecurity".
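The spoofed-link trick described under phishing above can be checked mechanically: compare the domain a message claims to come from against the domain its links actually point to. A simplified sketch (real mail filters use far richer signals, and the naive domain parsing here ignores multi-part TLDs such as co.uk):

```python
from urllib.parse import urlparse

def registered_domain(host):
    """Naive: keep the last two labels of a hostname."""
    return ".".join(host.lower().split(".")[-2:])

def link_matches_sender(sender_address, link_url):
    """True if a link's domain matches the sender's claimed domain."""
    sender_domain = registered_domain(sender_address.split("@")[-1])
    link_domain = registered_domain(urlparse(link_url).hostname or "")
    return sender_domain == link_domain

# Legitimate: the link goes to the sender's own domain.
print(link_matches_sender("support@example.com",
                          "https://login.example.com/reset"))   # True

# Suspicious: the link domain differs from the sender by one character.
print(link_matches_sender("support@example.com",
                          "https://login.examp1e.com/reset"))   # False
```

All domain names here are invented for illustration. A mismatch is only a signal, not proof of phishing (legitimate mail often links to third-party services), which is why awareness training still matters.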
“COVID-19!” The mere mention of the word sends an ominous shiver down our spines! Such has been the phenomenal negative buzz associated with the rise and spread of the novel coronavirus, from Wuhan, China, to what appears to be every nook and corner of the world!

However, as the US Surgeon General tweeted a few days ago, “Seriously people- STOP BUYING MASKS! They are NOT effective in preventing general public from catching #Coronavirus, but if healthcare providers can’t get them to care for sick patients, it puts them and our communities at risk! Washing your hands, staying home when sick and other ‘everyday preventive actions’ are the best protections” He went on to urge people to get a flu shot, as fewer flu patients means more resources to fight the coronavirus. This tweet came during what has become a mask boom. With coronavirus popping up in the United States, some have begun buying face masks as a form of protection, despite the likes of the US Centers for Disease Control and Prevention saying they’re unnecessary.

In this chaotic environment, each of us has a civic duty to protect not just ourselves but also our families and the people around us to prevent further outbreak. If each one of us does our bit, we could stop COVID-19 from spreading further and help end this global epidemic.

According to the World Health Organization’s (WHO) coronavirus website, “COVID-19 is the infectious disease caused by the most recently discovered coronavirus. This new virus and disease were unknown before the outbreak began in Wuhan, China, in December 2019.” The virus was not known to be infecting humans before December 2019. Though it is less deadly than SARS (a previous virus-caused pandemic), it is more infectious. The infected count for COVID-19 has gone up to 92,819, with a mortality rate of approximately 3%. To date, there is no specific anti-viral medicine to treat, and no vaccine to prevent, the COVID-19 virus.
Infected patients’ health can be improved by supportive care, and people with serious illness should be hospitalized. In some cases the infection’s symptoms don’t show for a long time, so it is best to be careful: avoid joining large gatherings and avoid non-essential travel to distant lands, as one might not know who is infected and who is not, nor whether proper care would be available if one did get infected.

Risk of Misinformation Given the global state of panic, there are bound to be misinformation, misleading statements, and false claims. We must beware of getting caught in these loops of misinformation and get our questions answered only from reliable sources. You can find answers to the most common questions at the following FAQ website maintained by WHO: https://www.who.int/news-room/q-a-detail/q-a-coronaviruses

Avoiding Fraudsters during the Crisis As always happens at times of panic and mass hysteria during a pandemic, there are people out there working to exploit the situation. According to WHO, many people have been receiving fraudulent emails claiming to be from WHO and offering “remedies” for COVID-19 at a bargain price. Please be aware that no cure has yet been found for COVID-19; all such emails offering remedies are scams. Both the FDA and the WHO have asked the public to avoid opening such emails and to stay away from scammers, who will rip you off and gather your personal information. Some tips to help you stay away from fraud schemes:
- Do not open emails or click links you don’t recognize.
- For information, visit the authentic CDC and WHO websites. Don’t believe emails and information merely claiming to be from the CDC or WHO.
- Don’t purchase or invest in medications claiming a cure for COVID-19. There is no cure yet, and for any change in the situation WHO is your best source of information.
- If you are making any donation for coronavirus relief or research, make sure the organizations are legitimate.
Computationally intensive research creates insatiable demand for faster supercomputers. The Energy Department's Oak Ridge National Laboratory and Cray announced a $200 million deal in June to complete the world's most powerful supercomputer in 2008. The supercomputer, which Cray nicknamed Baker, will use optimized Advanced Micro Devices dual-core Opteron processors to reach a peak speed of a petaflop: 1,000 trillion floating-point operations/sec, or 1,000 teraflops. In comparison, the average PC reaches speeds of about 0.0001 teraflops.

Later in June, DOE's Lawrence Livermore National Laboratory and IBM teamed to announce they had deployed the most powerful software computer code for the world's current most powerful supercomputer, Blue Gene. The computer code, dubbed Qbox, will help researchers run science simulations deemed essential to national security. Researchers say the DOE labs' race for greater supercomputing speed will generate new ideas and technologies for classified and unclassified science.

'Intellectual competition creates a perpetual game of leap frog, where each new system eclipses the previous leader,' said Dan Reed, director of the Renaissance Computing Institute and a member of the President's Council of Advisors on Science and Technology (PCAST). 'This friendly competition is both healthy and inevitable,' Reed added.

DOE's weapons program requires large, detailed models and large computing capacity. In unclassified research, access to high-performance computing systems benefits climate and atmospheric modeling, quantum chemistry and physics, materials science, engineering, and manufacturing. PCAST is conducting an assessment of the federal government's Networking and Information Technology Research and Development portfolio, which includes a multidecade road map for computational science. 'Software remains one of the great impediments in high-performance computing,' Reed said.
'Our programming models remain very low level, software development costs are high, and great human effort is required to extract substantial fractions of peak performance from current systems.' Reed said the solution is a sustained, coordinated software R&D program. 'Computing really is the third pillar of science, and its potential is limited only by our imaginations,' he said. Petascale systems, such as the one in development at Oak Ridge, will expand research opportunities. It happens with each new order-of-magnitude increase in computing power, Reed said. 'There are already discussions of transpetascale systems,' he added. Assembling systems with high-peak performance is relatively easy. The challenge for petascale systems is sustained performance rather than peak performance, Reed said. 'We're not there yet.' Oak Ridge scientists will use the unclassified Cray supercomputer to solve problems in nanotechnology, biology and energy, Cray officials said. Industrial researchers will also get time on the system through a DOE program that grants academic and corporate institutions supercomputer access for computationally intensive research that has national interest. For instance, this year, guest researchers from Boeing and DreamWorks Animation SKG will run simulations to help design more efficient aircraft and improve computer animation, respectively. 'There's an almost insatiable demand for computing power,' Cray spokesman Steve Conway said. 'The more they can get, the better off the science is going to be.' Lawrence Livermore's Qbox operates at a level comparable to an online game with 300 million simultaneous players. With the Qbox application, Blue Gene achieved a sustained performance of 207.3 teraflops. Blue Gene, a classified machine, belongs to the National Nuclear Security Administration. 
The IBM-built Blue Gene provides analysis that NNSA needs to safeguard the nuclear weapons stockpile without going underground to test the weapons, said Dimitri Kusnezov, who leads NNSA's Advanced Simulation and Computing Program. The code enhancements for Blue Gene are critical to performing predictive simulations of nuclear weapons, Kusnezov said. 'These simulations are vital to ensuring the safety and reliability of our nuclear weapons stockpile.' NNSA researchers are attempting to decipher the radioactive decay of materials buried decades ago. 'We have to figure out what aging means and then put [the material] under extreme conditions and see if [it] behaves like we think it's going to behave,' Kusnezov said. 'Before, we used to take [the materials] to Nevada and blow them up,' he added. 'That's why we're pushing our computing so aggressively.'
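The scale comparison at the top of this article is easy to make concrete. Using only the figures quoted there (a petaflop system versus a PC at roughly 0.0001 teraflops):

```python
PETAFLOP_IN_TERAFLOPS = 1_000   # 1 petaflop = 1,000 teraflops
pc_teraflops = 0.0001           # the article's estimate for an average PC

# Ratio of the planned petaflop machine's peak speed to a typical PC.
speedup = PETAFLOP_IN_TERAFLOPS / pc_teraflops
print(f"{speedup:,.0f}x")       # roughly a 10,000,000x gap in peak speed
```

As Reed notes above, peak numbers like these overstate what applications actually achieve; Qbox's 207.3 sustained teraflops on Blue Gene is notable precisely because it is a sustained, not peak, figure.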
In today’s highly technological world, home security systems are extremely advanced. From 24-hour video surveillance to electromagnetic lock systems, the number of ways to improve your home security is nearly endless. But where did this all start? And better yet, how did this all start?

The history of home security systems dates back to the early 1700s, when English inventor Tildesley created the first home intrusion “door alarm.” Even though his design was simple by nature, a set of wind chimes linked to the door handle, it proved an effective tool in deterring home invaders. The latter half of the 1700s also saw the creation and implementation of some of the first modern door locks, as the lever tumbler lock came onto the market in 1778.

Decades later, in 1853 in Boston, Massachusetts, Augustus Russell Pope patented the very first electromagnetic alarm system. Pope’s invention is widely recognized as the foundation for the majority of our modern-day burglar alarm systems. As it gained popularity, Pope’s invention made its way into New York City, where engineer George F. Milliken built upon Pope’s design by extending the electromagnetic circuit from the door to the windows. However, public opinion about electricity in the mid-1800s was characterized by fear. After all, not many systems relied on electricity in those times, and there was ample reason to believe that having electric-powered systems in the home could endanger families.

So how did home security systems make their way into nearly every American household? That feat was in large part accomplished by Edward Holmes. Recognizing the necessity of electric security systems, Holmes began to campaign for home security by flooding the public with pamphlets filled with testimonials and endorsements from people who had the systems installed in their homes.
He also created campaigns that addressed public fears about electricity, which helped them feel more comfortable about using electricity. Years later in the 20th century, electric home security system companies partnered with telephone and telegraph companies to link home security system triggers to emergency call centers. And with some technological advances and tweaks in both design and function, the rest is history!
Google is moving workloads between data centers to boost its use of renewable energy, shifting the data processing for YouTube videos and Google Photos to locations where green power is plentiful. It’s a development that creates powerful new opportunities to build green cloud applications powered by solar, wind and geothermal energy. Today’s news builds on Google’s carbon-aware computing strategy, and demonstrates how software and network connectivity can transcend traditional limitations on green computing. This is a growing priority for hyperscale cloud builders and their users, as climate-driven disasters bring new urgency to sustainability. Last year Google began shifting the timing of workloads within data centers to match their energy use to the availability of renewable sources. That laid the foundation for the next step – moving the actual compute capacity to more favorable locations. “Google can now shift moveable compute tasks between different data centers, based on regional hourly carbon-free energy availability,” said Ross Koningstein, co-founder, Carbon-Intelligent Computing at Google. “This includes both variable sources of energy such as solar and wind, as well as “always-on” carbon-free energy such as our recently announced geothermal project. This moves us closer to our goal of operating on carbon-free energy everywhere, at all times, by 2030.” Moving data and applications between data centers in real time requires sophisticated software and LOTS of network capacity. As the world’s largest computing platform, Google has the resources to be a trailblazer in this effort, which aligns with its corporate commitments to reduce its climate impact. As part of that commitment, Google will soon begin using geothermal energy to power its data centers in Nevada with carbon-free energy, teaming with energy startup Fervo to use fiber optic sensors to tap the earth’s own heat to power Google Cloud servers.
Addressing the Location Challenge

The recent growth of cloud computing has sharpened the focus on how data centers can retool the economy for a sustainable future. As the COVID-19 pandemic shifts more essential activities online, the carbon impact of the world’s IT infrastructure becomes even more critical in addressing climate change. Google’s announcement brings the realization of a futuristic vision for workloads to “follow the sun” to create solar-powered IT applications. Google has added wind and geothermal power, creating a broader “follow the green” strategy that integrates a wider array of renewable sources. Workload mobility addresses a number of challenges in matching green power to IT operations. Renewable electricity is linked to geography and weather, and most widely available in regions with sunny or windy conditions. There’s also the intermittent nature of renewable energy – solar power is only available during daylight hours. Wind energy can be used at night, but not when the wind dies down. “The amount of computing going on at any given data center varies across the world, increasing or decreasing throughout the day,” said Koningstein. “Our carbon-intelligent platform uses day-ahead predictions of how heavily a given grid will be relying on carbon-intensive energy in order to shift computing across the globe, favoring regions where there’s more carbon-free electricity. The new platform does all this while still getting everything that needs to get done, done — meaning you can keep on streaming YouTube videos, uploading Photos, finding directions or whatever else.” The first workloads using this location shifting include media processing tasks, in which Google encodes, analyzes and processes millions of multimedia files like videos uploaded to YouTube, Photos and Drive.
“Like many computing jobs at Google, these can technically run in many places (of course, limitations like privacy laws apply),” said Koningstein. “Now, Google’s global carbon-intelligent computing platform will increasingly reserve and use hourly compute capacity on the most clean grids available worldwide for these compute jobs — meaning it moves as much energy consumption as possible to times and places where energy is cleaner, minimizing carbon-intensive energy consumption.” Significantly, Google Cloud’s developers and customers can also prioritize cleaner grids, boosting their use of carbon-free energy by assigning apps to regions with better carbon-free energy (CFE) scores. Where might workloads shift within Google’s global network? The regions with the highest current CFE ratings include Oregon (89%), Sao Paulo, Brazil (87%), Council Bluffs, Iowa (78%) and Hamina, Finland (77%). At the low end of the scale are Singapore (3%) and Sydney, Australia (11%). Google provides CFE data for all of its GCP regions to help companies incorporate carbon-free energy into their location strategies, which may also include criteria on latency and data sovereignty.

How Google’s Carbon-Aware Computing Works

Google says its carbon-aware platform is now in use at every Google data center. The system compares forecasts for the average hourly carbon intensity of a local electrical grid, and how it will change over the course of a day. Google has also developed an internal tool to predict the hourly power resources that a data center needs to carry out its compute tasks during the same period. The company then uses the datasets to optimize its operations on an hour-by-hour basis, aligning compute tasks with times of low-carbon electricity supply. Adding the CFE scores allows Google to optimize workloads for both time and location. Google said it will share its methodology and performance results with the industry in upcoming research publications.
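The hour-by-hour, region-by-region optimization described here can be illustrated with a small sketch. To be clear, this is not Google's platform: the region names, the forecast numbers and the simple greedy selection rule are all illustrative assumptions.

```python
# Hypothetical day-ahead carbon-intensity forecasts (gCO2eq/kWh) per region,
# indexed by hour. A carbon-aware scheduler picks the region/hour pair with
# the lowest forecast intensity for a movable batch job.

def pick_greenest_slot(forecasts):
    """Return the (region, hour) with the lowest forecast carbon intensity."""
    best_region, best_hour, best_intensity = None, None, float("inf")
    for region, hourly in forecasts.items():
        for hour, intensity in enumerate(hourly):
            if intensity < best_intensity:
                best_region, best_hour, best_intensity = region, hour, intensity
    return best_region, best_hour

forecasts = {
    "oregon":    [120, 90, 60, 45],     # e.g. a sunny afternoon ahead
    "singapore": [480, 470, 465, 460],  # a consistently carbon-heavy grid
}

print(pick_greenest_slot(forecasts))  # -> ('oregon', 3)
```

A real scheduler would of course weigh capacity, latency and data sovereignty alongside the carbon forecast, as the article notes.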
“We hope that our findings inspire other organizations to deploy their own versions of a carbon-intelligent platform, and together, we can continue to encourage the growth of carbon-free electricity worldwide,” the company said. Sustainability has been a huge priority for Google, which has been a leader in green innovation in the industry. Google optimizes every aspect of data center operations, from the chips powering servers to the power infrastructure and cooling systems. Google’s relentless focus on efficiency yielded huge savings in electricity, slashing the amount of carbon needed to operate its Internet business. Google’s data center team has focused on procuring renewable energy to power its operations instead of electricity sources based on coal. Google’s use of power purchase agreements (PPAs) for renewable energy has been adopted by other cloud providers and data center REITs. Google has matched its electricity consumption with renewable energy purchases in each of the past three years, purchasing 1.1 gigawatts in 2019. Most of that green energy goes to support Google’s massive network of data centers, which power everything from YouTube videos to Gmail to every query you type into the search field. Today’s announcement brings Google closer to its goal of using renewable energy to power every hour of operation of its data centers, around the clock and around the globe. Google isn’t alone in this effort. Last year Microsoft announced plans to be carbon negative by 2030, and begin tracking the climate impact of its vendors, while Amazon and Apple have procured substantial amounts of renewable energy to support their data centers. Multi-tenant data center developers like Switch, Digital Realty, Aligned and Iron Mountain have also lined up green energy for their data center clients.
Packet Switching Explained
January 10, 2019

What is packet switching? Packet switching is a networking communication method used in telecommunications systems, whereby data is grouped into blocks called packets and routed through a network using a destination address contained within each packet. By breaking the communication information down into packets, the same path can be shared among many users in a network. It also means that each packet can take a different route to its destination. This form of connection (between sender and receiver) is known as connectionless (as opposed to dedicated). Regular voice telephone networks are often circuit-switched rather than packet-switched, whereby for the duration of the call connection, all the resources on that circuit are unavailable to other users. An individual packet is a unit of data which is routed between an origin and a destination via the internet or another packet-switched network. If a file such as an email, GIF or HTML file is sent via packet-based transmission, the TCP (Transmission Control Protocol) layer of the TCP/IP stack will divide the file into chunks that are optimised sizes for routing. Each chunk is known as a packet and is separately numbered and carried along with a destination address. The packets may then travel on different routes to the destination, but when they all arrive, they are reassembled into the original format (by the TCP layer at the receiving end).

Packet Structure and Content

All packets have a basic architecture of a header, payload and footer, but depending on the protocol being used they may also contain further information, such as the following. In order to route a packet over a network, the packet must contain two network addresses: the first being the source address of the sending host, and the second being the destination address of the receiving host. Packets also contain checksums, parity bits or cyclic redundancy checks.
These each detect errors that may occur during transmission. A preliminary calculation is performed at the transmitter before the packet is sent. Once the packet is received, the checksum is recalculated and compared with the first calculation, which is contained in the packet. If any discrepancies are noticed, the packet can be corrected or discarded. Any discarded information is known as packet loss and will be dealt with by the network protocol. When a packet is faulty, it can end up traversing a closed circuit. If nothing were done about this, such packets could eventually build up, congesting the network to the point of failure. To tackle this, a ‘time to live’ field is added to the packet. The value of this field is decreased by one each time the packet passes through a network node. If the field reaches zero, routing has failed to transmit the data successfully and the packet will be discarded. Some packets will contain a field that identifies the overall packet length. In some network types, however, the length is implied by the duration of the transmission, so this information is deemed unnecessary and is omitted. When a network implements QoS (Quality of Service), it gives priority to certain types of packets over others. The priority field is used to identify which packet queue should be used. A high-priority queue is emptied more quickly than a lower-priority queue at points in the network where congestion is occurring. The payload is fairly self-explanatory. It is the core data that is being carried on behalf of an application. It can vary in length, but the network protocol, as well as equipment used en route, will usually set a maximum length. Some networks will break larger packets up into smaller packets if the payload is too large. Carritech supply, purchase, repair & refurbish, and provide ongoing support for many telecommunication products which utilise packet switching methods.
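The ideas above — fragmentation, sequence numbering, checksums and a time-to-live field — can be sketched in a few lines. The toy header layout and the byte-sum checksum are illustrative assumptions; real networks use CRCs and protocol-defined header formats.

```python
# A minimal sketch of packet framing, fragmentation and reassembly, using a
# toy header (sequence number, TTL, checksum) rather than any real protocol.

def checksum(data: bytes) -> int:
    # Toy checksum: sum of bytes modulo 65536 (real networks use CRCs).
    return sum(data) % 65536

def fragment(message: bytes, size: int, ttl: int = 8):
    packets = []
    for seq, start in enumerate(range(0, len(message), size)):
        payload = message[start:start + size]
        packets.append({"seq": seq, "ttl": ttl,
                        "checksum": checksum(payload), "payload": payload})
    return packets

def forward(packet) -> bool:
    # Each hop decrements TTL; a packet whose TTL hits zero is discarded.
    packet["ttl"] -= 1
    return packet["ttl"] > 0

def reassemble(packets) -> bytes:
    # Packets may arrive out of order; verify each checksum, then sort by seq.
    good = [p for p in packets if checksum(p["payload"]) == p["checksum"]]
    return b"".join(p["payload"] for p in sorted(good, key=lambda p: p["seq"]))

pkts = fragment(b"packet switching shares one path", size=8)
pkts.reverse()  # simulate out-of-order arrival
assert all(forward(p) for p in pkts)  # one hop: every TTL is still positive
print(reassemble(pkts))  # -> b'packet switching shares one path'
```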
What Is Data Scraping?

Data scraping, or web scraping, is a process of importing data from websites into files or spreadsheets. It is used to extract data from the web, either for personal use by the scraping operator, or to reuse the data on other websites. There are numerous software applications for automating data scraping. Data scraping is commonly used to:
- Collect business intelligence to inform web content
- Determine prices for travel booking or comparison sites
- Find sales leads or conduct market research via public data sources
- Send product data from eCommerce sites to online shopping platforms like Google Shopping

Data scraping has legitimate uses, but is often abused by bad actors. For example, data scraping is often used to harvest email addresses for the purpose of spamming or scamming. Scraping can also be used to retrieve copyrighted content from one website and automatically publish it on another website. Some countries prohibit the use of automated email harvesting techniques for commercial gain, and it is generally considered an unethical marketing practice.

Data Scraping and Cybersecurity

Data scraping tools are used by all sorts of businesses, not necessarily for malicious purposes. These include marketing research and business intelligence, web content and design, and personalization. However, data scraping also poses challenges for many businesses, as it can be used to expose and misuse sensitive data. The website being scraped might not be aware that their data is collected, or what is being collected. Likewise, a legitimate data scraper might not store the data securely, allowing attackers to access it. If malicious actors can access the data collected through web scraping, they can exploit it in cyber attacks. For example, attackers can use scraped data to perform:
- Phishing attacks—attackers can leverage scraped data to sharpen their phishing techniques.
They can find out which employees have the access permissions they want to target, or if someone is more susceptible to a phishing attack. If attackers can learn the identities of senior staff, they can carry out spear phishing attacks, tailored to their target.
- Password cracking attacks—attackers can crack credentials to break through authentication protocols, even if the passwords aren’t leaked directly. They can study publicly available information about your employees to guess passwords based on personal details.

Data Scraping Techniques

Here are a few techniques commonly used to scrape data from websites. In general, all web scraping techniques retrieve content from websites, process it using a scraping engine, and generate one or more data files with the extracted content. The Document Object Model (DOM) defines the structure, style and content of an XML file. Scrapers typically use a DOM parser to view the structure of web pages in depth. DOM parsers can be used to access the nodes that contain information and scrape the web page with tools like XPath. For dynamically generated content, scrapers can embed web browsers like Firefox and Internet Explorer to extract whole web pages (or parts of them). Companies that use extensive computing power can create vertical aggregation platforms to target particular verticals. These are data harvesting platforms that can be run on the cloud and are used to automatically generate and monitor bots for certain verticals with minimal human intervention. Bots are generated according to the information required by each vertical, and their efficiency is determined by the quality of data they extract. XPath is short for XML Path Language, which is a query language for XML documents. XML documents have tree-like structures, so scrapers can use XPath to navigate through them by selecting nodes according to various parameters.
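The DOM-plus-XPath approach described above can be sketched with Python's standard library, which supports a limited XPath subset. The HTML fragment and class names below are invented for the example; real scrapers typically use more capable parsers that tolerate messy real-world HTML.

```python
# A minimal sketch of DOM parsing with XPath-style selection, assuming the
# input is well-formed XML/XHTML (stdlib ElementTree requires this).
import xml.etree.ElementTree as ET

page = """
<html>
  <body>
    <div class="product"><span class="price">19.99</span></div>
    <div class="product"><span class="price">4.50</span></div>
  </body>
</html>
"""

root = ET.fromstring(page)
# Select every price node, wherever it sits in the tree.
prices = [float(s.text) for s in root.findall(".//span[@class='price']")]
print(prices)  # -> [19.99, 4.5]
```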
A scraper may combine DOM parsing with XPath to extract whole web pages and publish them on a destination site. Google Sheets is a popular tool for data scraping. Scrapers can use the IMPORTXML function in Sheets to scrape from a website, which is useful if they want to extract a specific pattern or data from the website. This command also makes it possible to check if a website can be scraped or is protected.

How to Mitigate Web Scraping

For content to be viewable, web content usually needs to be transferred to the machine of the website viewer. This means that any data the viewer can access is also accessible to a scraping bot. You can use the following methods to reduce the amount of data that can be scraped from your website.

Rate Limit User Requests

The rate of interaction for human visitors clicking through a website is relatively predictable. For example, it is impossible for a human to go through 100 web pages per second, while machines can make multiple simultaneous requests. The rate of requests can indicate the use of data scraping techniques that attempt to scrape your entire site in a short time. You can rate limit the number of requests an IP address can make within a particular time frame. This will protect your website from exploitation and significantly slow down the rate at which data scraping can occur.

Mitigate High-Volume Requesters with CAPTCHAs

Another way to slow down data scraping efforts is to apply CAPTCHAs. These require website visitors to complete a task that would be relatively easy for a human but prohibitively challenging for a machine. Even if a bot can get past the CAPTCHA once, it will not be able to do so across multiple instances. The drawback of CAPTCHA challenges is their potential negative impact on user experience.

Regularly Modify HTML Markup

A data scraping bot needs consistent formatting to be able to traverse a website and parse useful information effectively.
You can interrupt the workflow of a bot by modifying HTML markup elements on a regular basis. For example, you can nest HTML elements or change various markup aspects, which will make it more difficult to scrape consistently. Some websites implement randomized modifications whenever they are rendered, in order to protect their content. Alternatively, websites can modify their markup code less frequently, with the aim of preventing a longer-term data scraping effort.

Embed Content in Media Objects

This is a less popular method of mitigation that involves media objects such as images. To extract data from image files, you need to use optical character recognition (OCR), as the content doesn’t exist as a string of characters. This makes the process of copying content much more complicated for data scrapers, but it can also be an obstacle to legitimate web users, who will not be able to copy content from the website and must instead retype or memorize it. However, the above methods are partial and do not guarantee protection against scraping. To fully protect your website, deploy a bot protection solution that detects scraping bots, and is able to block them before they connect to your website or web application.

Scraping Bot Protection with Imperva

Imperva provides Advanced Bot Protection, which prevents business logic attacks from all access points – websites, mobile apps and APIs. Gain seamless visibility and control over bot traffic to stop online fraud through account takeover or competitive price scraping. Beyond bot protection, Imperva provides comprehensive protection for applications, APIs, and microservices:

Web Application Firewall – Prevent attacks with world-class analysis of web traffic to your applications.

Runtime Application Self-Protection (RASP) – Real-time attack detection and prevention from your application runtime environment goes wherever your applications go. Stop external attacks and injections and reduce your vulnerability backlog.
API Security – Automated API protection ensures your API endpoints are protected as they are published, shielding your applications from exploitation.

DDoS Protection – Block attack traffic at the edge to ensure business continuity with guaranteed uptime and no performance impact. Secure your on-premises or cloud-based assets – whether you’re hosted in AWS, Microsoft Azure, or Google Public Cloud.

Attack Analytics – Ensures complete visibility with machine learning and domain expertise across the application security stack to reveal patterns in the noise and detect application attacks, enabling you to isolate and prevent attack campaigns.
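To make the earlier rate-limiting suggestion concrete, here is a toy in-memory sliding-window limiter. The window length and threshold are illustrative assumptions; production limiters usually run in a reverse proxy or a shared store rather than in application memory.

```python
# A minimal sketch of per-IP rate limiting with a sliding one-second window.
import time
from collections import defaultdict, deque

WINDOW = 1.0      # seconds
MAX_REQUESTS = 5  # per IP per window (illustrative threshold)

_requests = defaultdict(deque)

def allow(ip: str, now: float = None) -> bool:
    now = time.monotonic() if now is None else now
    q = _requests[ip]
    while q and now - q[0] > WINDOW:   # drop timestamps outside the window
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False                   # likely a scraper: throttle or block
    q.append(now)
    return True

# A burst of 7 requests in the same instant: the first 5 pass, the rest fail.
burst = [allow("203.0.113.9", now=100.0) for _ in range(7)]
print(burst)  # -> [True, True, True, True, True, False, False]
```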
A Web Application Firewall or WAF is a network security system that helps protect web applications from various types of attacks by making sure that a web server only receives legitimate traffic. Firewalls are systems that monitor and control traffic that enters and leaves the network, acting as a barrier between your network and the open internet. A web application firewall is a specific type of firewall that focuses on the traffic going to and leaving web apps. Standard firewalls act as the first level of security, but today’s websites and web services need more than that, and this is where WAFs provide specialized capabilities and thwart attacks specifically aimed at the applications themselves.

How Does a Web Application Firewall (WAF) Work?

A WAF works by filtering, monitoring, and blocking suspicious HTTP/S traffic between a web application and the internet. Implementing traditional firewalls has been a basic cybersecurity practice for a while. These are deployed around networks and operate at Layers 3 to 4 of the Open Systems Interconnection (OSI) model. Their role is limited to inspecting packets over the IP and TCP/UDP protocols and filtering traffic based on IP addresses, protocol types and port numbers. A WAF, on the other hand, operates at Layer 7 (L7) of the OSI model and can understand web application protocols. WAFs are essential for analyzing the traffic going to and from a web application and for preventing attacks that might otherwise go undetected by a traditional network firewall, and they can be used as part of a positive or negative security model. When deployed, a WAF acts as a reverse-proxy shield between an application and the internet. A proxy server is an intermediary that protects a client machine. A reverse proxy, on the other hand, ensures that clients pass through it before reaching a server.
Crucially, a WAF can be used to protect multiple applications placed behind it. A WAF uses a set of rules called policies to filter out malicious traffic that tries to take advantage of application vulnerabilities, including the OWASP Top 10. These security policies are often based on known web attack signatures, with scan points like HTTP headers, the HTTP request body and the HTTP response body. The set of rules can also be specified to detect patterns in URLs or file extensions, to restrict URI, header and body length, and to detect SQL/XSS injection, zero-day exploits and even bots based on their signatures and behavior. The key benefit of using a WAF is that these policies can be modified and implemented quickly and with ease. Some WAF providers also provide functionalities for load balancing, SSL offloading, and intelligent automation of these policy modifications using machine learning to optimize your cloud security. This makes it easy to adapt and respond to varying attack vectors and to provide Distributed Denial of Service (DDoS) protection. On its own, a WAF cannot protect against all attacks, but it can enhance web application security to protect against these common attacks.

Cross-site request forgery (CSRF): These are attacks that force authenticated users of a web application to take actions that compromise the security of the app. Usually, an attacker tricks the user into clicking on a link sent via email. Once the user authentication and logins are completed, the user can be forced to perform requests such as transferring funds or changing their profile details and email addresses. If the attack is aimed at an admin account and becomes successful, it could compromise the entire web application.

SQL injection: These are attacks where the attackers try to inject malicious SQL commands into websites and applications which have user-input data fields such as contact forms.
The injected code can gain unauthorized access to databases and run commands to extract or modify private information contained in the databases.

What Are The Different Types of WAFs?

A WAF protects web applications by utilizing threat intelligence and blocking attacks that satisfy certain pre-set criteria while allowing approved traffic. They help protect against cross-site request forgery, cross-site scripting, SQL injection, and file inclusion, where attackers try to gain unauthorized access to an application to steal sensitive data or compromise the application itself. A WAF can be one of three types, based on the way it is implemented.

Network-based WAF: This is usually a hardware-based WAF and is installed locally. This means that it is placed close to the server and is, therefore, easier to access. As is the case with hardware-based deployments, they help minimize latency but can be expensive to store and maintain.

Host-based WAF: A host-based WAF is one that is fully integrated into an application’s software. It exists as a module inside the application server. This type of WAF is less expensive than a network-based WAF and is more customizable. On the downside, they can drain the local server resources and affect the performance of the application. They can also be complex to implement and maintain.

Cloud-based WAF: A cloud-based WAF is more affordable and requires fewer on-premises resources to manage. They are easier to implement and are often delivered as SaaS by a vendor, offering a turnkey installation as simple as changing the DNS to redirect web traffic. Because of the cloud service model, they also have minimal upfront cost and can be continuously updated to keep up with the latest attacks in the threat landscape. CDNetworks offers a cloud-based WAF that is integrated with our global data centers and content delivery network (CDN) and prevents web application-layer attacks in real time.
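The policy-based filtering described above can be sketched as a tiny rule engine. The two regex signatures below are simplistic illustrations; real WAF rule sets covering the OWASP Top 10 are far larger and carefully tuned against false positives.

```python
# A minimal sketch of signature-based request inspection, with toy policies.
import re

POLICIES = [
    ("sql_injection",
     re.compile(r"(\bUNION\b.+\bSELECT\b|'\s*OR\s+'1'\s*=\s*'1)", re.I)),
    ("xss",
     re.compile(r"<script\b", re.I)),
]

def inspect(request_body: str):
    """Return the name of the first matching rule, or None if clean."""
    for name, pattern in POLICIES:
        if pattern.search(request_body):
            return name
    return None

print(inspect("name=alice&city=Springfield"))       # -> None
print(inspect("q=' OR '1'='1"))                     # -> sql_injection
print(inspect("comment=<script>alert(1)</script>")) # -> xss
```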
A distributed denial of service (DDoS) attack is a malicious attempt to sabotage a network by overwhelming its ability to process legitimate traffic and requests. In turn, this activity denies the victim of a service, while causing downtime and costly setbacks. A DDoS attack is a network-based attack; it exploits network-based internet services like routers, domain name service (DNS) and network time protocol (NTP), and is aimed at disrupting network devices that connect your organization to the internet. Such devices include routers (traditional WAN, as well as ISP edge routers), load balancers and firewalls. There is quite a bit of confusion among information technology (IT) professionals about the difference between a DDoS attack and a standard denial of service (DoS) attack. A DDoS attack differs from a standard DoS attack in two specific ways. - A standard DoS attack directly attacks a particular resource, such as a web server, email server or industrial control system (ICS) device. A DDoS attack targets the devices that provide access and connectivity to the servers and services on a network. - Another difference between DoS and DDoS attacks lies in the first “D,” which stands for distributed. That means the former comes from a single source, whereas the latter comes from a huge network of devices, which we call a botnet. The use of reflection, spoofing and distribution often makes thwarting DDoS attacks very difficult. It is common for even experienced IT pros to think the majority of DDoS attacks involve using large amounts of traffic. This is not the case. More than 99% of successful attacks use a very small number of malicious packets. Large-volume attacks (sometimes called volumetric) often gain attention because it is easy to explain how a massive amount of traffic has overwhelmed network resources. DDoS attack perpetrators have many motives. They can be politically or financially motivated. 
Nation-states have been known to conduct DDoS attacks as part of efforts to disrupt communications during military campaigns or as part of efforts to cause chaos worldwide. Some actors have no particular motivation at all. In any case, an attacker will deny the victim access to their servers, disable physical network equipment or simply wreak havoc. While no one is completely safe from DDoS attacks, critical infrastructures and centralized control systems are the most vulnerable. These industries should be the ones paying the most attention to DDoS attacks and investing the most in their cyber protection.

ICSs are vulnerable to DDoS attacks

ICSs are an integral part of our lives today. They allow for easier management of our most critical infrastructures and processes. Manufacturing, gas, water, power distribution and transportation all depend on ICSs to keep their processes running on a daily basis. What’s more, the emergence of the Industrial Internet of Things (IIoT) allowed users to automate some tasks in the process. We can now control everything simultaneously from a remote location. Of course, that improved workflow efficiency big time, helping us reach never-before-seen speed and accuracy. ICSs also have many cybersecurity issues. From weak passwords in internet of things (IoT) devices and open-source software, to using commercial communication protocols — ICSs have more than a few DDoS vulnerabilities. With so much operational equipment and so many ICS layers to audit, malware can easily sneak by manufacturers without getting noticed. That’s frightening, considering how much we depend on these systems and what’s at stake.

Anyone can execute a DDoS attack

In 2020, DDoS attacks were on the rise, partly due to the COVID-19 pandemic, which forced many sectors into digitalization. Unsurprisingly, hackers took this as an opportunity to cause disruption and earn some money on the side.
State-sponsored actors saw 2020 as an opportunity to disrupt business worldwide. As devastating as they can be for the target, DDoS attacks can be relatively easy to execute. With the emergence of booters/stressers, also known as botnets for hire, even those without any programming knowledge can carry out a successful DDoS attack. Many attackers are also enlisting long-existing botnets to help with DDoS attacks.

DDoS attacks are costly for the target

DDoS attacks are expensive for the victim, causing economic and reputational losses. According to Kaspersky’s 2017 report, the average cost of a DDoS attack for enterprises was around $2 million. However, years have passed and attacks have evolved and are now even more devastating. It’s fair to say this figure would be much higher today. Cost isn’t the only loss. Some things simply can’t be measured, such as brand reputation damage and loss of trust with clients and customers, among many other intangible effects. Aside from the resulting downtime and legal fees, a DDoS attack can be costly in many other ways, especially for ICSs. The energy, manufacturing and health care sectors, for example, are being increasingly targeted. An attack can stop all production and deny vital services and resources to millions of people. And shutting down crucial processes and equipment could potentially cause major, even fatal, incidents.

DDoS attacks are becoming more sophisticated

Recent technological advances have brought about efficiency in every possible way. Many people possess or benefit from multiple IoT devices, from everyday personalized gadgets and appliances to complex machines and robots that can build entire structures. However, as technology evolves, so do DDoS attacks. DDoS attacks are expected to become even more devastating as they deny network connectivity to our smart devices, rendering them useless. DDoS threat actors threaten to exploit various emerging — and emerged — technologies.
First of all, 5G and Wi-Fi 6 have made connection and communication between devices faster and smoother than ever. Of course, DDoS attackers took advantage of that, expanding their botnets at incredible rates. Artificial intelligence (AI) has found its way into the hackers' arsenal, as well. Today, they can automatically find, breach and hijack devices for their botnets. That is why Mirai, history's most notorious botnet, remains one of the biggest cyber threats to this day. DDoS attack tactics are also changing with time. Recently, hackers have been modifying their use of longstanding DNS amplification techniques. In short, this method allows them to magnify small queries and turn them into large traffic-hogging responses.

What users can do to prevent DDoS attacks

We must build better defenses: address the security concerns surrounding IoT devices, implement multilayer security solutions and closely monitor every single activity in the ICS. After all, not doing so could compromise our critical infrastructures. On the bright side, we already have what it takes to effectively fight DDoS attacks. All in all, the best way to fight a DDoS attack is to prevent it. That often involves using scrubbing services, increasing available bandwidth during attacks and using a content delivery network (CDN). It's important to have a detailed response plan in order to quickly stop attacks and mitigate the consequences as much as possible.

Dr. James Stanger, chief technology evangelist, CompTIA. Edited by Chris Vavra, web content manager, Control Engineering, CFE Media and Technology, email@example.com.
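The mitigations the article lists (scrubbing, extra bandwidth, CDNs) are typically complemented by rate limiting at the network edge. As an illustrative aside, not taken from the article, here is a minimal token-bucket rate limiter sketch in Python; the class name and parameters are assumptions chosen for the example:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows short bursts but caps
    the sustained request rate, a common building block in DDoS mitigation."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Replenish tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # request should be dropped or queued

# With a fake clock, a burst of 5 passes and the 6th is rejected.
t = [0.0]
bucket = TokenBucket(rate=1, capacity=5, clock=lambda: t[0])
results = [bucket.allow() for _ in range(6)]
print(results)  # → [True, True, True, True, True, False]
```

Real mitigation stacks apply the same idea per source address or per flow, usually in hardware or at the CDN, rather than in application code.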
The DOS world's need for memory grew, and the 64KB available to .COM executables was no longer adequate. The NE "new executable" file format was invented and uses the .EXE file extension rather than .COM. The first 2 bytes of these files include a tag at the front to identify the format, and if you guessed that this was "NE" to denote "new executable", you'd be incorrect. The first 2 bytes of every .EXE file are "MZ", famously because the name of the programmer at Microsoft who wrote the code was Mark Zbikowski. The EXE file format allowed multiple segments to be defined, and included the ability for separate compilation of portions of the program, and SDKs. That is, different parts of the program could be compiled into .OBJ files, and then a LINK step is performed to assemble the resulting EXE file. This enabled many good things, like separate compilation of varied portions of programs and the ability to purchase libraries of code from other developers without them having to provide source code. The NE format also permitted programs to be "large", occupying up to all the memory available on the DOS computers. I intended to write a detailed description of this evolution of file formats here, but there's no need; it's been well done in detail by others, and I provide links here. Cutting to the meat of it, the NE (MZ) format executable has these portions:
- Header
- Code
- Relocation list
The Header includes information for allocating a heap and a stack. One grows up, one grows down; when they collide, the application is out of memory. Notice that this is still DOS, so it isn't as if the operating system is going to do anything when the application runs out of memory. Still, the executable format is starting to grow into a real concept of an operating system, with a loader. The Code and Relocation list can use a bit more description, as there can be multiple code regions, each limited to 64KB (the size of a SEGMENT).
The executable is defined in segments, each of which is loaded into memory at a paragraph boundary (16-byte boundaries). The SEGMENT of that paragraph of memory can be addressed using the segment registers, as 16:16 segment:offset addressing converts to a physical address by shifting the segment left 4 bits and adding the offset. At this time in the life of Intel processors, there was no such thing as virtual addresses. The 8086 CPU is a pretty straightforward machine. Segment:Offset converts straight to physical, and when the CPU addressed it, the access actually went all the way to the ISA bus, where memory would respond. After loading each code segment into RAM, the DOS loader applies the fixup records so that code calling between segments can call the 16:16 addresses where the program segment is actually loaded at runtime. There is NO provision for DLLs or dynamic linking. This file format was the primary format for DOS computers through the long life of the DOS operating system, and it is still with us today. The modern PE file format includes an NE/MZ format executable as a "DOS stub" at the front. This is primarily so that programs intended for Windows 3.11 or OS/2 could display a message along the lines of "this program is intended to execute under Windows" and then the stub terminates. Creative programmers can use the DOS stub to run a DOS version of a program when on DOS and a Windows or OS/2 version of the program when on those operating systems. We're on a journey here; PC operating systems are starting to look like "real computers". The next post will take us into modern times of about 1990.
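The 16:16 address arithmetic described above is easy to sketch. The helpers below are illustrative (not from the original post): one computes the 20-bit physical address an 8086 would emit, the other checks a file header for the "MZ" magic bytes:

```python
def physical_address(segment: int, offset: int) -> int:
    """8086 real-mode address: shift the segment left 4 bits, add the offset.
    The result wraps to 20 bits, the width of the original 8086 address bus."""
    return ((segment << 4) + offset) & 0xFFFFF

def is_mz_executable(header: bytes) -> bool:
    """Every DOS .EXE starts with Mark Zbikowski's initials, 'MZ'."""
    return header[:2] == b"MZ"

# 0x1234:0x0010 -> 0x12340 + 0x10 = 0x12350
print(hex(physical_address(0x1234, 0x0010)))  # → 0x12350
print(is_mz_executable(b"MZ\x90\x00"))        # → True
```

Note that many distinct segment:offset pairs map to the same physical byte, which is why the loader's fixup records must rewrite far addresses to wherever the segment actually landed.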
A group of security researchers and computer scientists have recently uncovered a vulnerability in how a Diffie-Hellman key exchange is deployed on the web. Dubbed Logjam, the vulnerability affects home users and corporations alike, and over 80,000 of the top one million domains worldwide were found to be vulnerable. The original report on Logjam can be found here.

What is Diffie-Hellman?

Diffie-Hellman is used to establish session keys that are a shared secret between two communicating parties. Protocols like SSH (used for secure shell access) or TLS (a common protocol used to secure data on the web) can implement Diffie-Hellman session keys in order to transport data securely. Common examples of where Diffie-Hellman may be used include securing bank transactions, e-mail communication, and VPN connections, just to name a few.

What did the researchers discover?

Unfortunately, many implementations of Diffie-Hellman on web servers use weaker parameters. The researchers behind the Logjam attack found these web servers to be vulnerable, allowing an attacker to read or alter data on a secure connection. According to their report: "We identify a new attack on TLS, in which a man-in-the-middle attacker can downgrade a connection to export-grade cryptography. This attack is reminiscent of the FREAK attack, but applies to the ephemeral Diffie-Hellman ciphersuites and is a TLS protocol flaw rather than an implementation vulnerability." While much of the research is performed against a Diffie-Hellman 512-bit key group, the researchers behind the Logjam discovery also speculate that 1024-bit groups could be vulnerable to those with "nation-state" resources, suggesting that groups like the NSA might have already accomplished this. A comprehensive look at all of their research can be found here.

This all sounds great. What do I need to do?

You can use the link above to discover if your browser is vulnerable.
If it is, you should see an image like the one below. At the time of this writing, patches are still in the works for all the major web browsers, including Chrome, Firefox, Safari, and Internet Explorer. They should be released in the next day or two, so ensure your browser updates correctly once each is released. These updates should reject Diffie-Hellman key lengths of less than 1024 bits. In the meantime, you may want to use a virtual machine and avoid entering sensitive information into website forms. For those running web servers that implement Diffie-Hellman, make sure the key group is 1024-bit or larger. There is also a help page that can be found here.
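The key exchange the article describes can be illustrated with a toy example. The numbers below are deliberately tiny and insecure; real deployments use much larger groups (today at least 2048 bits), which is exactly why rejecting small export-grade groups defeats Logjam. This sketch is illustrative, not the researchers' code:

```python
import secrets

# Toy public parameters: a small prime p and generator g. Logjam works
# precisely because some servers still accepted weak 512-bit "export-grade"
# versions of these parameters.
p, g = 23, 5

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)   # Alice's public value
B = pow(g, b, p)   # Bob's public value

# Both sides derive the same shared secret without ever transmitting it.
secret_alice = pow(B, a, p)
secret_bob = pow(A, b, p)
assert secret_alice == secret_bob
```

An eavesdropper only sees p, g, A and B; recovering the secret requires solving a discrete logarithm, which is feasible for small groups (the heart of the Logjam result) but impractical for properly sized ones.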
This is the first entry in the Malware Injection Techniques article series that we will be writing. The ultimate goal of any malicious software is to be able to execute its code without any hindrance. In the past, security software such as anti-virus (AV) applications mostly relied (it is still one of the features today) on signature-based detections. This means that they use a known list of indicators of compromise (IOCs) – these may include specific attack behaviors or something as simple as a file hash – and compare them to the examined sample. This is the biggest limitation of signature-based detections: they can only detect known attack patterns. This very plain method has proved very inefficient these days. For instance, assume a trivial example in which AV relied solely on file hash detection [1]. An exceedingly minor change in the malicious code would render a whole new hash value for that same malware, thus making it look like unknown malware (since the new file's hash does not match any in the list of IOCs). And we are talking about changes as simple as renaming one of the variables. So, it was very straightforward for adversaries to generate new file hashes for the same malware, and the AV providers needed to play a cat-and-mouse game to keep up. This eventually led AV providers to develop more sophisticated tools. One of the most common terms we may spot today is heuristic analysis. Heuristic analysis executes the suspicious program in an isolated environment, where it monitors commands as they are performed and other activities related to the execution of the given file. However, it is still not perfect, as its effectiveness relies on experience: the AV engine still makes some sort of comparison to other known malicious behavior, but it is not limited to pre-defined patterns. Going back to malware injection, malware developers are now tasked with evading this detection.
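The brittleness of pure hash matching is easy to demonstrate. In this illustrative Python sketch (not from the article), changing a single character of a "sample" produces a completely different SHA-256 digest, so a hash blocklist no longer matches:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

malware_v1 = b"evil_payload(); count = 1"
malware_v2 = b"evil_payload(); count = 2"   # one-character change

blocklist = {sha256_hex(malware_v1)}        # IOC list of known-bad hashes

print(sha256_hex(malware_v1) in blocklist)  # → True: the known sample is caught
print(sha256_hex(malware_v2) in blocklist)  # → False: the trivially modified copy slips by
```

This is why behavioral and heuristic detections became necessary: any cryptographic hash is designed so that a tiny input change scrambles the entire output.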
One of the best evasive techniques is to abuse legitimate software or processes to execute malicious content. This is where malware injection techniques come in. The DLL injection technique is perhaps the most basic one to start with. The idea of classic DLL injection is to put the path of a malicious DLL into the address space of a legitimate process and then establish a remote thread in that process, which can control the execution of the injected code. So, how does it work? Let us examine a very basic POC of a classic DLL injection. The goal of this POC code is to inject a DLL into a Notepad process and execute it. Let's walk through it. First, we receive a handle to the LoadLibraryA function, which we will use to load our malicious DLL into allocated memory of the target process. Next, we obtain a handle to our Notepad process, which should be running. In our simple POC we just hardcoded the PID value of a running Notepad process. Note that the PID changes with each new execution. Following that, we allocate some memory in our target process, so that the LoadLibrary function will be able to inject our evil DLL into that process. The memory allocation should be the size of the evil DLL's path. Then we use WriteProcessMemory to write the DLL's path name to the allocated memory space of the targeted process. Finally, we use CreateRemoteThread to create a remote thread starting at the memory address where LoadLibrary was stored. We also provide the function with the memory address of the DLL path from earlier. And this is basically it. When this POC code executes, it should inject our evil DLL into Notepad's process and thereby execute the malicious code within it. Here are the downsides of this specific technique:
- The malicious DLL still needs to be saved on disk
- The malicious DLL will be visible in the import table
So, why not simply prevent all process injections? Because Windows and other applications use these mechanisms in legitimate ways.
For example, AV software itself uses this mechanism to inject itself into applications. Debuggers also inject themselves into applications to allow developers to troubleshoot their programs. For these (and many more) reasons, we cannot simply prevent all process injections from occurring, and that is why DLL injection still remains a relevant evasion technique. Depending on the quality of the AV tool, it is still possible to determine with high probability which injections are malicious. For instance, performing dynamic analysis in an isolated environment would reveal something like a Notepad process loading a random DLL from disk, typically not associated with the Notepad process. This could trigger suspicion and a reason to prevent further execution. Such anomaly-based detections are more advanced and often come with a trade-off: increased false-positive counts. This is the first entry in the series on Malware Injection Techniques. In the next entry we will continue with other types of injection techniques.

[1] Most of today's AV products do not check only the hash value.
All software you install or use presents a certain level of risk due to vulnerabilities (discovered and undiscovered) in the software itself. This article will discuss the common types of software vulnerabilities and what you can do to reduce the impact of those vulnerabilities on your organization. In its simplest definition, a vulnerability is the technical term we use to describe a weakness of some sort. All software has varying degrees of these weaknesses or vulnerabilities. Let's discuss a few of the most common vulnerabilities:

Remote Code Execution (RCE). An RCE is one of the scariest vulnerability classes. Software vulnerable to an RCE can allow remote attackers to control the system running the vulnerable software. Once that happens, the attacker has full control of the system and likely the network it's on.

Denial of Service (DoS). A DoS vulnerability indicates that a particular vulnerability can render the software, and possibly its server and network, unusable for some period of time. This is typically used in what are called logic-based DoS attacks, where the attacker exploits a company's DoS vulnerabilities to crash servers or processes.

Overflows. Overflow vulnerabilities are a very technical concept, but one of the most common types of vulnerabilities. An overflow occurs when an application tries to insert more data into memory than is allowed. When that happens, the data "overflows" and other data can be inadvertently exposed, modified, or deleted.

Injection. Most often seen in web applications, injection vulnerabilities (e.g. SQL injection, command injection) are a result of attackers inserting malicious code into inputs – like in a web form – and having the server execute that code. This often leads to data leaks, remote code execution, and denial of service.

Reducing the Risk

There are several ways we can reduce the risk that software vulnerabilities will affect your organization.
- Update your software
- Uninstall unnecessary software
- Do not expose software or services to the internet unless absolutely required
- Protect your web applications with a Web Application Firewall
- Train your software developers on security (if you develop in-house)
- Look up the software you run on a site like CVEDetails.com to see if you are at risk

You'll notice we're reducing the risks posed by vulnerabilities, not eliminating them. There will always be residual risk from all the software vulnerabilities that we don't know about yet. As with most things IT-related, this is an ongoing process of assessment and fixes that organizations must commit to in order to keep risk low. If you'd like more help with this, or any other topic, please reach out! We're here to help.
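The injection class described above is easiest to see in code. This hedged Python/sqlite3 sketch (illustrative, not from the article) shows how string-built SQL lets attacker input escape the query, and how a parameterized query prevents it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('admin', 1)")

malicious_input = "nobody' OR '1'='1"

# VULNERABLE: user input is concatenated straight into the SQL text,
# so the OR clause becomes part of the query and matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious_input + "'"
).fetchall()
print(vulnerable)  # → [('alice',), ('admin',)] - the injection leaked all rows

# SAFE: a parameterized query treats the whole input as one literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious_input,)
).fetchall()
print(safe)  # → [] - no user is literally named "nobody' OR '1'='1"
```

The same principle (never splice untrusted input into executable text) is what a Web Application Firewall approximates from the outside, and what developer security training teaches from the inside.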
Through ongoing advancements in healthcare technology, healthcare providers and patients can now access broader communication applications thanks to advances in data storage. These advancements have improved collaboration, increased communication outlets and have transformed the way data is stored and shared. Technology will continue to shape the way healthcare providers care for their patients. Now more than ever it is crucial to understand what healthcare information technology is and best practices for your business. Healthcare information technology (HIT) includes the storage, retrieval and sharing of healthcare data. Gone are the days of doctor visits being documented through paper records. All healthcare information is now managed electronically, so having state of the art hardware and software technology is more important than ever. Healthcare information technology also increases the lines of communication between a healthcare provider and a patient. Now a patient can share information directly with their provider in a safe and secure platform. This secure transfer of information allows the healthcare provider to gather important information and the patient to share with the doctor in real time their needs and concerns. HIT systems work with large amounts of data, and with a large amount of data comes an even larger sense of responsibility to protect that information. Electronic storage of health data has led to more affordable healthcare, fewer medical errors, and less tedious paperwork to be manually stored. While this has led to many advancements, it has also introduced an increased need for health data security. Healthcare providers are required to protect all healthcare information shared between the patient and the physician. HIPAA regulations dictate healthcare technology systems just as much as they do elsewhere within the health industry. 
The Health Insurance Portability and Accountability Act (HIPAA) is a federal law that creates a national standard for healthcare providers to protect sensitive patient health information, and it requires all providers to get patient consent to share health information. The HIPAA Privacy Rule allows individuals' health information to be properly protected while still allowing the information to flow to the proper team to promote high-quality healthcare. If you have been to the doctor recently, you have likely been exposed to technology that collects, manages, and protects your healthcare information. This technology may include your patient portal, any online healthcare education platforms, or telehealth phone calls that directly access your healthcare information. Access to this vital information can dictate how healthcare professionals choose to treat their patients, so it is vital that the information is both accessible and secure. Electronic health records (EHR) are the collection of one patient's health records over time. These records are typically stored in "the cloud," and this makes it easier for providers to access health information readily. EHRs allow doctors to keep patient health information in one location, so transfer and security of files is easy. Personal health records (PHR) are similar to EHRs, but patients have control over what information is shared with the provider. PHRs allow patients to record critical health information in real time. This may include how often they exercise, blood pressure logs and any other vital information. More patients are taking advantage of having their providers send their prescriptions straight to the pharmacy of their choice without having to worry about losing a paper prescription. Businesses should strengthen their security profile by requiring two-factor authentication.
Using two-factor authentication, employers can require knowledge, possession, or inherence factors for their employees to validate their identity.

Knowledge: Request information only the user would know. This can include usernames, passwords, or unique ID codes.
Possession: Request an item only the user would have. This can include a physical ID card, mobile phone verification or a security token.
Inherence: Request a unique characteristic of the employee; this can include a fingerprint.

Healthcare providers can implement a hybrid cloud solution to create a more secure and compliant environment to host EHR and PHR. These custom cloud solutions allow you to protect your patients' data, as well as meet regulatory requirements. It is critical that you work with a data center that is SOC 2 Type 2 certified and HIPAA compliant. By working with a managed service provider that offers service desk solutions, you will be able to reduce infrastructure and overhead costs. You will have a team of engineers that works 24x7x365, so you don't have to worry about downtime slowing down your healthcare operations. A managed IT service provider can offer constant monitoring and maintenance of your IT environment. With consistent support, you will be able to shield your business from the productivity loss associated with stop-and-start support. Information technology has transformed the way healthcare providers serve their patients. Below are a few benefits doctors and patients reap when proper information technologies are implemented. Electronic prescriptions and health records that are properly stored and shared directly with the pharmacy reduce the risk of duplication and allow for a second check on medications to ensure no medication that could cause an allergic reaction is ordered. Electronic prescriptions ensure the prescription is sent directly to the pharmacy without interception.
Healthcare data that is properly secured and easily shared between providers is the most useful. Properly installed IT systems allow providers to access and distribute necessary information to all providers across America. Patients also benefit from direct access to their provider and health information. Electronic medical records have greatly reduced the amount of physical paperwork required of patients. Through consistent tracking, EMRs keep track of your medical history and are constantly updated. This is beneficial to both provider and patient and reduces unnecessary time in the patient meeting. Electronic health records make it easy and simple to engage with patients. Through automated systems, you will be able to set up reminders for patients to schedule appointments, order prescriptions and share health information between provider and patient. As healthcare technology continues to advance, the need for a technology partner that understands the unique industry regulations is crucial. Healthcare managed IT service providers can help you with HIPAA, PCI DSS, and other industry regulations. Your patients trust you with private medical records. It is critical that you protect that information properly. Meet with a local healthcare managed service provider to learn more about products and solutions that you can implement to keep your healthcare organization secure. The average cost of a data breach in the United States is $8.64 million, the highest in the world, while the most expensive sector for data breach costs is the healthcare industry, with an average of $7.13 million (IBM). Forty-three percent of attacks are aimed at SMBs, but only 14% are prepared to defend themselves (Accenture).
Anybody can crack a password these days. Multi-factor authentication (MFA) is the answer. By using more than one layer of security to access your data, you can keep your business safe.

What is MFA?

Multi-factor authentication is the layering of security through two or more methods. Differing from single-factor authentication (SFA) (simply entering a password), MFA works on the following three principles:
- Something you know, such as a password.
- Something you own, such as a mobile device or an email address with which to receive a verification code.
- Something you are, including fingerprint or voice recognition.

My business already has a strong password policy – why should I consider MFA?

Because it is now an essential component of cyber security. Using a password alone is no longer a reliable method, regardless of its length or complexity. Cyber criminals now have the means to use software which tests billions of password combinations per second, based on words in the dictionary. If your password is only 6 lower-case characters, then it can be cracked through this method almost instantly, but even a complicated password can still leave you vulnerable. The concept behind MFA is complex, layered security – a hacker may be able to find out a password that you know, but they then also need to acquire something you own and something you are. Without all three (or more) factors, the account cannot be accessed. MFA offers enhanced security and a simplified login process. Single Sign-On (SSO) authenticates the user through MFA during the initial login process. This allows universal access to all of the software that uses SSO, without the need for repeated entry of credentials. You may be compelled to implement some form of MFA due to the European Commission's introduction of Payment Services Directive 2 (PSD2), which came into effect on 13th January 2018.
A key aspect of PSD2 is that two-factor authentication, as a minimum, is required to be in place by September 2019 for all electronic payments under €30 made online.

Won't MFA be disruptive for my staff?

No. It can add an extra step for accessing services if Single Sign-On (SSO) is not available, but the increased security this offers your business vastly outweighs any minor inconvenience. Regardless, MFA can now be implemented in a variety of ways:
- Text – After successfully entering your credentials, MFA-via-text functions by texting a short code to your mobile phone. This ensures that only you can authorise account access.
- Email – Similar to text authorisation, email MFA works by sending either a code or a verification URL to your unique email address. Presuming that you are the only individual with access to the email account, only you can activate the MFA code/URL.
- Push Notification – Push MFA sends a notification to your chosen device, informing you that access has been made to your account. A push notification will typically have accept/reject options.
- Authentication – Authentication tokens can be physical devices or software-based. They function by generating a unique code every 30 seconds. Entering this code after successfully entering your credentials provides a second line of defence and is unique to you.

To find out more about how multi-factor authentication can secure your business, call us on 01283 753 333 or drop us a line at firstname.lastname@example.org.
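The 30-second codes generated by authenticator tokens follow the TOTP algorithm (RFC 6238, built on HMAC). Here is a minimal, illustrative Python implementation; the key below is a test value from the RFC, not something to use in production:

```python
import hashlib
import hmac
import struct
import time

def totp(key, for_time=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key; at Unix time 59 the expected 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

Because both the server and the token derive the code from a shared secret plus the current time, a stolen password alone is useless without the device holding that secret, which is the "something you own" factor described above.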
Dangerous security bugs stemming from widespread inconsistencies among 16 popular third-party URL-parsing libraries could affect a wide swath of web applications. Eight different security vulnerabilities could allow denial-of-service (DoS) conditions, information leaks and remote code execution (RCE) in various web applications, according to researchers. The bugs were found in third-party web packages written for various languages, and, like Log4Shell and other software-supply chain threats, could have been imported to hundreds or thousands of different web apps and projects. Among those afflicted are Flask, Video.js, Belledonne, Nagios XI and Clearance. URL parsing is the process of breaking down a web address into its underlying components, in order to correctly route traffic across different links or into different servers. URL parsing libraries, which are available for various programming languages, are usually imported into applications in order to fulfil this function. Researchers explained: “URLs are actually built from five different components: scheme, authority, path, query and a fragment. Each component fulfills a different role, be it dictating the protocol for the request, the host which holds the resource, which exact resource should be fetched and more.” According to a combined analysis, security holes crop up thanks to differences in the way each library goes about its parsing activities. 
Across the 16 libraries, researchers identified five categories of inconsistencies in how these libraries parse components: - Scheme Confusion: A confusion involving URLs with missing or malformed Scheme - Slash Confusion: A confusion involving URLs containing an irregular number of slashes - Backslash Confusion: A confusion involving URLs containing backslashes (\) - URL Encoded Data Confusion: A confusion involving URLs containing URL Encoded data - Scheme Mix-ups: A confusion involving parsing a URL belonging to a certain scheme without a scheme-specific parser The problem is that these inconsistencies can create vulnerable code blocks, thanks to two main web-app development issues: - Multiple Parsers in Use: Whether by design or an oversight, developers sometimes use more than one URL parsing library in projects. Because some libraries may parse the same URL differently, vulnerabilities could be introduced into the code. - Specification Incompatibility: Different parsing libraries are written according to different web standards or URL specifications, which creates inconsistencies by design. This also leads to vulnerabilities because developers may not be familiar with the differences between URL specifications and their implications. As an example of a real-world attack scenario, slash confusion could lead to server-side request forgery (SSRF) bugs, which could be used to achieve RCE. Researchers explained that different libraries handle URLs with more than the usual number of slashes (https:///www.google.com, for instance) in different ways. Some of them ignore the extra slash, while others interpret the URL as having no host. In the case of the former, accepting malformed URLs with an incorrect number of slashes can lead to SSRF. URL confusion is also responsible for the Log4Shell patch bypass, because two different URL parsers were used inside the JNDI lookup process. One parser was used for validating the URL, and another for fetching it. 
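The slash confusion above is easy to reproduce with Python's standard-library parser. This illustrative sketch shows how urllib.parse treats a triple-slash URL as having no host, exactly the kind of disagreement that becomes an SSRF when a second parser decides "www.google.com" is the host:

```python
from urllib.parse import urlparse

normal = urlparse("https://www.google.com/search")
weird = urlparse("https:///www.google.com")      # extra slash after the scheme

print(normal.netloc, normal.path)  # → www.google.com /search
print(repr(weird.netloc))          # → '' : urllib sees NO host at all
print(weird.path)                  # → /www.google.com : the "host" became the path

# A validator using this parser might conclude the URL targets no host and
# allow it, while a fetching library that ignores the extra slash happily
# connects to www.google.com - an SSRF-style parser mismatch.
```

The practical lesson from the research is to run every URL through one parser, once, and make all later decisions from that single parsed result.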
Open-redirect vulnerabilities are popular for exploitation because they enable spoofing, phishing and man-in-the-middle (MITM) attacks. They occur when a web application accepts user-controlled input that specifies a URL the user will be redirected to after a certain action. When a user logs into a website, for example, they could be redirected to a malicious lookalike site. Neuways advises users to be wary of any applications they use. By ensuring applications are updated with the most recent security patches, issues such as those described above can be avoided.
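One common mitigation for open redirects is to validate the redirect target against an allow-list before issuing the redirect. A minimal sketch, not from the research itself, and with a purely hypothetical allow-list:

```python
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"example.com"}  # hypothetical allow-list

def is_safe_redirect(url: str) -> bool:
    parts = urlsplit(url)
    # Absolute URLs (and protocol-relative "//host" URLs) must point
    # at an allowed host; purely relative paths stay on-site.
    if parts.scheme or parts.netloc:
        return parts.netloc in ALLOWED_HOSTS
    return True

print(is_safe_redirect("/account/home"))           # True
print(is_safe_redirect("https://evil.com/login"))  # False
print(is_safe_redirect("//evil.com/login"))        # False
```

Note that this check is only as trustworthy as the parser behind it, which is exactly the point of the research above: if the framework that performs the redirect parses the URL differently than the validator, the allow-list can be bypassed.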
Washington — Environmentalists and industry experts expect the first federal standards for waste generated from coal burned for electricity to treat the ash more like household garbage than a hazardous material. The EPA is expected to issue a rule Friday, ending a six-year effort that began after a massive spill at a Tennessee power plant in 2008. Since then, the EPA has documented coal ash waste sites tainting hundreds of waterways and underground aquifers in numerous states. Environmentalists wanted coal ash to be classified as hazardous, which would put Washington in charge of enforcement. The coal industry fought back, citing costs and a damping effect on the recycling market; about 40 percent of coal ash is reused. By putting it in the same category as trash, the rule would leave enforcement of the standards to citizens and the states.
The Internet is a vast and amazing place. Some have even argued that it is one of people’s best-ever inventions. Some would push it further by actively attempting to outline what rights an Internet user has. Advocacy groups have been popping up, and while their work has had a marked effect on public policy in more progressive nations, some nations look on these groups with disdain. Today we will take you through human rights advocacy on the Internet, and what to expect going forward.

Initially, the advocacy of Internet rights was just that: the right to have access to the Internet. While this isn’t a problem for as many people as it once was, some places still don’t have fair, affordable access to high-speed Internet service. Some nations, despite providing access, have Internet laws that subdue use through overarching censorship. This issue, and the monetization of collected consumer data, are two of the hot-button issues today for Internet rights advocates.

The Internet is a relatively new technology, especially in the manner it is being used by people today. As a result, there are different views on how these technologies are disseminated, who profits from them, and how non-controlling entities have their rights repressed. You’ll find that from the early days of Internet rights advocacy, the largest voices were from organizations that found the equitable portion of the Internet either unnecessary or repressive to the rights of consumers. Notice that access to the Internet was not even on the roadmap. The nature of the early commercial Internet was such that it could fairly be described as libertarian. Through the end of the 1990s, as the first round of dot-com investments started to tank, it became obvious that the technology would end up bigger than anyone had anticipated and needed regulation. In the U.S., many fights have been undertaken in the subsequent 20 years, many of them pushed by Internet rights advocates.
One of the most famous is Reno v. American Civil Liberties Union (1997). In an attempt to clean up what some people considered indecent content on the Internet (pornography and the like), and more accurately, to keep kids away from this content, Congress passed the Communications Decency Act. The ACLU, a well-known civil rights advocacy group, filed suit. The provision was struck down by two federal judges before being heard by the Supreme Court, which upheld the lower courts’ rulings. This was a major blow against censorship, paving the way for free expression on the Internet. While the ACLU isn’t exactly an Internet rights advocate, the landmark case ushered in a new world of free speech on the Internet, and it set the tone for Internet rights advocates to this day.

Today there are many organizations looking to protect people on the Internet. Sometimes their views overlap, and sometimes they don’t. One of these groups, the Electronic Frontier Foundation (EFF), is a major player in the fight to keep speech (and content) free from censorship on the Internet, the fight against the surveillance state, and most notably, the ongoing fight for individual privacy. Businesses of all kinds, as well as government agencies, have grown to take significant liberties with people’s personal information. Organizations like the ACLU and the EFF work tirelessly to get the topic of personal data privacy in front of decision makers.

Have you ever wondered how you just had a conversation with your friend via some app about fingerless gloves, and now the sidebar on every website is filled with fingerless glove ads? Most users don’t fully understand that organizations they interact with online keep a profile on them. All of your actions, any personal or financial information that you share, and more is stored in a file that is often packaged and sold off by those organizations to advertising firms.
These advocates, among the other issues they stand up for, are trying to push the issue of personal data privacy. The main point of contention is that companies profit off of the information people provide, and since this information is very clearly personal in nature, it is their belief that individuals are being taken advantage of. This debate has been ratcheted up significantly with the European Union’s General Data Protection Regulation (GDPR), which intends to protect individual information. While it might just be a matter of time before the U.S. gets a data privacy law in the same vein as the GDPR, Internet rights advocates will continue to act in the public’s favor on this issue, and many others.

Net Neutrality & Access to All

One of the biggest fights that Internet rights advocates are undertaking is against the companies that deliver the Internet itself: the Internet service providers (ISPs). For those of you who don’t know, over the past several years the U.S. Government created mandates that forced ISPs to provide access to applications and content without favoring any, even those that use the most bandwidth. The theory is that the typical Internet user only does so much on the web: they typically access the same sites and use their Internet connection for the same things. This creates a situation where ISPs, making market adjustments, would want to charge more per byte than if users spread the same activity across a variety of sites. With federal control, they were forced into charging a flat rate. The net neutrality laws that were instituted in 2015 were repealed in 2017, as controlling bureaucrats argued that there were enough people without fair access to the Internet, and that the only way to persuade the ISPs to invest in infrastructure that would curb this problem was to repeal the net neutrality laws. Needless to say, this caused quite a stir.
Internet rights advocates were quick to point out that investment in infrastructure is in these ISPs’ best interest, and that giving them the ability to slow down Internet speeds as they see fit is not good for consumers. Unfortunately for most Americans, these ISPs are the companies you have to get your Internet service from if you want speeds that allow you to use it the way you want. Advocates are still trying to do what they can to educate people about the benefits of net neutrality, and have set up websites with information and ways for people to give their support. Organizations like the aforementioned ACLU and EFF, the American Library Association, Fight for the Future, Demand Progress, and Free Press Action currently sponsor www.battleforthenet.com, a one-stop site for all things net neutrality.

Advocacy can go a long way toward giving a voice to people who may not think they have one. What Internet-related topics do you find to be problematic? Leave your thoughts in the comments and subscribe to our blog.
A lot has been said over the years about the best ways to protect your machine from attacks and malicious code. But where do those recommendations intersect with ways to protect your friends from attacks? By failing to protect your own data, you’re sometimes putting them at risk as well. Here are a few ways people end up mindlessly spreading the malware love.

1. Neglect to Scan That File Before You Share It

That spreadsheet you shared with your friends to organize a summer beach trip could end up bringing with it some unexpected cooties. But a quick once-over with an up-to-date antivirus scanner will help keep your trip relaxing.

2. Pick Up Abandoned USB Keys and Use/Share Them

Would you pick up and use a comb someone dropped in the parking lot? Probably not – who knows what sorts of grossness could be lurking on it! But not everyone is so fastidious about their digital hygiene. A shocking number of people in one study picked up a “lost” USB drive in a parking lot. Ew. Even experts are not immune. And then to share it with your friends? Totally uncouth. USB sticks are considered an infection vector unto themselves, as many Windows-based threats will attempt to run automatically when the drive is inserted. While it may not affect you, it may get your friends.

3. Click on Every Stupid Link on Facebook

OMG, your best friend from the 3rd grade just posted something that offers a free ticket to a tropical, sunny location just for clicking on a link! Who could it possibly harm to try it? Sometimes those scams come with more than you bargained for, and you could in fact be putting your friends’ data up for grabs by clicking that link. Be skeptical of links that seem shocking or potentially scammy. Ask your friend if they intended to post the link if you really feel inclined to click.

4. Fall for Phishing Scams

It’s tough when phishing emails are getting increasingly sophisticated and adept at making scary claims about what will happen if you don’t click that link. But it’s always a good idea to verify before you trust. Since the aim of phishing is in part to steal your contact data so the scammers can hit your friends too, there’s more on the line than just your own data. If you receive an email from any of your accounts (social, financial, or otherwise) saying that you need to click a link and access your account, you can indeed check your account to be safe, but never do it via a link in email. Go directly to your browser and type in the address for the site.

5. Use a Weak Password on Your Email/Social Networking Accounts

This is much the same idea – your password doesn’t just protect access to your account, but access to your friends’ data as well. Choose unique, strong passwords and change them often, or just use a password manager that will do the heavy lifting for you. (After all, the most secure password is the one you don’t know.)

6. Break into Your Neighbors’ WiFi

It’s tempting, as more and more people get WiFi routers at home, to simply poach your neighbor’s bandwidth and save yourself a few bucks a month. But you really don’t have any idea what their level of protection is. I attempted this once on a sacrificial research machine, for the sake of curiosity and science, and the machine was infected almost immediately. That blew even my jaded, professionally paranoid mind. If you then have friends over who connect to your network, you could be putting them at risk, too.

7. Install Pirated Software on Your Friends’ Computer

Oh, the digital hygiene horror! This is the InfoSec equivalent of having a dinner party to share your “freegan,” dumpster-diving haul. It’s one thing to take your chances with your own intestinal tract or computing device, but it’s another thing entirely to share that with your friends. Warez is a popular way for malware authors to spread their wares, as many people still believe you can get something for nothing without realizing the potential consequences.

8. Be Lazy About Updating Your WordPress Installation

Your friends love your blog about designer dog sweaters, but it’s not yet caught on with the general public. So who needs to get around to updating it with the latest and greatest WordPress version? It’s precisely that problem (okay, maybe not the dog sweater part) that led to the explosion of Flashback. Lots of people with old blogs got compromised, and their friends and fans paid the price.

It only takes a little thought and effort to avoid common ways of spreading malware. The investment is far less than it would take to write sincere, contrite apology emails to your friends and family members who had to deal with the virtual crud they got from you.

- Are You Sabotaging Your Own Security Efforts?
- 6 Ineffective Ways to Protect Yourself Against Online Attacks
- Top 5 Ineffective Ways to Protect Yourself From Government Surveillance

USB stick photo credit: Count_Count via photopin cc
Facebook scam screenshot via CNET
phishing image photo credit: ivanpw via photopin cc
skull and crossbones photo credit: ☺ Lee J Haywood via photopin cc
steal wifi photo credit: dana~2 via photopin cc
I’m sure IT administrators always ask these questions: Do I really need an anti-malware solution? Is there real value in investing in an anti-malware solution for personal or business use? What does malware seek to accomplish? What are the real dangers of malware infecting my computer or network?

Danger 1 – Denial-of-Service (DoS) Attacks: When PayPal and MasterCard withdrew their support for Wikileaks.org, the hacker group Anonymous orchestrated DoS attacks on their websites, bringing them down temporarily. In a DoS attack, the attackers activate malware across millions of infected computers to flood a website with traffic, shutting it down by overloading it.

Danger 2 – Identity Theft: Your identity is very valuable to hackers. Using your name, address, phone number, login IDs, passwords, tax information, photographs, social networks, and banking information, alternate identities can be created. These identities are used not only to apply for credit cards in your name, but also to create fake passports and state IDs. These fake IDs are used by criminals to evade authorities. Without a fingerprint or iris scan, it is very easy to reuse your identity.

Danger 3 – Breaking Encryption Illegally: Communications between financial institutions and government departments are encrypted for security. Tremendous processing power is required to break the encryption, so hackers tap millions of infected computers across the world for their processing power. Once the encryption is broken, data can be stolen easily.

Danger 4 – Espionage: Stealing unauthorized information is one thing; transmitting the information to the recipient is another challenge. This is where computers infected with malware come in: unauthorized data is passed on via thousands of infected computers.

Danger 5 – Cyber War: Governments across the world are building cyber war teams for cyber-offense as well as cyber-defense. What better way to attack the digital assets of a country than to use computers located in that country? Computers infected by malware are often used for this purpose.

If your computer is infected with malware, it can be used for all of the above illegal activities. How do I know if my computer is infected with malware?

- Hard disk – Do you have a high level of hard disk activity even when your computer is idle?
- Memory – Is your memory used by unknown services?
- CPU – Do you see spikes in CPU usage even when your computer is idle?
- Network – Is there high network activity even when all programs are closed?

If the answer to any of the above questions is yes, malware is degrading your computer. If you want to protect yourself from the dangers of malware, it is best to install a powerful anti-malware solution. Unless, of course, you want your computer to be the hub of illegal activity!
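The symptom checklist above can be partially automated. As an illustrative, Linux-only sketch (not a substitute for a real anti-malware product), the following reads /proc/stat twice to estimate how busy the CPU is during a window when the machine should be idle:

```python
import time

def cpu_busy_fraction(interval=1.0):
    """Fraction of CPU time spent non-idle over the interval (Linux only)."""
    def snapshot():
        # First line of /proc/stat: "cpu user nice system idle iowait ..."
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait
        return sum(fields), idle

    total1, idle1 = snapshot()
    time.sleep(interval)
    total2, idle2 = snapshot()
    dt, di = total2 - total1, idle2 - idle1
    return 1 - (di / dt) if dt else 0.0

# Sustained high values while no programs are running would be one of
# the red flags listed above -- though on its own it proves nothing.
print(cpu_busy_fraction(0.5))
```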
As part of our ongoing celebration of the 20th Anniversary of Entrust’s Public Key Infrastructure (PKI), we’re looking back in a four-part series on the pioneers, processes and events that have shaped this ever-evolving technology.

In the Beginning: The Information Revolution

Since the very earliest days of the information age, secure data networks – including those carrying highly sensitive military and diplomatic cables – were based on a hub-and-spoke topology. Encryption was used on each of the links in the network and the switching nodes were RED, meaning that messages were “in the clear” in the switching centres where they were transferred from one link to another. Back then, the only encryption algorithms available were “symmetric,” which meant that the keys used to encrypt and decrypt a message were the same. Each link had a different key, and keys were commonly changed daily, with the change-over having to be carefully coordinated at each end of each link. A large, centralized organization of highly trusted, trained staff was necessary to generate, distribute, install, retrieve and destroy key books for every hub and terminal in the network. Secure couriers and diplomatic bags were the most common means of transporting keys. Partly because of the cost of key management, and partly because sales of encryption products were strictly controlled, encryption was little used outside of government.

Starting in the 1970s, computer technology underwent significant changes. Machines were becoming computationally more powerful, consuming less electricity, getting smaller and plummeting in price. It became possible to interconnect machines within a computer room or across a small campus. Networking approaches based on token-ring and collision-based bus architectures were fighting for supremacy. Finally, the bus approach came to dominate – first with proprietary protocols, then with the multi-vendor standard, Ethernet.
In the wide area, national and international research networks were being connected by circuit-switched leased lines. But in the telephony world, advances were being made in automated switching systems, quickly followed by reports of successful hacking incidents. For the time being, though, data networks remained relatively immune from attack, as they were protected by physical security measures and trusted telecom providers.

Packet-Switching and the Key Distribution Problem

The telecom industry developed a wide-area packet-switching standard in the CCITT’s X.25. This, coupled with deregulation in the telecom industry, opened the door to cost-effective data networking based on packet-switching technology. Meanwhile, in the financial services sector, cash-machine networks were appearing and banking mainframes were being connected across countries and continents. The value of assets being entrusted to commercial data networks made them attractive to criminal organizations, so commercial-grade cryptography was needed to protect data integrity. But governments treated encryption technology as a munition for purposes of export licensing, and export licenses were generally granted only for financial applications using approved algorithms. IBM, with assistance from the US National Security Agency, developed the DES algorithm, with a cryptographic strength of 56 bits. The US Government National Bureau of Standards (later the National Institute of Standards and Technology) published it as a standard, and it was adopted around the world for use in financial applications.

Measuring Cryptographic Strength

Cryptographic strength measured in bits is the logarithm (base 2) of the number of operations required to defeat the algorithm. Logarithmic measures are familiar to us from the Richter scale for earthquake magnitude, where a one-step increase in the scale represents a ten-fold increase in strength.
For scales measured in bits, a one-bit increase in the scale represents a doubling. Therefore, a cryptographic strength of 56 bits represents roughly 7 × 10^16 (about seventy thousand million million) operations. Interestingly, the details of the operation itself don’t come into the calculation – only the number of them. For a well-designed symmetric algorithm, the cryptographic strength is the same as the key size, meaning that the best-known method of attacking the algorithm is an exhaustive search of the key space. The same is not true of asymmetric algorithms, where better attacks than exhaustive searches exist. So in these cases, the key size is always greater than the design cryptographic strength.

The cost of managing symmetric keys on a link-by-link basis and the cost of providing trusted RED switching nodes for packet-switched data networks was going to be unacceptable for commercial applications. As a result, protection would have to be provided at the transport layer, resulting in end-to-end security and permitting BLACK “untrusted” switching nodes. This is where symmetric cryptography runs into a problem not solved until the development of the Needham-Schroeder protocol.

In 1978, Roger Needham and Michael Schroeder designed a solution using symmetric techniques that could scale up to large networks. But even this new approach had limitations: since confidential key material had to be communicated between each node and the central key distribution centre, the set-up procedure was onerous and expensive. And while their design eventually achieved widespread adoption for securing individual administrative domains as the Kerberos protocol, it failed to find acceptance for interconnected domains. That meant a newer, better solution was still needed. Luckily, there was someone already working towards it.
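The logarithmic strength scale described in the sidebar above can be checked with a few lines of arithmetic (a sketch; DES's 56-bit figure is from the text):

```python
import math

# Strength in bits = log2(number of operations needed to defeat the scheme).
def strength_bits(operations: int) -> float:
    return math.log2(operations)

des_operations = 2 ** 56          # exhaustive search of a 56-bit key space
print(f"{des_operations:.1e}")    # on the order of 7.2e+16 operations

# A one-bit increase doubles the attacker's work, as the text says.
print(strength_bits(des_operations))      # 56.0
print(strength_bits(des_operations * 2))  # 57.0
```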
Seeing and then identifying objects is the foundation for all learning and cognition. This is true both in human development and in the development of training data for computer vision models. In order to explore the pros and cons of quality metrics like pixel tolerance and Intersection Over Union (IOU), it helps to zoom out a bit and ask ourselves: why do we need these two quality metrics in order to get high-quality training data?

The Importance of Precise Object Localization

Your human mind is absolutely extraordinary. Think about the ease and rapidity with which you learned to identify and classify objects:

- You were processing and recognizing familiar faces within days of your birth.
- You were around three months old when you first started to recognize a favorite teddy bear or chew toy.
- By nine months, you could see a picture of an object and make the connection between the representation and the real thing.

The purpose of data annotation for computer vision is to teach a model how to identify and classify things. The human mind “annotates” effortlessly – we see something and we identify it (with varying degrees of specificity and accuracy based on prior experience). So far, not that dissimilar from how a model learns. However, even if we aren’t sure what something is, we don’t have any trouble seeing that there is something in front of us and that the thing has a distinct shape and size that distinguishes it from other objects. Except at great distances or in visually chaotic situations, the average human with good eyesight does not struggle to understand the boundaries between one thing in our field of vision and another – what we could call precise object localization. Precise object localization can be more challenging for machine learning models.

How Do We Localize Objects in Data Annotation for Computer Vision?

We put a bounding box around it!
If you are picturing the kindergarten workbook where you have to draw a circle around the apple, you’re not far off. However, unlike the kindergartner, the human annotator or computer vision model must draw the box with precision. The first steps of annotation are always about identifying 1) that an object exists and 2) that the object occupies a discrete space in the frame (e.g., by putting a bounding box around it). Pixel tolerance and IOU are our best tools for measuring how precise our object localization is. They allow us to measure the quality of bounding box placement by measuring the difference between a known correct answer (called an authoritative answer, or sometimes a ground truth or gold answer) and an answer being tested (provided by a human worker or generated by a model). Now, let’s talk about the differences between these two localization metrics and explain why IOU is the more robust metric.

So, Which Is Better? IOU or Pixel Tolerance?

They are both useful, but we prefer the ratio metric (IOU) to the difference metric (pixel tolerance). Ratio metrics measure difference using proportions, while distance metrics measure the difference between ground truth answers and other provided answers as an absolute difference on a specific scale (such as pixels in the frame). Both ratio and distance metrics are important depending on what you are measuring, but using a ratio metric can provide a higher degree of accuracy across a wider variety of cases because it is scale-invariant and domain-agnostic. Intersection over union is the standard metric used in the machine learning discipline for bounding boxes and other shapes because it applies well to both large and small shapes. It also correlates easily to a distance metric like pixel tolerance. The scale invariance of IOU means that it requires tighter pixel tolerances for small shapes and allows larger pixel tolerances for large shapes.
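The scale-invariance point is easy to see in code. A minimal IOU implementation for axis-aligned bounding boxes (a sketch, not Alegion's production metric):

```python
def iou(a, b):
    # Boxes as (x_min, y_min, x_max, y_max).
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# The same 2-pixel shift is severe for a small box...
print(round(iou((0, 0, 10, 10), (2, 0, 12, 10)), 3))      # 0.667
# ...but barely matters for a large one. A fixed pixel tolerance
# would treat these two errors as equally bad; IOU does not.
print(round(iou((0, 0, 100, 100), (2, 0, 102, 100)), 3))  # 0.961
```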
Curious about Getting High-Quality Training Data for Your ML/CV Project?

This is where we specialize. We have a whole guide that walks through the four prioritized phases to quality training data, explains the significance of specific metrics for quality, and discusses how our Customer Success team partners with you to ensure the quality you need. We’ve established quality management best practices and a reliable pipeline for quality training data based on our experience labeling tens of millions of images, video, text, and audio records alongside our customers. At Alegion, it’s not just the platform that supports your quality training data needs, it’s our whole team of experts, dedicated to your success. Every CV team needs high-quality training data; let’s use our expertise to get you there, together. Reach out to us today and request a demo to learn how we can help you avoid bias in your training data.
The protection of your business’ computing assets is a bigger deal today than ever before, because there are dozens of ways that things could go wrong. One tool that many IT administrators like to use is called Active Directory, a feature found on most Microsoft Server operating systems that allows administrators to control users. This month, we take a look at Active Directory.

The first thing you should know about Active Directory is that there isn’t a static plan that can be used by every business. We will go over some of the best practices, but keep in mind that you need to configure your Active Directory settings to fit your business’ needs. If your business is coming from a situation where it doesn’t have any system in place, Active Directory is a great place to start.

Nobody Needs to Be an Administrator

When someone logs into a business’ domain server, they use their account, which by default is centralized in Active Directory. This alleviates the need for a central IT admin to log in and set administrative privileges, and works across the network from the server to the endpoint to keep a business safe. After all, if people who don’t need access to certain information don’t get access, nothing is lost.

This is called the least-privilege administrative model. It works like this: each user has the minimum permissions needed to complete their work. You can always elevate access temporarily if needed. Otherwise, if a user gets a virus, that virus will have the same access the user does, and could do a lot more damage because the user had access he or she didn’t need in the first place. The virus has the capability to spread across the network, whereas if the user’s permissions were locked down, the virus would only have a minimal impact. This means that everyone on the network, including the business owner, the employees and the IT staff, logs in as a regular non-administrator to do their normal day-to-day work.
If they need administrative control, they can log in with a separate admin account. You will want to keep the credentials to that administrative account safe and protected.

Force Strong, Complex Passwords and Set Password Expirations

Most people aren’t able to memorize complex passwords. Some can’t even create them. Unfortunately for everyone, the people who want to break into computing networks have tools that are extremely proficient at guessing passwords that aren’t complex enough. You will want to ensure that your staff has learned the value of using a passphrase. Instead of combining a string of words that makes sense, stringing together multiple random words is actually more secure. Keep in mind, the words need to be truly random. Here are quick examples (illustrative only – never reuse a passphrase you’ve seen published):

Bad passphrase examples: mydogrexisgreat, ilovenewyork2015
Good passphrase examples: cactus-umbrella-fidget-lantern, orbit-pretzel-walrus-chalk

Back to Active Directory: you should require passwords to be long – at least 12 characters – and settings should lock a user out after three failed attempts. Forcing passwords to expire every month or two is a good strategy to ensure that password security is maintained.

Delegate Permissions to Security Groups, Not Individual Accounts

When we go in and audit a new prospect’s network, we often find that they have assigned permissions to individual accounts rather than using security groups. As your organization grows, this can present problems with controlling access. Keeping track of who can see what using security groups is a much better and more organized option.

Use LAPS (Local Administrator Password Solution)

Inside Active Directory, there is a feature called the Local Administrator Password Solution (LAPS). It allows Active Directory to manage the local admin accounts on each individual PC on a given computer network. Since the local administrator has full control over the machine, it is definitely not an account you want compromised.
A common practice among IT professionals is to deploy a Windows image to each machine on a business’ network to save time. After all, setting up every computer individually takes far longer than doing it globally across the network. The way this works is that your IT administrator takes a pre-built clone configuration, including the operating system, most of the software, and the settings your company’s IT admin has agreed on, and sends it out to each new system. Unfortunately, this image-based deployment also carries over admin accounts and passwords. LAPS solves this by assigning each device its own unique password that is controlled through Active Directory. It’s one of the best free and simple solutions for protecting your network against lateral threat movement from device to device.

Document Everything, and Schedule Reviews and Clean-Up Sessions

Many organizations go round and round deliberating over permission groups and who should have access to what, only to be seriously confused when examining those permissions a year later. The key is to document everything: which groups have access, network permissions, exceptions, and so on. Scheduling regular audits of your Active Directory settings, having everything clearly defined, and keeping it routinely updated will make managing your computing resources that much faster and less problematic.

Active Directory is the Backbone of Issue Monitoring

Since Active Directory is used to manage every user and device on your business’ computing network, it can log information and report on potential issues. Our technicians actively use this data to catch potential problems early, often resolving them before they affect a business in any noticeable way.
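The per-device password idea behind LAPS can be illustrated with a short sketch. This is not how LAPS itself is implemented (LAPS stores the passwords in protected Active Directory attributes); it only demonstrates why unique local-admin passwords block lateral movement:

```python
import secrets
import string

# Illustrative sketch of the idea behind LAPS: every device gets its own
# random local-admin password, held in a central store. NOT the real LAPS
# implementation; device names and the store are hypothetical.

ALPHABET = string.ascii_letters + string.digits + string.punctuation
central_store = {}  # device name -> current local admin password

def rotate_password(device: str, length: int = 20) -> None:
    """Assign a fresh random password to one device."""
    central_store[device] = "".join(
        secrets.choice(ALPHABET) for _ in range(length))

for device in ["PC-001", "PC-002", "PC-003"]:
    rotate_password(device)

# Compromising one machine no longer reveals the admin password of another:
assert len(set(central_store.values())) == len(central_store)
```

Because each password is independent, an attacker who extracts the local admin credential from one workstation gains nothing usable on its neighbors.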
Here are just a few things that Active Directory lets you monitor and report on:
- Group permission changes
- Account lockouts
- Antivirus being disabled or removed
- Logons and logoffs
- Spikes in bad password attempts
- Usage of local administrator accounts

Additionally, IT professionals can aggregate Windows Event Logs to provide information about each machine’s health.

Get Your Network Assessed

Admittedly, Active Directory is a far broader and more powerful resource than we have room to cover here, and it is often improperly configured when we go in to do network audits, meaning that it is being underutilized. If you would like to see how you are currently using Active Directory, call COMPANYNAME today and we will assess your business’ computing infrastructure and build a report on any security issues or misconfigurations we find. We can do this discreetly, so as not to cause your current IT administration team any undue consternation. Call us today at PHONENUMBER to learn more.
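As an illustration of the monitoring described above, the sketch below scans a handful of made-up logon events for a spike in bad password attempts. Windows records failed logons as event ID 4625 (and successful logons as 4624); everything else here, including the sample records and threshold, is invented:

```python
from collections import Counter

# Sketch of log-based monitoring: scan failed-logon events (Windows event
# ID 4625) and flag accounts with a suspicious number of bad password
# attempts. The sample records and threshold are made up for illustration.
events = [
    {"event_id": 4625, "account": "jsmith"},
    {"event_id": 4624, "account": "jsmith"},      # 4624 = successful logon
    {"event_id": 4625, "account": "svc-backup"},
    {"event_id": 4625, "account": "svc-backup"},
    {"event_id": 4625, "account": "svc-backup"},
    {"event_id": 4625, "account": "svc-backup"},
]

THRESHOLD = 3
failures = Counter(e["account"] for e in events if e["event_id"] == 4625)
suspicious = [acct for acct, n in failures.items() if n >= THRESHOLD]
print(suspicious)  # ['svc-backup']
```

In practice, a monitoring platform consumes these events continuously and alerts a technician rather than printing a list, but the underlying logic is the same counting-and-threshold pattern.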
As I discussed in my previous musing, I recently had to deal with a wildfire near my home in the mountains of California. As many of you know, global issues have created environmental conditions conducive to various natural disasters, not the least of which is wildfires. Over the last few decades, there has been a steady increase in global temperatures and, in the case of California, extreme drought. In turn, the lack of water leads to the death and consequent drying of vegetative growth, and when combined with extreme heat (the temperature on the day the Oak Fire started was 108F/42C), all it takes is a little spark to start a wildfire, which quickly grows out of control. Temperatures in many other countries have now also reached these scorching levels, and Europeans are beginning to deal with forest fires. Such fires are part of California's natural cycle of life, but they are not typical in many European countries. What makes forest fires particularly disastrous today is that places once only inhabited by trees and wildlife are now increasingly inhabited by humans and all their possessions (houses, cars, and so on). Mixing in the artificial materials creates conditions that take a forest fire from being a natural ecological event to an absolute disaster. So, let's look at this in terms of the networked world. There was a time when the networked world was sparsely populated. A cybersecurity issue was generally a nuisance rather than a potential disaster. Now, a cybersecurity event has the potential to snowball into a catastrophe. The risk has increased with the immense growth in the network population. Moreover, the ability of a cybersecurity “firestorm” to spread quickly and impact unforeseen systems has now become a genuine concern. This is very important to consider because, with the explosion of networked devices, many approaches to addressing cybersecurity have not grown to match the potential for a large cybersecurity disaster.
Organizations that have invested efforts in cybersecurity management and recovery due to events that happened a decade ago may be unaware of how much more at risk they are as times have changed. Going forward, I want to consider some of these disasters and specifically discuss how the level of danger during a cybersecurity disaster varies based on the specific systems and environments in the disaster’s pathway. As we have learned in California, not all wildfires are the same. The environmental impact has more to do with the location than the fire's size. Stay tuned.
In recent years, machine learning and artificial intelligence have enabled enterprises to extract knowledge and insights from large volumes of textual data such as e-mails and social media posts. Most of the respective applications fall in the realm of natural language processing (NLP), i.e., they combine text analysis and computational linguistics to automatically identify the meaning of textual information. Sentiment analysis is one of the most prominent NLP applications for modern enterprises. It leverages text analytics to extract and analyze information about the affective states of the subjects that produce the text. For instance, sentiment analysis tools identify whether the nuance of a specific text is positive, negative, or neutral. In several cases, they can also grade and quantify the level of positivity or negativity of the text. As already outlined, sentiment analysis is a machine learning application. As such, it is developed based on classical machine learning and knowledge extraction methodologies, which involve the tasks of collecting data, exploring and preprocessing data, testing various models, evaluating alternative machine learning models, and ultimately deploying the most successful ones. The machine learning models used in sentiment analysis are primarily aimed at developing an “affective” scoring mechanism. This mechanism helps classify words, phrases, or entire conversations in terms of their sentiment (e.g., positive or negative). Accordingly, the scoring mechanism is used to classify and analyze text that corresponds to opinions and comments. There are different ways of scoring phrases or even entire groups of phrases. For instance, by leveraging a dictionary of keywords, it is possible to identify the sentiment of specific comments based on the keywords that they contain. Specifically, a phrase that comprises many positive keywords (e.g., good, fantastic, spectacular) is likely to reflect positive sentiment.
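The dictionary-based scoring just described can be sketched in a few lines. The keyword lists here are toy examples, not a production sentiment lexicon:

```python
# Minimal sketch of dictionary-based sentiment scoring: count positive and
# negative keywords and classify by the net score. The tiny keyword sets
# are illustrative only.
POSITIVE = {"good", "fantastic", "spectacular", "great", "love"}
NEGATIVE = {"bad", "fail", "disappointing", "awful", "hate"}

def keyword_sentiment(text: str) -> str:
    """Classify text as positive, negative, or neutral by keyword counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(keyword_sentiment("The service was fantastic and the food was great"))
# positive
print(keyword_sentiment("A disappointing product that will fail you"))
# negative
```

This approach is transparent and fast, but as the article notes next, it breaks down on negation, sarcasm, and evolving language, which is where trained models take over.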
On the other hand, negative keywords (e.g., bad, fail, disappointing) are strong indicators of negative sentiment. Nevertheless, as language evolves and more complex constructs appear, keyword scoring alone cannot deliver satisfactory accuracy. This is where Machine Learning (ML) models come in. ML models are trained with large volumes of labeled textual data so that they can classify sentiment. Moreover, they are fed with many dictionaries of keywords and are tuned based on domain knowledge provided by linguistic experts. In this way, they achieve acceptable accuracy for business applications. A variety of ML models are currently used for sentiment analysis. Perhaps surprisingly, it is possible to build simple yet effective sentiment analysis tools using classical ML models like Naive Bayes, Support Vector Machines, Decision Trees, and Random Forests. These models, however, work best when the training dataset is rather small. As the volume of training data increases, deep learning techniques (i.e., deep neural networks) yield much better performance. General-purpose Recurrent Neural Network (RNN) architectures (e.g., the popular Long Short-Term Memory (LSTM) model) and Convolutional Neural Networks (CNNs) have been successfully used in sentiment analysis problems. Furthermore, more specific deep learning methods have emerged to facilitate NLP and sentiment analysis tasks. As a prominent example, Recursive Neural Tensor Networks have been introduced and proven very effective in capturing complex linguistic patterns. State-of-the-art sentiment analysis tools tend to combine multiple ML models, which helps them outperform conventional techniques. Recently, unsupervised learning approaches for NLP and sentiment analysis have also been proposed (e.g., the Unsupervised Sentiment Neuron from OpenAI). Their main value proposition lies in their ability to operate with very small amounts of training data. This can be a huge advantage in some contexts.
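To make the classical-ML approach concrete, here is a toy Naive Bayes sentiment classifier written in plain Python. A real system would use a library such as scikit-learn and far more training data; the four training sentences below are invented purely for illustration:

```python
import math
from collections import Counter, defaultdict

# Toy Naive Bayes sentiment classifier: estimate per-class word
# probabilities from labeled examples, then pick the class with the
# highest posterior. Training data is invented for illustration.
train = [
    ("great product love it", "pos"),
    ("fantastic service very good", "pos"),
    ("terrible quality very bad", "neg"),
    ("awful experience disappointing fail", "neg"),
]

word_counts = defaultdict(Counter)  # label -> word frequencies
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text: str) -> str:
    def log_prob(label):
        total = sum(word_counts[label].values())
        lp = math.log(label_counts[label] / sum(label_counts.values()))
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out the score
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return lp
    return max(label_counts, key=log_prob)

print(predict("good service great quality"))  # pos
print(predict("bad disappointing product"))   # neg
```

Despite its simplicity, this is the same probabilistic machinery behind the Naive Bayes baselines mentioned above; the gains from SVMs, random forests, and deep networks come from richer features and far larger corpora.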
Sentiment analysis tools are nowadays very powerful marketing and branding tools. They are used to monitor sentiment about products, brands, and services, and to analyze customer feedback as part of customer analytics and retail analytics processes. Here are some prominent use cases: In the era of Machine Learning and AI, sentiment analysis is, without doubt, a powerful tool for enterprise growth. Modern enterprises must integrate sentiment analysis into their marketing, branding, and customer relationship management strategies. It is already proven that sentiment analysis improves marketing performance, generates leads, and increases customer satisfaction. Therefore, companies must consider how to integrate sentiment analysis insights into their marketing and branding strategies. In this direction, they must analyze the very rich landscape of sentiment analysis tools to select the vendor and services that best suit their needs.
Standardized by the IETF in 1998, OSPF is a link-state routing protocol designed to be run internally to a single Autonomous System. Each OSPF router maintains an identical database describing the Autonomous System's topology. From this database, a routing table is calculated by constructing a shortest-path tree. Efficient, OSPF has replaced RIPv2 and is now widely deployed in IP infrastructure to decrease routing provisioning cost. OSPF recalculates routes quickly in the face of topological changes, while minimizing the overhead of routing protocol traffic. OSPF provides support for equal-cost multipath. An area routing capability is provided, enabling an additional level of routing protection and a reduction in routing protocol traffic. MARBEN OSPF-TE has been designed to provide the simplest interface that hides OSPF protocol mechanisms from the user of the MARBEN Networking Protocols stack. Indeed, MARBEN OSPF-TE fully handles:
- OSPF adjacency auto-discovery;
- Path computation with the shortest path first algorithm;
- IP routing table updates;
- OSPF Graceful Restart;
- Support of NSSA;
- Support of Opaque LSA.
The MARBEN OSPF-TE entity provides a service for the management of opaque LSAs. This service allows sending opaque LSA information received from the network to the application on top of OSPF, and also allows this application to send proprietary formatted information to the network through opaque LSAs.
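The shortest-path-first computation at the heart of OSPF is Dijkstra's algorithm run over the link-state database. The sketch below illustrates it on a small hypothetical four-router topology; it is not MARBEN OSPF-TE code:

```python
import heapq

# Minimal Dijkstra ("shortest path first") sketch of the route computation
# OSPF performs over its link-state database. Router names and link costs
# are hypothetical.
topology = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}

def spf(source: str) -> dict:
    """Return the lowest total link cost from source to every router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

print(spf("R1"))  # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}
```

Note how R1 reaches R4 at cost 11 via R2 rather than cost 25 via R3; when a link cost changes, OSPF simply reruns this computation over the updated database.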
The market for gutta-percha is expected to grow at a CAGR of around 5.3% from 2020 to 2027 and to reach a market value of around US$ 269.5 Mn by 2027. Gutta-percha (GP) is a material derived from a Malaysian tree, used to fill a tooth permanently after a root canal procedure. GP is purified and coagulated latex obtained from trees of the genera Palaquium and Payena (Sapotaceae), which are found both wild and cultivated in Malaysia and the Indonesia region. It has been used extensively for years and has established itself as a gold standard. It was first brought to Europe in 1656 by John Tradescant, an English natural historian, and was first introduced for surgical use in 1846 by Alexander Cabriol. It is available in white color and is placed inside the root canals using a variety of techniques. The standard endodontic technique involves inserting a gutta-percha cone into the cleaned-out root canal along with a sealing cement. The increasing prevalence of dental caries, rising public awareness pertaining to oral hygiene, and advancements in techniques are bolstering the demand for gutta-percha in the market. Increasing dental caries due to inadequate fluoride intake, changing living standards, behavioral factors, eating habits, social status, and socio-demographic factors are additionally bolstering the demand for gutta-percha in the market. Increasing canal obturation procedures and oral diseases requiring professional dental care are further bolstering the market growth. On the other side, disadvantages associated with gutta-percha, such as easy distortion, a lack of adhesive quality, and a lack of rigidity that prevents its use in small canals, are factors likely to limit the growth to an extent over the forecast period from 2020 to 2027.
Regional Instance of Global Gutta-percha Market

North America accounted for the major revenue share in the gutta-percha market

In 2019, North America accounted for the maximum revenue share in the gutta-percha market, and the region is also projected to maintain its dominance over the forecast period from 2020 to 2027. The long-standing presence of gutta-percha in the region's dentistry practices supports its dominance. The fact that gutta-percha is accepted as a very safe-to-use filling material in the region is another supporting factor. Moreover, the presence of a geriatric population that is prone to developing various dental problems further bolsters the regional market value. According to the American Dental Association publication 2005-2006 Survey of Dental Services Rendered, around 22.6 million endodontic procedures were performed each year during the survey period. The survey held by the American Dental Association (ADA) estimated an increase to around 25 million endodontic procedures annually in the decade following the survey period. According to the same American Dental Association survey, around 58% of U.S. residents visit a dentist once a year, and around 15% of people make an appointment with the dentist because they experience mouth pain.

Asia Pacific is anticipated to exhibit the fastest growth, with the highest CAGR, over the forecast period from 2020 to 2027

The rapidly developing economies of the region are involved in the development and growth of oral care infrastructure for their residents in order to provide better solutions, which primarily supports the regional market growth. The continuous efforts made by the concerned authorities toward the development of advanced procedures further support the regional market value. The Asia Pacific Dental Federation (APDF) and the International College of Continuing Dental Education (ICCDE) launched an official publication in 2020.
The publication includes a study that elucidates the rheological properties of gutta-percha and a discussion of whether the method using gutta-percha is proper for obturating the root canal.

Key Market Players
The players profiled in the report include Coltène Whaledent GmbH, Davis Schottlander & Davis, Dentsply Sirona, DiaDent Group International, Essential Dental Systems, FKG Dentaire, Kerr Endodontics, META BIOMED, Micro-Mega, Premier Dental Products Company, Sure endo, and others.

Market By Product Type
- Surface Modified Gutta-percha
- Medicated Gutta-percha
- Nanoparticles Enriched Gutta-percha

Market By End-use
- Dental Academic and Research Institutes

Market By Geography
- Middle East & Africa

The market for gutta-percha is expected to reach a market value of around US$ 269.5 Mn by 2027. The gutta-percha market is expected to grow at a CAGR of around 5.3% from 2020 to 2027. Surface modified gutta-percha, medicated gutta-percha, and nanoparticles enriched gutta-percha are the segments by product type in the gutta-percha market. The increasing prevalence of dental caries, rising public awareness pertaining to oral hygiene, advancement in techniques, inadequate fluoride intake, changing living standards, behavioral factors, eating habits, social status, and socio-demographic factors are some of the prominent factors driving the market growth. Coltène Whaledent GmbH, Davis Schottlander & Davis, Dentsply Sirona, DiaDent Group International, Essential Dental Systems, FKG Dentaire, Kerr Endodontics, META BIOMED, Micro-Mega, Premier Dental Products Company, Sure endo, and others are the prominent players in the market. North America held the highest market share in the gutta-percha market, while Asia Pacific is expected to be the fastest-growing market over the forecast period.
Organisations know that they need to protect their systems, data, and employees from data breaches. Virtually every day there are reports of businesses that have suffered cyberattacks exposing personal data records. However, that is not the end of the story – there is much more to consider, and to resolve, should you suffer a data breach. The chart (2) below shows the number of records exposed over the last 7 years; in 2020, more than 37 billion records were breached. If you are the owner of a business or organisation trying to keep your sensitive data secure, it must be a scary thought that humans contribute to 82% of cyber-attacks. Your employees (1) could be duped into exposing the organisation to reputational damage and data breaches without realising it. The sophistication of cyber criminals these days is staggering: they use phishing and spear-phishing techniques to obtain users' details and passwords, knowing that the potential gain from extracting this information is huge. The work involved in plugging the leak after a cyber-attack, whilst continuing to run your business, is a major cost that no one really accounts for, not to mention the reputational damage that a data breach causes and the increasing level of fines. All of this could be avoided if strong PKI-based authentication had been implemented.

Public Key Infrastructure

Public Key Infrastructure (PKI) is the strongest form of passwordless authentication. Put simply, PKI consists of a set of roles, policies, software, hardware, and procedures which together provide the gold-standard solution for protecting digital identities. The strongest form of two-factor authentication is a digital identity comprising a PKI certificate issued to a secure device, as recognised by standards such as US FIPS 201 (PIV), enabling organisations to be sure that users accessing systems, networks, and sensitive data really are who they claim to be.
36% of all data breaches involved phishing

Phishing is normally a communication sent to a recipient within an organisation, who is asked to perform an action quickly. Sometimes it may even look as if it has come from within the organisation, so the individual is fooled into responding or taking an action without much time to think about it. If you act and are then asked to re-enter your password, the rogue site steals that information, and a cyber-attack has occurred.

91% of cyber-attacks begin with a spear-phishing email

This is a much more sophisticated type of phishing campaign, in which the criminals target a specific person within the organisation and encourage them to take an action. This requires more intelligence, planning, research, and time from the cyber criminals, but if successful it is more likely to give them access to more valuable data. Often the criminal uses communication tools to strike up a conversation and then builds trust over a period of time before asking the victim to click on a rogue link or divulge sensitive information.

What can you do to ensure you are safe from the possibility of cyber-attacks?
1. Ensure you have the strongest possible authentication in place, ideally Public Key Infrastructure (PKI).
2. Train your employees to spot phishing and spear-phishing attacks via email and other channels.
3. Make sure your systems are protected from anyone or anything that could cause harm or steal your data, using phishing-resistant, hardware-backed strong authentication.

MyID® is a feature-rich credential management system (CMS) that enables organisations to deploy digital identities to a wide range of secure devices simply, securely and at scale.
Unlike passwords, one-time passwords (OTPs), or other forms of MFA, MyID credential management uses cryptography-based credentials to strongly bind a digital identity to an individual, enabling organisations to take control of their user identities and providing optimum protection against the number one cause of data breaches – weak or compromised passwords. We work seamlessly with smart card and USB token vendors to provide the credential management software to easily manage your digital identities. For more information, see MyID.
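One common trick in the phishing attacks described earlier is a link whose visible text shows a trusted domain while the underlying URL points somewhere else. The toy check below illustrates the idea; the domains are made up, and real anti-phishing tooling is far more sophisticated (comparing registrable domains, reputation feeds, and more):

```python
from urllib.parse import urlparse

# Toy heuristic for one phishing trick: display text names a trusted
# domain while the actual href points elsewhere. Domains are hypothetical;
# real tooling would compare registrable domains, not do string matching.

def link_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose display text names a domain the href doesn't serve."""
    shown = display_text.lower().strip().strip("/")
    shown = shown.removeprefix("https://").removeprefix("http://")
    actual = (urlparse(href).hostname or "").lower()
    # Match the exact host or any subdomain of the shown domain.
    return not (actual == shown or actual.endswith("." + shown))

print(link_mismatch("mybank.com", "https://mybank.com/login"))
# False (link goes where it claims)
print(link_mismatch("mybank.com", "https://mybank.com.evil.example/login"))
# True (classic look-alike host)
```

Even a crude check like this catches the "mybank.com.evil.example" pattern, which is exactly the kind of detail a rushed employee misses; phishing-resistant PKI authentication removes the password the attacker is after in the first place.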
The term API stands for “application programming interface,” which basically means software that enables two or more applications to exchange data (messages) with each other. You can think of an API as a sort of virtual interface, like a touch screen, that you can interact with to enter data, read data, or send/receive data. Web-based APIs can be conveniently accessed via the internet.

What is API Integration?

API integration refers to the system-to-system connection, via APIs, that allows two systems to exchange data. APIs are designed so you can use a system remotely and connect systems, people, IoT devices, and more. Two or more systems that have APIs can interoperate in real time using those APIs, which saves time and money and is far more reliable in terms of information currency and data accuracy. For example, let us say your company has a TMS (transportation management system) and my company has an ERP (enterprise resource planning) system, and these two systems need to exchange data. In the old days, we might have faxed or emailed this information or discussed it on the phone. With API integration, it happens digitally, without human interaction. API integration is what opens a channel that enables our companies to, quite literally, conduct business faster and more accurately. In this diagram, you can see a visual representation of API integration with a NetSuite ERP instance connecting to Amazon Marketplace, Shopify, and SAP Ariba. By keeping data in sync in all connected systems, productivity is enhanced, so you can leverage that data to improve efficiency and drive more revenue.

4 Ways to Use API Integration

From applications and data all the way to business ecosystems, APIs are quickly becoming a mainstay in most enterprise integration strategies. Here are just four of the countless ways your business can start looking at APIs to facilitate integration.
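The TMS-to-ERP example above can be sketched as one system emitting JSON over its API and an integration layer mapping it into the other system's schema. All field names here are hypothetical; no real TMS or ERP API is implied:

```python
import json

# Sketch of the TMS-to-ERP exchange described above: the TMS returns a
# shipment as JSON, and an integration layer maps it into the ERP's own
# (hypothetical) field names. No real TMS/ERP schema is implied.

tms_response = json.dumps({
    "shipmentId": "SH-1001",
    "carrier": "ACME Freight",
    "weightKg": 540,
})

def tms_to_erp(payload: str) -> dict:
    """Translate a TMS shipment payload into the ERP's record format."""
    shipment = json.loads(payload)
    return {
        "external_ref": shipment["shipmentId"],
        "vendor_name": shipment["carrier"],
        "weight_kg": shipment["weightKg"],
    }

print(tms_to_erp(tms_response))
# {'external_ref': 'SH-1001', 'vendor_name': 'ACME Freight', 'weight_kg': 540}
```

This mapping step, repeated for every partner and schema, is exactly the work an integration platform automates so the two companies never have to fax or re-key anything.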
#1 – APIs for Configuration, Administration, and Monitoring of Products

When someone discusses “headless administration,” this type of API integration is what they are referring to. A “headless” environment is a computer that operates without a monitor, graphical user interface (GUI), or peripheral devices such as a keyboard or mouse. This type of API allows you to do any administration with your cloud that you could do through an administrative GUI. You can run the system “headless” and manage it without having to go to a keyboard and literally touch things. All data management functionality is available today through REST APIs. There are limited capabilities to manage translation or transformation through APIs, but that is partly by design: the transformation itself is headless, so the studio and the runtime are separated. In many ways, then, while there are capabilities, there remain some gaps to fill in as well. Rather than using the GUI to update your trading partners, AS2 connections, or certificates, you would use an API to accomplish those tasks. A clearer way to think about it is to treat the scenario as an administering API that automates several key product tasks, including: The next steps from here would be to complete the REST APIs for data movement and refine the “headless” strategy for data transformation.

#2 – APIs to Upload and Download Files

If you take a look at data movement capabilities, typically you will start with multiple secure communications protocols. These protocols are wide-ranging, used for file-based integration, and include FTP, SFTP, and AS2, as well as, as is often the case, a secure portal for person-to-system file flows. If you want to upload a file, you can use REST APIs to accomplish that, along with supportive APIs that can be set to programmatically upload and download files to and from the integration platform.
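A "headless" administrative call is simply the same task a GUI performs, issued as a REST request instead. The sketch below only constructs such a request with Python's standard library and does not send it; the endpoint, payload, and bearer token are entirely hypothetical:

```python
import json
import urllib.request

# Sketch of a "headless" administrative call: updating a trading-partner
# record via REST instead of a GUI. The URL, payload fields, and token are
# hypothetical; no real product API is implied.
payload = json.dumps({"partner": "acme-corp", "protocol": "AS2"}).encode()

req = urllib.request.Request(
    url="https://integration.example.com/api/v1/partners/acme-corp",
    data=payload,
    method="PUT",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",
    },
)

# The request is only constructed here, not sent; a real script would call
# urllib.request.urlopen(req) and check the response status.
print(req.method, req.full_url)
```

Because the whole operation is expressed as data (method, URL, headers, body), it can be scripted, scheduled, and version-controlled, which is precisely what makes headless administration attractive.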
These types of APIs are relevant to how a company can operate within traditional data movement and support versatile and flexible file-based integration scenarios in their environment.

#3 – Using Tools to Connect Other Systems Together Using Their APIs

The third example revolves around the APIs that are provided by other systems, rather than those in-house. Some of the most popular examples of core enterprise systems include Salesforce, with approximately 20 percent of the global CRM market, and NetSuite, a consistently dominant name in the ERP field, to name a few. In this case, Salesforce and NetSuite present their APIs, allowing a company to consume them to perform application-based cloud integration.

#4 – Using Cleo Tools to Provide APIs for Use by Other Systems

The fourth and final example is the other side of the previous one: rather than consuming another system's APIs, the enterprise presents its own APIs for others to access, for instance an API to order products. Someone at Salesforce wants others to be able to access their environment and perform operations programmatically through their APIs. How do they do it? By presenting an API to the world that people outside the company can call. If someone wanted to provide an API to order a product or check an order status, they could use integration technology to build that API and allow people to call in and understand what is happening with their order.

Presenting vs. Consuming APIs

The important distinction between the third and fourth examples is that the third is calling, or consuming, an API provided by someone else, while the fourth is providing an API for other people to call.

Why Do You Need API Integration?
In Cleo’s 2021 State of Ecosystem and Application Integration Report, an annual survey, integration experts across industries called out some of the top integration challenges their companies are facing today. Here’s a snapshot: API integration using a modern integration platform can help you to address each of these challenges and more. Here’s how: Integrating legacy applications More than half of companies struggle with integrating legacy applications, a problem that’s only growing more profound as cloud-based applications proliferate. Many businesses have invested heavily in their legacy applications, and they don’t want to throw the baby out with the bathwater, yet they do want access to the cloud. Fortunately, with today’s modern integration platforms companies can accelerate seamless end-to-end integrations between their multi-enterprise ecosystem and their internal systems. Plus, by introducing APIs to complement your EDI onboarding processes, you can automate them and take on new ecosystem trading partners faster. IT Modernization steps like these will upgrade your integration capabilities so you can activate B2B eCommerce more quickly and ensure the success of your business strategy. Revenue loss due to integration issues We all know time is money, and when companies cling to outdated, manual processes they can’t help but fall behind financially in today’s digital economy. According to the 2021 survey, 66% of companies are losing up to $500,000 per year due to poor integration, with 10% losing more than $1 million annually. Digitalization via API integration helps everything involved with your integrations move faster, and ultimately that means money moves faster too. Integrating new applications With a modern integration platform, your business can unlock any communication protocol whenever you need it, with no additional costs. 
Such a platform leverages proven pre-configured connectors and templates, allowing you to spin up fast and efficient application integrations. With API integration, for example, the information contained in an important database can be shared with your other internal systems, increasing the value of that data across multiple teams. Externally, your company could make its APIs available to your customers and partners so that selected, mutually beneficial data can be shared seamlessly and in real time.

Poor application and system visibility

Increased visibility into data processes running across your entire ecosystem reduces exposure to risk. Next-generation integration tooling provides real-time visibility via customizable dashboards, and such technologies deliver real-time monitoring and reporting to alert stakeholders of any data challenges with their most important business relationships. With a modern integration platform that accommodates API integration, you know whether your new trading partner relationships are at risk.

All told, API integration is important for logistics companies, manufacturers, wholesale distributors, retailers, and others today because it lets existing applications be preserved yet opened up to other systems and applications across your entire B2B ecosystem. Doing this speeds up your business processes, delivering faster application integration and increased visibility across your end-to-end (external and internal) business processes, ultimately driving more revenue, better relationships, and higher performance.

Additional Benefits of API Integration

As you can see, API integration provides many valuable benefits for your overall business, but diving a bit deeper, what more can API integration help with? We mentioned better relationships above. And there is probably nothing more important these days than consistently delivering a customer experience commensurate with what your customers and partners expect.
In fact, customer experience precedes revenue in importance, because only by being easy to do business with can you ensure yourself happy, repeat customers. They have many alternatives, and loyalty is often fleeting.

For decades, integration has been one of the most complex, specialized, and frustrating tasks in all of IT. APIs help make integration easy by making connections, data movement, and data transformation easy. By enabling different systems or applications to collaborate and seamlessly share information, all of your business processes become more fluid. Further, with today's plethora of largely pre-built solutions, you can accelerate delivery of custom API integrations and save on development costs, or let your developers build applications that are core to your business instead of spending their time building complex API-based integrations themselves.

Performing tasks manually is cumbersome and time-consuming. Automating routine tasks through API integration is a sure-fire way to make your life easier. Ideally, your automated API integrations should look something like this screenshot from the Cleo Integration Cloud platform. In the interaction below, you can see Shopify eCommerce data on the left-hand side being automatically converted and connected for ingestion into the NetSuite ERP on the right-hand side. With an API-first approach to cloud integration, you get built-in capabilities to resolve virtually all your integration needs, allowing for application integration that naturally works in conjunction with EDI/B2B integration.

How Leading Organizations Are Accelerating API-Based Integrations in 2022

What, then, is the real value of choosing a modern API-based integration platform? In kitchen-table-speak, it just makes life easier and better, both for your company and for the customers and partners that comprise your business ecosystem.
In tech-speak, it brings you three key capabilities your integration approach likely couldn't deliver before. First, you get the ability to perform API and EDI integration using a single platform, eliminating the need for multiple disparate integration solutions. Second, it gives you direct control and visibility over your most revenue-critical end-to-end business processes, like Order to Cash, Procure to Pay, or Load Tender to Invoice. Last, you get a choice of how such a platform would best be deployed in your unique circumstances: do you want a self-service model? A managed-services approach? Perhaps a blend of the two?

Cleo Integration Cloud is the only modern integration platform that can accomplish all three. Even better, you can start wherever you are and steadily mature your ecosystem enablement capabilities as your digital transformation initiatives progress. Plus, if you're using a solution like Cleo Integration Cloud, support for both presenting and consuming APIs is built right into the platform, so you have everything you need to integrate and use those APIs, gaining end-to-end visibility as a result.

As the world leader in ecosystem integration technology, Cleo is always listening to its customers and innovating new capabilities to enrich our platform. Cleo Integration Cloud also enables such common, essential API use cases as transforming batch processing to real-time, and provides API equivalency for EDI to deliver more agile interactions with members of your business ecosystem. There's much more to Cleo Integration Cloud than this, of course, and we invite you to learn more.
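The presenting-versus-consuming distinction described above can be sketched in a few lines of Python. This is an illustrative sketch only: the `/orders/<id>/status` endpoint and the order data are hypothetical, not part of any Cleo, Salesforce, or NetSuite API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical in-house order data; a real system would query an ERP or database.
ORDERS = {"1001": {"status": "shipped"}, "1002": {"status": "processing"}}

class OrderAPI(BaseHTTPRequestHandler):
    """Presenting an API: expose order status for partners to call."""

    def do_GET(self):
        # Expected path shape: /orders/<id>/status
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "orders" and parts[2] == "status":
            order = ORDERS.get(parts[1])
            if order:
                body = json.dumps(order).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

def consume_order_status(base_url, order_id):
    """Consuming an API: call someone else's endpoint and parse the result."""
    with urlopen(f"{base_url}/orders/{order_id}/status") as resp:
        return json.loads(resp.read())

# Run the "presented" API on a random free port, then "consume" it.
server = HTTPServer(("127.0.0.1", 0), OrderAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"
result = consume_order_status(base, "1001")
print(result)  # {'status': 'shipped'}
server.shutdown()
```

In practice an integration platform handles the plumbing (authentication, retries, transformation), but the two roles are exactly these: one side hosts the endpoint, the other side calls it.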
In many organizations, there is little differentiation between what most stakeholders view as projects versus programs. Often, when there is a desired outcome, people are quick to assume a project can be spun up without fully understanding whether it is truly a project or actually a program.

A project is a temporary endeavor that drives change and has a definitive start and end date. In most cases, a project is focused on creating a service, product, or result. Projects can be of any size or content. A Project Manager is responsible for managing the scope (deliverables), resources, schedule, and budget. All of this must be managed within guidelines set by the stakeholders of the project.

Like projects, programs are temporary endeavors that drive change and have a definitive start and end date. Unlike projects, programs contain two or more projects with the same strategic goal. Once all projects within a program are complete, the program is considered complete. A Program Manager is responsible for ensuring the various projects within a program stay aligned to the business strategy. The Program Manager is responsible for managing dependencies between the projects within a program, as well as for removing projects from the program if they don't align with or bring value to the program.

The main difference between projects and programs is specificity. Programs refer to a collection of projects that together create an outcome, whereas a project refers to a singular endeavor that produces a specific output.

| | Program | Project |
| --- | --- | --- |
| Components | A program's components are projects. | A project's components are smaller tasks or activities. |
| Team size | Programs have larger teams and consider outcomes more holistically. | Projects usually consist of small groups working together. |
| Duration | Programs tend to be longer in duration and are usually completed in multiple phases. | Projects are shorter in duration and are usually completed in a single phase. |
| Success | A program's success can be measured by its ability to meet the needs of all beneficiaries or stakeholders. | A project's success is usually determined by its resource management, time, and quality. |

Finding the right balance between projects and programs can be critical to a company's success. Now that you know the difference between them, hopefully you can be more empowered in your company's strategic planning. Finally, if your organization needs help with any aspect of project or program management, please don't hesitate to reach out to us. We'd love to help you get started.

Business Analyst/Project Manager
Minitab's Power and Sample Size Tools

For any type of work, you need a specific tool that offers the right amount of power. You don't use a sword to cut vegetables or a kitchen knife on a battlefield, because neither provides the desired outcome. Minitab, designed to meet the specific needs of Six Sigma professionals, is statistical software that provides an effective and efficient way to input and manage numerical data, recognize trends and patterns, and derive insights. When performing a statistical test, you have to consider precision and be confident in your results to meet your goals. Minitab's Power and Sample Size tools enable users to balance these issues: they can determine exactly how much data is needed to be sure about the results of an analysis.

Understanding Power and Sample Size

Minitab's Power and Sample Size tools help you collect enough data to conduct a thorough analysis. More precisely, Minitab can assess the statistical power of tests that have already been run and estimate the sample size needed for planned tests. Gathering too little data limits the reliability of an analysis, while gathering too much wastes resources. Statistical power refers to the probability that your hypothesis test identifies a significant difference or effect when one truly exists. For example, testing DVD players requires a lower degree of certainty than testing critical airplane parts, which demand a much higher degree of certainty.

How many samples are needed to distinguish between two papers if the average thickness differs from one supplier to another? How many times should an experiment be replicated to have at least an 85% chance of detecting the factors that significantly affect a manufacturing process? The answers to these questions become easy with Minitab's Power and Sample Size tools.
Minitab Statistical Software provides power and sample size calculations for the following tests and designs:

- Sample Size for Estimation
- 1- and 2-Sample t
- 1 and 2 Proportions
- 1- and 2-Sample Poisson Rate
- 1 and 2 Variances
- 2-Level Factorial Design
- General Full Factorial Design

The tool can examine how different test properties affect each other. Minitab also produces boxplots, scatterplots, and histograms and can calculate descriptive statistics. It's an essential part of any Six Sigma program and an important aid in process improvement. For example, for a 2-sample t-test, the calculation involves three properties:

- Sample sizes - the number of observations in each sample
- Difference - the minimum difference between the two samples that you want to detect
- Power - the probability of detecting a significant difference when one truly exists

Minitab calculates the third property if the user enters values for any two. For instance, if the values for the sample sizes and power are entered, Minitab determines the minimum difference that can be detected between the two samples.

Minitab Statistical Software is the most preferred data analysis tool for businesses of all sizes - small, medium, and large. It is used by thousands of distinguished companies, including RBS, Toshiba, Boeing, and leading Six Sigma consultants. The Power and Sample Size tools in Minitab make it easier than ever to be sure you can count on the results of your analyses.
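The trade-off between sample size, detectable difference, and power can also be approximated outside Minitab. The sketch below uses a rough normal-approximation formula for a 2-sample t-test; Minitab solves the same relationship exactly with the noncentral t distribution, so treat these numbers as estimates:

```python
import math
from statistics import NormalDist

def sample_size_two_sample_t(effect_size, alpha=0.05, power=0.85):
    """Approximate per-group sample size for a two-sample t-test.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_power) / d)^2,
    where d is the standardized difference (difference in means divided
    by the common standard deviation).
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha=0.05
    z_power = NormalDist().inv_cdf(power)          # about 1.04 for power=0.85
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# How many sheets per supplier to detect a half-standard-deviation
# difference in paper thickness with 85% power?
n = sample_size_two_sample_t(effect_size=0.5)
print(n)  # 72 per group
```

Note how the properties interact: halving the detectable difference roughly quadruples the required sample size, and demanding higher power also pushes the sample size up.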
A wireless radio can perform one of 4 activities. Power consumed by each activity increases in the given order (1-4):

1. Doze (sleep)
2. Idle & awake
3. Receive
4. Transmit

There are 3 methods of power management used in 802.11:

1. 802.11 Power Management (legacy)
2. Unscheduled Automatic Power Save Delivery (U-APSD), from the 802.11e amendment
3. Power Save Multi-Poll (PSMP), from the 802.11n amendment

In any of the above methods, the basic power management exchange can be summarized in 4 steps:

1. Before a STA goes into the doze state, it sends a frame, usually a null data frame, to the AP indicating that power management is enabled.
2. Once the STA indicates that it is in Power Save mode, the AP begins to buffer all frames destined to that station.
3. When the station goes into the awake state, it sends a frame to the AP in order to begin the data retrieval process.
4. When the AP has finished sending all buffered data to the station, the station goes back into the doze state.

Every 802.11 power management method begins when the STA associates to the BSS. When the AP sends an "Association Response" frame to the STA, an Association Identifier (AID) value is present in the 16-bit AID fixed parameter field, as shown below. The AID is present in Association Response and Reassociation Response frames.

Traffic Indication Map (TIM)

The TIM is an information element with the following structure:

- Element ID (1 byte): Value 5 indicates it is a TIM
- Length (1 byte): Length of the information-carrying fields (DTIM Count, DTIM Period, Bitmap Control, Partial Virtual Bitmap)
- DTIM Count (1 byte): Number of beacon frames until the next DTIM
- DTIM Period (1 byte): Number of beacon frames between DTIM beacons
- Bitmap Control (1 byte): Indicates whether multicast/broadcast frames are buffered at the AP, and serves as a space saver (bitmap offset)
- Partial Virtual Bitmap (1-251 bytes): Series of flags indicating whether each associated STA has unicast frames buffered at the AP. Each bit in this field corresponds to the AID of a STA.

Here is a TIM information element in a beacon frame.
Note that the DTIM Period is 3 below, which means every third beacon is a DTIM. The DTIM Count is 2, indicating there will be a DTIM in 2 beacon intervals.

In the Bitmap Control field, the first bit is set to 1, indicating that the traffic buffered at the AP is broadcast or multicast. The remaining 7 bits are used as the Bitmap Offset, which may have any value between 0 and 127 and is used as a space saver. For example, if there is no buffered traffic for AIDs 1-70, then all of those values are 0 in the Partial Virtual Bitmap (PVB) section. To save some space, the Bitmap Offset value can indicate how many bytes are zero in the PVB: if the Bitmap Offset is N, then the first 2xN bytes of the PVB are zero. In our case N=4, so 8 bytes (or 64 bits) are zero, and those 64 bits can be skipped by setting the Bitmap Offset value to 4.

Delivery Traffic Indication Message (DTIM)

A DTIM beacon is identical in structure to an ordinary beacon. The only difference is that the content of the TIM IE gives information about broadcast/multicast traffic buffered at the AP, in addition to the typical information about buffered unicast frames that is always present in the TIM. Below is a DTIM beacon where the DTIM Count is set to 0. If broadcast/multicast traffic is buffered at the AP, the first bit of Bitmap Control is set to 1; otherwise it is 0 (which is the case in this DTIM).

802.11 Power Management

In legacy (802.11 standard) power management, the STA never sends a frame with the power management flag set to 0. It is always set to 1 (figure 8.6), and the AP then sends a buffered frame. In this method, the STA does not notify the AP when it returns to doze mode, so the AP always has to buffer frames intended for that STA. When Power Save Poll (PS-Poll) frames are used in 802.11 power management, the STA has to send a PS-Poll control frame to request that the AP send a buffered unicast frame.
In that frame, if the AP sets the "more data" bit to 1, the STA understands that the AP has more data to send; it therefore remains in the awake state and sends another PS-Poll frame to get the next frame. 802.11 power management has two major limitations:

1. Additional overhead is added to the wireless channel (decreasing throughput)
2. The STA must spend too much time in the transmitting state

802.11e U-APSD

This was introduced in the 802.11e amendment and is part of the WMM-Power Save certification as well. In this method, a STA typically sends a null data frame in order to retrieve buffered unicast frames from the AP. The Power Mgmt bit is set to 0 in this frame (indicating the STA is in Active mode). Note that in this method, the AP will send ALL buffered unicast frames to that STA. When the STA goes back into Power Save mode, it has to send another null data frame with the power management bit set to 1.

802.11n Power Management

1. PSMP - Power Save Multi-Poll: a power management method that builds on scheduled automatic power save delivery (S-APSD) for networks that use HCF Controlled Channel Access (HCCA).
2. SMPS - Spatial Multiplexing Power Save: the STA reduces the number of data streams used during spatial multiplexing, temporarily disabling spatial multiplexing to conserve battery life.

IBSS Power Management

In an IBSS (ad hoc network), there is no AP to send the TIM or DTIM. If a STA goes into power save mode, multiple other STAs have to buffer their data destined for that STA. So in an IBSS, the Announcement Traffic Indication Message (ATIM) is used for power management. It is a management frame with no frame body. When a STA receives an ATIM, that formerly dozing station must begin the process of retrieving buffered frames from the stations that transmitted the ATIM.

1. CWAP Official Study Guide - Chapter 8
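The Bitmap Control and Partial Virtual Bitmap arithmetic described earlier can be sketched as a small decoder. This is an illustrative sketch of the field logic only, with hypothetical input values; it is not a full 802.11 frame parser.

```python
def aids_with_buffered_traffic(bitmap_control, partial_virtual_bitmap):
    """Decode which AIDs have buffered unicast frames at the AP.

    bitmap_control: 1 byte. Bit 0 flags buffered broadcast/multicast
    traffic; bits 1-7 hold the Bitmap Offset N, meaning the PVB omits
    its first 2*N bytes (AIDs 0 .. 16*N - 1, all known to be zero).
    partial_virtual_bitmap: bytes; each bit maps to one AID, LSB first.
    """
    multicast_buffered = bool(bitmap_control & 0x01)
    offset = (bitmap_control >> 1) & 0x7F  # N: 2*N leading zero bytes skipped
    aids = []
    for i, byte in enumerate(partial_virtual_bitmap):
        for bit in range(8):
            if byte & (1 << bit):
                aids.append((2 * offset + i) * 8 + bit)
    return multicast_buffered, aids

# Example mirroring the text: offset N=4 skips AIDs 0-63, and the first
# transmitted PVB byte has bit 6 set, i.e. traffic buffered for AID 70.
mc, aids = aids_with_buffered_traffic(0b0001001, bytes([0b01000000]))
print(mc, aids)  # True [70]
```

Working through the arithmetic by hand: AID 70 lives in byte 70 // 8 = 8 of the full virtual bitmap; with 2x4 = 8 bytes skipped, that is PVB byte 0, bit 70 % 8 = 6, which is exactly what the decoder recovers.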
In honor of Cybersecurity Awareness Month, we're kicking off a series on network attacks, so you can better understand how they work and how to stop them. One of the most devastating database attack techniques is also the most prevalent. SQL injection (SQLi) gives attackers an alarming amount of access to a website. With a small piece of code, an attacker can steal data, delete data, change a website, or compromise a server to launch more attacks. Code injection flaws are the most critical web application security risk according to the OWASP foundation. A key factor that makes SQLi such a common, dangerous technique is the prevalence of injection vulnerabilities in web applications.

How Does SQLi Work?

Let's start with a short, general background on how web applications relate to Structured Query Language (SQL). Web applications are programs that are supported by a web server. Applications can also be linked to a database, which usually holds important website or application data. Relational databases, the most common database type, contain structured data within tables. That data can be managed by writing queries in SQL. For example, a query for retrieving data stored in an "accounts" table begins with the following SQL statement:

SELECT * FROM accounts

User interactions with a web application affect which data is retrieved from a linked database, such as e-commerce shopping carts, search filters, or login forms. When a user enters information (such as username and password) into a login form, the web application processes that information as parameter data in the SQL query, which then retrieves account data for the user from the database:

SELECT * FROM accounts WHERE username = 'sea monster' AND password = 'kraken'

SQLi occurs when the attacker injects a piece of SQL code, or fragment, into the web page that generates a malformed SQL query and has unintended results.
Returning to the login form example, the attacker might enter a single quote and comment delimiter SQL fragment ('--) after the username (sea monster'--). The comment delimiter cuts the SQL query short, instructing the database to ignore the password field when retrieving account data. The unintended result is that the attacker can view account data without submitting a legitimate password:

SELECT * FROM accounts WHERE username = 'sea monster'--' AND password = 'kraken'

Many injection techniques exist, varying by the vulnerability and the database management system (DBMS) that they exploit. Attack techniques can be generally grouped into these categories:

- In-band SQLi: The web application includes specific error messages for SQL syntax errors in HTTP responses. The web application also includes query results in HTTP responses. After an injection attempt, the attacker can refine their injection technique based on error messages and results.
- Blind (inferential) SQLi: The web application does not include specific error messages or query results in HTTP responses. The attacker must make several injection attempts, with conditional true/false or time-based statements, to evaluate HTTP responses and refine their injection technique.
- Out-of-band SQLi: The web application does not include specific error messages or query results in HTTP responses. The attacker injects DBMS commands for the database to send DNS or HTTP requests with information to an attacker-controlled server, providing an indirect method for refining their injection technique.

How to Detect SQLi Attacks

Detection methods range from checking server logs to monitoring database errors. Most network intrusion detection systems (IDS) and network perimeter firewalls are not configured to review HTTP traffic for malicious SQL fragments, making it possible for an attacker to bypass network security boundaries.
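The idea of reviewing HTTP traffic for malicious SQL fragments can be illustrated with a toy filter. This is a naive sketch for intuition only; real IDS and WAF signatures are far more elaborate, and the pattern list here is hypothetical:

```python
import re
from urllib.parse import unquote

# Illustrative patterns only: a comment delimiter after a quote, a
# UNION...SELECT pair, or the classic OR 1=1 tautology.
SQLI_PATTERNS = re.compile(
    r"('--)|(\bUNION\b.+\bSELECT\b)|(\bOR\b\s+1\s*=\s*1\b)",
    re.IGNORECASE,
)

def looks_like_sqli(request_url):
    """Flag a request whose URL-decoded form contains a known SQL fragment."""
    decoded = unquote(request_url)  # %27 -> ', %20 -> space, etc.
    return bool(SQLI_PATTERNS.search(decoded))

print(looks_like_sqli("/login?user=sea%20monster%27--"))  # True
print(looks_like_sqli("/search?q=union%20station"))       # False
```

Even this toy version shows why signature lists need constant upkeep: a benign query containing the word "union" passes, but any new injection phrasing not in the list would pass too.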
Web application firewalls (WAF) can be integrated into security solutions to filter HTTP requests that match SQLi attempts. But a WAF must be continuously updated to filter new techniques. ExtraHop automatically detects unusual HTTP traffic crossing the network that could result in malformed SQL queries. From the detection card, defenders can investigate records of HTTP transactions containing the URL-encoded SQL fragments that can alter SQL queries. URL-encoded refers to the escaped reserved characters, such as spaces and single quotes, within the URL of the HTTP request. Encoding non-ASCII characters into the URL ensures that the HTTP request successfully crosses the internet. When the HTTP request arrives at the web server, the web application decodes the URL and processes the data. Identifying URL-encoded SQL fragments can help you create SQLi match conditions in your WAF or escape these input character values in web application code.

In the example below, an attacker injected a single quote and a space introducing the UNION SQL operator (' UNION ALL SELECT) into the name field of a login form, generating an HTTP request with this URL-encoded fragment: %27%20UNION%20ALL%20SELECT. The UNION operator combines the results of two or more SELECT statements into one HTTP response.

How to Prevent SQLi

The best way to mitigate SQLi is to reduce the number of vulnerabilities an attacker can exploit. Check out the OWASP SQLi Cheat Sheet for tips on how to prevent SQLi attacks, including the following best practices:

- Validate user input. Reject SQL fragments submitted with user input so they are not processed into SQL statements.
- Escape input. Specific characters such as a single quote (') have a specific meaning in SQL statements. But blocking these characters from user input might not be feasible because they're also needed for valid input (for a username such as O'Brien, for example).
These values can be escaped, or placed into quotes, so that the data can still be incorporated into the SQL statement in the proper context. - Implement parameterized queries and prepared statements. These can be hard-coded into the application to keep user-submitted input separate from queries and commands. - Implement stored procedures. These can be hard-coded into the database to keep user input separate from commands. - Enforce the principle of least privilege. Prevent unauthorized users from making database changes. Despite these mitigation options, attackers can still catch web and database teams by surprise. Well-known, publicly-available tools can help attackers perform vulnerability scans, fuzzing (the process of discovering new, unknown vulnerabilities), and SQLi attacks. Stay vigilant for unusual HTTP requests to prevent attackers from accessing web servers and valuable data. To see how ExtraHop Reveal(x) detects SQL injection and provides the contextual information needed to stop an attack, take a look at our demo.
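The parameterized-query advice above can be demonstrated with a small, self-contained sketch. It reuses the article's login example with SQLite standing in for the web application's database; the table and credentials are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, password TEXT)")
conn.execute("INSERT INTO accounts VALUES ('sea monster', 'kraken')")

def login_unsafe(username, password):
    # VULNERABLE: user input is concatenated directly into the SQL statement.
    query = ("SELECT * FROM accounts WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_safe(username, password):
    # Parameterized query: input is bound as data and never parsed as SQL.
    query = "SELECT * FROM accounts WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchall()

# The comment-delimiter injection ('--) truncates the unsafe query,
# so the password check is never evaluated.
stolen = login_unsafe("sea monster'--", "wrong password")
print(stolen)  # [('sea monster', 'kraken')] - logged in without the password

# The same input is harmless when bound as a parameter: it is treated as
# a literal (strange) username, and no rows match.
print(login_safe("sea monster'--", "wrong password"))  # []
```

The two functions build what looks like the same query; the difference is purely in whether user input participates in query parsing, which is exactly what the best-practice list above is protecting against.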
WHAT IS A VULNERABILITY ASSESSMENT?

With the widespread adoption of web applications, mobile applications and cloud-based environments, the network perimeter as we once knew it has changed. All it takes is one software defect or misconfiguration for cyber criminals to get a foothold in your environment and steal or compromise valuable information and assets. Vulnerability assessments and network scans are designed to identify and rank security gaps in information systems and technology. These system and design flaws can span business systems, web servers and critical web applications across your network.

Vulnerability Assessments FAQs

Vulnerability Assessment vs. Penetration Testing

Vulnerability assessments look for known weaknesses and security flaws in a variety of systems. This includes servers and workstations, desktops, laptops, mobile devices, firewalls, routers and cloud-based environments. Since a vulnerability scan may produce thousands of results, third-party security experts can help you prioritize what to patch first. They can also help you identify where you need to upgrade, update, or install new hardware, software, or other solutions. By contrast, penetration testing, also known as pentesting, is used to see how attackers actually use these vulnerabilities to get into your network, how far they can move within the network once they're in and what information and data they can find and exfiltrate.

What are the Most Common Vulnerabilities?

One of the most common weaknesses that can lead to security incidents is missing or delayed security patching. Some of the biggest data breaches of the 21st century show that known, unpatched vulnerabilities played a role in many of them. For example, SQL injections are a dangerous web application security vulnerability that enable attackers to use application code to access or corrupt database content.
Attackers can add, delete, or read content in a database, read source code from files on a database server, and write files to the database server. Overall, web application security vulnerabilities are largely due to coding and configuration errors. Development teams can often identify vulnerabilities in the development phase by conducting code audits from start to finish, but this step is often overlooked, and vulnerabilities can be hard to spot.

What Are Other Common Vulnerabilities?

- Cross-site Scripting – Cross-Site Scripting (XSS) is a malicious attack that injects scripts into a trusted website so that they execute in the browsers of other users who visit it.
- Buffer Overflow – Buffer Overflows occur when there is more data in a buffer than it can handle, causing data to overflow into adjacent storage.
- Cross-site Request Forgery – Cross-Site Request Forgery (CSRF) is a malicious attack that tricks a web browser into performing undesired actions that appear as though an authorized user is doing them.
- CRLF Injection – CRLF Injection attacks refer to the special character elements "Carriage Return" and "Line Feed." Exploits occur when an attacker is able to inject a CRLF sequence into an HTTP stream.

What are Key Benefits of a Vulnerability Assessment?

Vulnerability assessments offer a number of key benefits:

- Find Known Security Issues – The primary goal of conducting regular vulnerability assessments is to find known security issues before attackers do, and to plan accordingly.
- Inventory Your Devices and their Vulnerabilities – Vulnerability assessments will help you develop a comprehensive inventory of all the devices on your network, along with the vulnerabilities associated with each device. This inventory can help you better plan your budget for new and upgraded equipment, devices and security solutions.
- Establish a Baseline – Vulnerability assessments can help you establish a baseline for your organization to measure progress over time and optimize your existing security investments based on your risk levels. Conducting self-assessments can provide a more complete picture of how security is managed and improved over time.

How Often Should Vulnerability Assessments be Conducted?

While penetration tests should be conducted annually, vulnerability scans and assessments should be conducted at least monthly. This schedule can depend on many factors, such as your industry, the type of data you handle, your risk tolerance, business needs, and compliance requirements like the Health Insurance Portability and Accountability Act (HIPAA/HITECH), Payment Card Industry Data Security Standards (PCI-DSS) and the Gramm-Leach-Bliley Act (GLBA). In both cases, independent and objective experts like those at Motorola Solutions can help you get the most from these assessments.

Identify weak points with our range of pentesting services.
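To make one of the vulnerability classes above concrete, the sketch below shows how a CRLF injection splits an HTTP header and one minimal mitigation. The header-building helpers are hypothetical, not taken from any particular framework:

```python
def set_header_unsafe(headers, name, value):
    # VULNERABLE: raw CR/LF characters in user input let an attacker
    # terminate the header line and smuggle in headers of their own.
    headers.append(f"{name}: {value}")
    return headers

def set_header_safe(headers, name, value):
    # Mitigation: strip CR and LF so input cannot break out of the header.
    clean = value.replace("\r", "").replace("\n", "")
    headers.append(f"{name}: {clean}")
    return headers

# Attacker-supplied "language" value carrying a CRLF sequence and a
# forged Set-Cookie header.
payload = "en\r\nSet-Cookie: session=attacker-chosen"
unsafe = set_header_unsafe([], "Content-Language", payload)
safe = set_header_safe([], "Content-Language", payload)
print(unsafe[0])  # the injected Set-Cookie line now reads as a real header
print(safe[0])    # CR/LF removed; the payload remains inert text
```

Most modern HTTP libraries reject CR/LF in header values for exactly this reason, but hand-rolled response code remains a common place for this class of flaw to appear.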
Autonomous vehicles have been on a path to widespread availability and adoption for some time. But this technology is now moving forward much faster than most people realize. Both Tesla and General Motors this fall made news with their fully autonomous vehicle efforts. Tesla CEO Elon Musk in September said the test version of its fully self-driving Autopilot system would be released in "a month or so." Less than a month later, Tesla revealed plans to offer a limited number of drivers a beta of this capability. Around the same time, GM announced plans to launch a test of unmanned autonomous vehicles by the end of this year in San Francisco. Meanwhile, Waymo and Volkswagen are moving ahead with self-driving commercial truck efforts. Some companies are working on using automated follower trucks in the platooning of multiple trucks, too. This can entail having a human at the wheel of the lead truck only. Peloton Technology and Locomation expect automated following to scale quickly and more broadly.

The autonomous vehicle will be just as disruptive as the horseless carriage
The future of the autonomous vehicle is now. And I believe that this new generation of vehicle will be as disruptive today as the first horseless carriages were in the horse-and-buggy era. I know about this topic because I'm an automotive historian, specializing in the 1886-1913 period. I've written several articles for collector car magazines, and I've read almost every book ever written on how disruptive motorized vehicles were. I expect this to happen again. So, fasten your seatbelts, because big change is on the way. I believe that in about 20 years non-autonomous cars will be outlawed. If people like me want to drive their classic cars, they're going to have to put those vehicles on trailers, take them to tracks, and drive them there.
This is happening whether we like it or not – and will drive widespread change
I invested $100 to reserve a Tesla Cybertruck with a fully autonomous system. Some people say they'll never use an autonomous vehicle, but I truly believe that we will not have a choice in the matter at some point in the not-too-distant future. I say that because if every car on the road can communicate with the other vehicles, the propensity for accidents will be greatly reduced, because it removes the human factor of the distracted driver. Autonomous cars will also improve traffic flow. That could create efficiencies in road construction. In Los Angeles we have roads that are 16 lanes wide, and that's still not enough. The move to subscriptions will impact our homes and yards as well. We won't need big garages, so we'll be able to use that space for extra rooms. We won't necessarily even need driveways.

Autonomous vehicles need to be safe and secure regardless of class
The way government and industry rate and regulate vehicle safety will also need to change. What if someone asked you: What is the safest car in America? You can't answer that, because it's a trick question. The reason it's a trick question is that safety is rated by class. If you want the safest car in America, you're going to have to buy something like a Mercedes SUV. But most people can't afford that. So, the safety commission provides safety rankings by class. However, when it comes to autonomous vehicles, every vehicle must have the exact same safety and security protocol to be successful. This is the reality regardless of whether the vehicle is a Toyota Tercel, or some other subcompact car, or the most expensive Mercedes. Yet there's no way that automakers can rely on consumers to keep in-car security systems current. It's simple to put gas in a car, but people still run out of gas.
Imagine what will happen when people get reports that they need to update the security certificates in their cars, and that if they don't, those vehicles are no longer secure. Are they all going to immediately drive to their dealership to have that security certificate updated? It's just not going to happen. It is for this reason that consumers won't purchase cars; the responsibility for maintaining security will fall to subscription fleet operators.

The rise of driverless vehicles is converging safety and security
Autonomous vehicles are changing how we need to think about and address safety and security. Today there's a big difference between safety and security, but in the coming era of autonomous vehicles, there will be no difference between the two. Now the focus is on employing physical measures like airbags, crumple zones and seat belts to address safety. All vehicle safety features are based on the survivability of an accident, not intentional harm. No one ever thinks about the security of a car, short of the car getting stolen. But as autonomous vehicles become mainstream, electronic security becomes more important. In a scene in "The Fate of the Furious" movie, Charlize Theron's character says "make it rain." This prompts her techie coworkers to remotely take control of vehicles in a parking garage and drive those cars off the edge of the garage into a heap on the street below. If that were possible in real life, it would be the end of the autonomous vehicle market altogether. The public will accept accidents, such as an autonomous car running a stop light. But we will not, as a society, accept situations in which people intentionally run cars into walls or off cliffs or buildings. That's why the security of fully autonomous vehicles is of paramount importance.

Public key infrastructure and hardware security modules can provide the needed protection
Automakers, component makers and security companies like us are already working on this.
In fact, our work on security for autonomous vehicles has been in development for a long time. The automotive industry has selected public key infrastructure (PKI) to secure autonomous vehicles. PKI, a proven way to secure digital systems, provides security using a pair of cryptographic keys. Public and private keys work together to secure the manufacturing of cars and their components, as well as updates to them. But it is the private key that acts as the PKI root of trust. In the case of autonomous vehicles, the private key lives in the car or with the manufacturer. If a hacker compromises the private key, that hacker can gain control of the car and override its programming. So, it's important to keep the private key safe and secure at all costs. You could store the private key on an application server, but that's dangerous, because perimeter security is your only safeguard, and hackers frequently breach "secure" perimeters. Hardware security modules (HSMs), used during the manufacturing process and maintained by the manufacturer throughout the car's lifecycle, are a more effective way to secure PKI systems and keys. HSMs are specialized appliances that hold and secure cryptographic keys. They are widely used and serve as the ultimate root of trust for a range of applications and systems. Without an HSM, cryptographic keys are at high risk of compromise. This is a large and growing problem, since encryption keys are going to be in every car going forward. And it's why using HSMs for autonomous vehicle security infrastructure should become an industry standard.

HSMs may seem a small detail, but going without them can have big repercussions
Earlier I talked about the horse-and-buggy generation that was disrupted by the horseless carriage. Now let's turn back time even further, to the days of Ben Franklin, who famously cited the "for want of a nail" proverb about how small things can make a big difference. I see the cryptographic key as akin to the horseshoe nail.
If you’re not securing that one little piece, you could run the risk of everything else getting compromised. And that risk is only growing as time progresses since computer systems are becoming faster and their ability to crack crypto is getting stronger. But if you are safeguarding your systems using PKI and HSMs, you can keep those systems, autonomous cars, and the people who use them safe and secure.
A Layered Security Approach Is Essential in Today's Threat Climate
Ransomware, phishing scams, malicious email attachments, hacker attacks — the list of potential cybersecurity threats just continues to grow. Most experts agree that it's a matter of when, not if, an organization will be the target of a cyberattack. In an age when no two network threats are exactly alike, it is important to understand that different cybersecurity threats call for different security measures. But simply adding an array of security tools isn't enough. As we noted in a previous post, a fragmented approach to security can make it harder to identify and respond to threats. The best defense is a layered security architecture that recognizes both the strengths and limitations of various security products. Also known as defense in depth, layered security places multiple security controls throughout the IT environment. If an attack gets by one security tool, others are in place to increase the odds that the attack will be identified and stopped.

Layered Cyber Security Components
The foundational component of layered security is perimeter defense, which involves keeping malicious traffic from ever reaching the network. Perimeter defense begins with a firewall, which can be implemented using software, a hardware appliance or a cloud-based solution. Today's next-generation firewalls (NGFWs) combine traditional firewall functionality with deep packet inspection, intrusion prevention, antimalware and other security features. By consolidating multiple security functions on a single device, NGFWs reduce costs, simplify management of the security environment and are better equipped to combat sophisticated cyberattacks. Of course, perimeter security is no longer enough — more and more users are accessing systems and applications remotely using a wide range of devices.
Identity-based and device-aware access controls enable the enforcement of policies according to the user, device type, location and other criteria. Continuous monitoring of network traffic and file activity helps detect and stop threats that make it past initial defenses.

Layered Security Email Scanning
Email scanning and filtering is another component of layered security. Malware that can cripple PCs and corporate networks continues to spread through email attachments and malicious links. Rather than depending on users to detect these threats, email security solutions prevent malicious content from ever reaching inboxes. Active content filtering tools should also be considered for blocking websites that could compromise security. Encryption plays an essential role in a layered security approach. Confidential data — whether residing on a storage device, traveling across the network or sent in an email — is at risk if left in plain text. Strong encryption algorithms, combined with encryption keys that are regularly changed, protect the data from prying eyes and add another crucial layer of security. Data loss prevention tools work in concert with encryption to prevent the unauthorized distribution or sharing of proprietary information.

GDS Delivers Layered Security
The best cyberattack defense is a layered security architecture that recognizes both the strengths and limitations of various security products. GDS delivers a suite of fully managed security tools that work together to create a layered security strategy. Each cybersecurity solution includes best-in-class hardware and software backed by 24x7 monitoring, management and support by cybersecurity experts. GDS keeps the tools up to date as new threats emerge and the IT environment changes. The GDS team also responds rapidly to security incidents to minimize the impact of a cyberattack. Threats are constantly evolving, and no security infrastructure can be considered invulnerable.
But a layered security approach that addresses all aspects of cybersecurity can significantly reduce the risk of an attack. GDS takes this a step further with fully managed tools that are monitored and supported around the clock by our experts. Benefits of Managed IT Services from Global Data Systems - Strategic Managed IT: We help you solve your technology related business problems. - Connectivity: We get you reliable, secure connectivity anywhere in the western hemisphere in 48 hours. - Support: When you need help simply call our 24x7x365 support number. - Billing: Instead of managing hundreds of vendors - get one, easy to read bill from GDS.
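The email-scanning layer described above amounts to inspecting messages before they reach inboxes. A minimal sketch using Python's standard email module (the extension blocklist is illustrative; commercial gateways layer on signature scanning, sandboxing and sender reputation):

```python
from email.message import EmailMessage

# Illustrative blocklist; real scanners use far richer heuristics than this.
BLOCKED_EXTENSIONS = {".exe", ".js", ".vbs", ".scr"}

def is_suspicious(msg: EmailMessage) -> bool:
    # Flag any message carrying an attachment with a blocked file extension.
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return True
    return False

msg = EmailMessage()
msg["Subject"] = "Invoice"
msg.set_content("See attached.")
msg.add_attachment(b"MZ...", maintype="application",
                   subtype="octet-stream", filename="invoice.exe")
print(is_suspicious(msg))  # → True
```

In a layered architecture this check would be only one gate among several, so a message that slips past it can still be caught by endpoint antimalware or network monitoring.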
Once upon a time, in order to keep up with the rapid growth of data, cloud computing was introduced into organisations as a solution. In recent years, however, cloud computing has begun to show a decline in effectiveness, as the amount of data being generated continues to expand at overwhelming speed. As a result, organisations are again being introduced to a more effective architecture called fog computing. However, its implementation is not an easy task. This post explains the challenges in implementing fog computing.

Fog computing vs cloud computing
Fog computing is not a replacement for cloud computing but an extension of it that enhances the already established cloud architecture. Here's how – while the server nodes of cloud computing are located within the internet, fog computing places them at the edge of the network. With this arrangement, fog computing enhances cloud computing by managing data from mobile devices close to where it is generated, thus reducing latency and improving response time.

What are the challenges in implementing fog computing?
The need for fog computing is crucial. However, its implementation comes with various challenges, including:

1. Data privacy
As fog computing involves the deployment of fog nodes at the edge of the internet, more end users are close to the fog nodes. This increases the amount of sensitive data collected by the fog nodes compared to the remote cloud, making them a target for cyber attackers.

2. Security
The most critical security risk is that of a malicious user employing a fake IP address to access data stored in a particular fog node, since fog computing involves authentication of devices at different gateways. This leads to the need for an intrusion detection system at every layer of the platform.

3.
Network management
Without SDN and NFV techniques, managing the fog nodes, the network, and the connections between nodes is a heavy task, since they are connected to heterogeneous devices.

4. Positioning the fog servers
Positioning a group of fog servers requires analysis of the work done in each node in order to optimise the service delivered by fog computing as well as lower the maintenance cost.

5. Energy consumption
Fog computing involves high energy consumption, as fog environments deploy massive numbers of fog nodes.

Nevertheless, every new implementation is paired with challenges. Thus, organisations need to understand the challenges that must be addressed to leverage the implementation of the chosen technology, architecture or practice. E-SPIN Group supplies enterprise ICT solutions, consultancy, project management, training and maintenance for corporations and government agencies, doing business across the region and via the channel. Feel free to contact E-SPIN for your project requirements and inquiries.
As we start moving into the deep learning and AI world, it might be a good idea to reflect on how we went from basic data collection to an information-based world. Stored data is just stuff until you can figure out how to turn it into actionable information, and sometimes it takes years of collecting data to have enough to get to that point. Good examples of data that require long-term collection include: medical trials with new processes, medication or equipment; group behavior based on external factors that happen infrequently; and climatic change. The thing about data is you do not know what you do not know about it. A good example is "junk DNA," a term from the 1970s and 1980s that was used to describe DNA that did not code for genes and often sat between them. By the 2000s, it was discovered that some of that "junk" DNA regulated how and when genes were expressed. Good thing people stored that data, which was costly at the time given the cost of storage per byte. An even higher cost at the time was the cost to sequence the DNA, which is why it was kept. Historically this is pretty common: the cost of collecting the data was high and the cost to store it was also high, so we can thank those who preceded us for doing the right thing. They stored this old data, and we have learned a lot from it. We know that some weather forecast centers keep all the collected data every day, including the output of their forecast models. When these sites get a new forecast model, they run the old data through the new model and compare the model output with observations to see if the new model is better than the old model, and by how much. Doing this for one city might seem easy, but doing it for the whole planet requires a lot of data and information to compare. So the challenge falls to storage and data architects to preserve this data by developing an architecture that meets the need for performance, scalability and governance.
What is Information Management? Since the dawn of data collection, the whole point of collecting data has been to make sense of all the data being collected. Collecting data and doing analysis by hand was very time-consuming, and the time it took to change data into information was both labor-intensive and costly. The modern age of information began with the use of Hollerith punch cards for the 1890 U.S. Census, though they were blank, unlike the formatted ones you might have seen. The key point here is that having lots of data without tools to analyze the data and turn it into information is costly, and before the 1890 Census this was done by hand. Clearly the information generated in the 1890 census was very rudimentary by today's standards. But by the standards of the 1890s it was revolutionary that people could look at the results of the Census so quickly and make decisions (e.g., actionable information based on data). Today we wouldn't call the tabulations from the 1890 census information. The definition of information – compared to just data – should be based on the standards of the time, and that definition in many areas is now evolving rapidly. The size and scope of the information analysis market is expanding at an ever-increasing pace, from self-driving cars to security camera analysis to medical developments. In every industry, in every part of our lives, there is rapid change, and the rate of change is increasing. All of this is data-driven, and all the new and old data collected is being used to develop new types of actionable information. Lots of questions are being asked about the requirements around all of the data collected and information developed. What Does This Mean For You and Your Organization? There are many requirements based on the type of information and data you have.
Some might involve using what is called DAR (Data Encryption at Rest), which encrypts the storage device so that if it is removed from the system, the data is nearly or totally impossible to access (the degree of difficulty depends on the encryption algorithm and the size, complexity and entropy of the key or keys for the device). Understanding what is required from a governance point of view for your data or the resulting information is based on things like best practices for your industry, or regulations and bodies like the U.S. National Institute of Standards and Technology (NIST), ISO, HIPAA, SEC, and GDPR in Europe. The resulting architectural or procedural changes are the types of things that you will need to address as part of your architecture. You or your compliance group will know best how long you might need to keep data or information, but there are many other requirements that you will have to address to ensure that you meet your business objectives in the areas of performance, availability and data integrity, all of which need to be addressed for the life of the data and information. Compliance is not easy, nor is it free. The cost depends on many factors, but trying to force compliance after the architecture is planned and built is always far more costly than doing it beforehand. It is my opinion that when defining compliance requirements, you should be looking to the future rather than the present, because of the cost and challenge of shoehorning things in after the fact. That means that someone needs to be continuously studying compliance requirements in your industry, along with best practices. Data will only become more important in the future, and we need to be up to the challenge. About the Author: Henry Newman is CTO, Seagate Government Solutions
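Retention requirements like those discussed above translate directly into code. A minimal sketch of a retention check (the seven-year window and record names are hypothetical, not drawn from any particular regulation):

```python
from datetime import date, timedelta

# Hypothetical seven-year retention policy; real values come from your
# compliance group and the regulations that apply to your industry.
RETENTION = timedelta(days=7 * 365)

def must_retain(created: date, today: date) -> bool:
    # A record stays under retention until the policy window has elapsed.
    return today - created < RETENTION

records = {
    "trial-001": date(2015, 3, 1),
    "trial-002": date(2021, 6, 15),
}
today = date(2023, 1, 1)
for rec_id, created in records.items():
    status = "retain" if must_retain(created, today) else "eligible for disposal"
    print(rec_id, status)
```

Encoding the policy this way makes it auditable and easy to change when requirements evolve, which is far cheaper than retrofitting compliance after the architecture is built.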
The increase in etch and deposition steps, new materials, and new structures used in 2.5D and 3D packaging rely heavily on cleaning processes like photoresist strip and descum to ensure contamination free surfaces. Devices require varying levels of cleanliness using different materials throughout the manufacturing process so it is increasingly important to offer multiple cleaning options to achieve the required clean levels to ensure good devices and high yield. Surface activation, an important process tied to cleaning, conditions and prepares the surface for the next process step ensuring good quality adhesion resulting in high quality die. Prior to a wafer's entry into the fabrication process, its surface must be cleaned to remove any adhering particles and organic/inorganic impurities. Silicon native oxide also needs to be removed. Continually shrinking device design rules have made cleaning technologies ever more important to achieving acceptable product yields. In modern device fabrication, wafer cleaning procedures can make up 30% - 40% of the steps in the total manufacturing process. Wafer cleaning has a long developmental history within the semiconductor industry. See collections and studies of the classic methods from authors such as Werner Kern, and Tadahiro Ohmi for more detailed discussion. Contaminants on wafer surfaces may be present as adsorbed ions and elements, thin films, discrete particles, particulates (clusters of particles), and adsorbed gases. Figure 1 shows a schematic of the kinds of contaminants present on the wafer surface prior to it entering the process flow; Table 1 describes the impact of the different kinds of surface contamination on device performance while Table 2 shows the cleaning solutions employed to remove the different contaminants. 
| Type of Contamination | Examples |
| --- | --- |
| Metallic contamination | Alkali metals |
| Chemical contamination | Organic material; inorganic dopants (B, P); inorganic bases (amines, ammonia) and acids (SOx) |
| Oxide films | Native and chemical oxide films due to moisture, air |

Table 1. Wafer contamination and its effects.

| Contaminant | Cleaning Procedure Name | Chemical Mixture Description | Chemicals |
| --- | --- | --- | --- |
| Particles | Piranha (SPM) | Sulfuric acid/hydrogen peroxide/DI water | H2SO4/H2O2/H2O 3-4:1; 90°C |
| Particles | SC-1 (APM) | Ammonium hydroxide/hydrogen peroxide/DI water | NH4OH/H2O2/H2O 1:4:20; 80°C |
| Metals (not copper) | SC-2 (HPM) | Hydrochloric acid/hydrogen peroxide/DI water | HCl/H2O2/H2O 1:1:6; 85°C |
| Metals (not copper) | Piranha (SPM) | Sulfuric acid/hydrogen peroxide/DI water | H2SO4/H2O2/H2O 3-4:1; 90°C |
| Metals (not copper) | DHF | Dilute hydrofluoric acid/DI water (will not remove copper) | HF/H2O 1:50 |
| Organics | Piranha (SPM) | Sulfuric acid/hydrogen peroxide/DI water | H2SO4/H2O2/H2O 3-4:1; 90°C |
| Organics | SC-1 (APM) | Ammonium hydroxide/hydrogen peroxide/DI water | NH4OH/H2O2/H2O 1:4:20; 80°C |
| Organics | DIO3 | Ozone in de-ionized water | O3/H2O optimized mixtures |
| Native oxide | DHF | Dilute hydrofluoric acid/DI water | HF/H2O 1:100 |
| Native oxide | BHF | Buffered hydrofluoric acid | NH4F/HF/H2O |

Table 2. Cleaning solutions used to prepare substrates for the CMOS process.

Particle contamination can originate as airborne dust from a variety of sources including fab equipment, process chemicals, the internal surfaces of gas lines, wafer handling, gas phase nucleation in film deposition systems, and fab operators. Even particles of low nanometer dimension have the potential to generate "killer" defects, either through the action of physically occluding the formation of key features in the device (producing patterning, feature and implant defects) or by creating localized electrically weak spots in thin insulating films.
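The mixing ratios listed in Table 2 translate into batch volumes with simple arithmetic. A minimal sketch (the 2500 mL batch size and function name are illustrative, not from any process specification; the ratio is the SC-1 row above):

```python
def batch_volumes(ratio, total_ml):
    # Split a total batch volume according to a mixing ratio such as 1:4:20.
    parts = sum(ratio)
    return [total_ml * r / parts for r in ratio]

# SC-1 (APM): NH4OH : H2O2 : H2O = 1 : 4 : 20
nh4oh, h2o2, h2o = batch_volumes((1, 4, 20), 2500)
print(f"NH4OH {nh4oh:.0f} mL, H2O2 {h2o2:.0f} mL, H2O {h2o:.0f} mL")
# → NH4OH 100 mL, H2O2 400 mL, H2O 2000 mL
```

The same function applies to any ratio in the table, e.g. `batch_volumes((3.5, 1), 900)` for a nominal 3-4:1 piranha mix.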
Cleaning solutions for particle contamination include piranha cleans for gross particulate (and organic) contamination and SC-1 cleans for small, strongly adhering particles. Piranha solutions are extremely strong acids which oxidize many surface contaminants to produce soluble species that can be removed in solution. SC-1 solutions remove insoluble particles by oxidizing a thin layer of silicon on the surface of the substrate which then dissolves into the solution, carrying adsorbed particles with it. Modern SC-1 cleans employ megasonic (0.8 - 2.0 MHz) vibration to aid in the removal of particles from the surface. SC-1 solutions prevent re-adsorption of particles by inducing the same zeta potential, a measure of electrostatic repulsion, on both the particle and substrate surfaces. All cleaning solutions that contain hydrogen peroxide (piranha, SC-1, SC-2) leave a thin oxide layer on the silicon wafer surface. Semiconductor devices are particularly sensitive to metallic contaminants since metals are highly mobile in the silicon lattice (especially metals such as gold) and therefore they easily migrate from the surface into the bulk of the silicon wafer. Once in the bulk silicon, even moderate process temperatures cause metals to rapidly diffuse through the crystal lattice until they are immobilized at crystal defect sites. Such "decorated" crystal defects degrade device performance, permitting larger leakage currents and producing lower breakdown voltages. Metal contaminants can be removed from the substrate surface using an acidic clean such as SC-2, piranha or dilute hydrofluoric acid (HF); these cleans react with the metal to produce soluble, ionized metallic salts that can be rinsed away. Chemical contamination of the substrate surface can be broken down into three types: surface adsorption of organic molecular compounds; surface adsorption of inorganic molecular compounds; and an ill-defined, covalently-bound thin (ca.
2 nm) native oxide consisting of the chemical oxide/hydroxides of silicon, SiOx(OH)y. Surface contamination by organic compounds, either through airborne contamination or as residue from organic photoresists (PRs), is omnipresent in cleanrooms due to the presence of volatile organic solvents/cleaners and outgassing from polymer construction materials. Gross contamination by organics, such as occurs with incomplete PR removal, can impact device yields by leaving residues that form carbon during high temperature process steps. These carbon residues can form nuclei which behave as particle contaminants. Small amounts of residual metal present in the PR compound can be trapped on the surface in these carbon residues. PR residual contamination can be removed using piranha cleans and other high efficiency PR cleanup methods, as described in Dry Substrate Surface Cleaning. Organic contamination due to ubiquitous volatile airborne contaminants also requires removal from the wafer surface. The presence of these contaminants can hinder the removal of native oxide by dilute HF solutions (see below), producing poorly defined interfaces between the gate oxide and the substrate and gate electrode. Poor interface characteristics strongly degrade gate oxide integrity. The presence of organic compounds on the surface can affect the initial rates of both thermal oxidation and CVD processes, introducing undesirable and unknown variations in film thickness. The SC-1 clean removes these organic residues through oxidation by peroxide and solvation of the products by NH4OH. The SC-1 clean slowly removes any native oxide, replacing that layer with a new oxide produced by the oxidizing action of the peroxide. In recent years, ozone dissolved in DI water (DIO3) has found increasing use as a replacement for older piranha and SC-1 cleans, offering a "green" and safer alternative for the removal of organic contaminants.
Chemical compounds containing dopant atoms such as boron and phosphorus can be present on wafer surfaces due to effects such as the outgassing of phosphorus-containing flame retardants or dopant residuals in process tools. If they are not removed from the wafer surface prior to high temperature processing, these elements can migrate into the substrate, modifying the targeted resistivity. Other kinds of volatile inorganic compounds such as basic compounds like amines and ammonia and acidic compounds like sulfur oxides (SOx) will also produce defects in semiconductor devices if they are present on the substrate surface. Acids and bases can cause unintentional shifts in the basicity or acidity of chemically amplified resists leading to problems in pattern generation and resist removal. These compounds are highly reactive and will readily combine with other volatile ambient chemical species to create particles and haze due to the formation of chemical salts on the substrate surface. Adsorbed acidic and basic species can be removed from the substrate surface by the combined action of SC-1 and SC-2 cleans. Silicon, like many elemental solids, naturally forms a thin layer of oxidized material on its surface by reaction with oxygen and moisture in the ambient air. The chemical formulation of this layer is not well-defined, being a more or less random aggregation of Si-O-Si, Si-H and Si-OH species. The presence of this native oxide on the silicon surface causes problems in semiconductor device manufacturing since it can lead to difficulties in controlling the formation of very thin thermal oxide thicknesses. Any native oxide that is present on the substrate during thin gate oxide formation will electrically weaken the gate insulator through the incorporation of hydroxyl groups. Additionally, if native oxide is present on the silicon surface of a contact pad, it will increase the electrical resistance of that contact. 
Over the past 50 years, our understanding of the nature of silicon native oxide and its impact on device performance has greatly increased. These studies found that very dilute solutions of HF in de-ionized (DI) water, or dilute solutions of ammonium fluoride (NH4F), HF and DI water (buffered oxide etch, BOE), completely remove silicon native oxide, leaving a hydrogen-terminated clean silicon surface according to Figure 2. The first successful wet-cleaning process for front-end-of-line (FEOL) silicon wafers was developed at RCA by Werner Kern and co-workers and published in 1970. Since then, there have been many developments and successful modifications of the approach, and RCA cleaning continues to be the primary FEOL pre-deposition cleaning in the industry today. RCA cleaning procedures are a combination of the different procedures described above. The process consists of consecutive SC-1 and SC-2 solutions, followed by treatment with a dilute HF solution or buffered oxide etch (BOE). The product is a clean, hydrogen-terminated silicon surface, ready to be used in the process flow.
<urn:uuid:62c5f583-e1e3-4dee-b01d-80fd8a89e478>
CC-MAIN-2022-40
https://www.mks.com/n/wafer-surface-cleaning
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00332.warc.gz
en
0.891382
2,419
3.21875
3
In recent weeks, one thing that keeps cropping up repeatedly when talking to clients is that they want to know how they can deploy technology to make businesses COVID-secure. At the same time, safety guidance is ever-evolving, so any solution has to be flexible. And any investments need to deliver a return even when the pandemic is over. In particular, I am often asked about AI in video analytics, and how it can contribute in the time of coronavirus – and that is what I want to focus on in this post. What is AI? Elements of AI are already woven into our lives. When you talk to a chatbot, for instance, AI makes the experience more natural, and performs routine tasks, such as answering frequent questions, setting up appointments or placing orders. When you finish that new TV series on Netflix, AI collects information on your preferences and creates a tailored list of viewing recommendations. When you open Google Maps in a hurry to get somewhere, AI helps you find the fastest route. And those are just a few examples. Yet despite its prevalence, not everyone knows what AI means. Put very basically, AI, i.e. artificial intelligence, describes machines that mimic human mental processes, such as problem solving. These machines can "learn", respond to changing inputs, and perform specific, even highly sophisticated tasks. Let's look at a popular example: You want a machine to learn to recognise a cat. You feed it an enormous quantity of data, in this case photos of felines. The software parses the data, identifying correlations and recurring patterns, such as whiskers, pointy ears and furry tails. It can then make informed decisions based on this "experience" – picking out a moggy from, say, a football (no ears) or a car (no fur). AI in video analytics Video analytics typically uses a subset of AI called deep learning. 
Deep learning leverages a multi-layered structure of algorithms called an artificial neural network (ANN) that is designed to work like a biological brain. But before we get lost down a terminological rabbit hole – the key takeaways for us are that deep learning is exceptionally accurate and remarkably independent: it can correct itself and fine-tune its performance without expert human intervention. So compared to standard analytics solutions, cameras with embedded AI engines are much more powerful, and deliver better results. They offer rich functionality and an outstanding degree of flexibility. With the help of a software development kit, applications can be developed for the cameras in line with specific and evolving needs. Back to COVID For example, right now you might want to train a camera to recognise when shoppers, commuters, employees etc. are not wearing face masks. Or you might want it to recognise when people are not maintaining social distancing, or to identify where high foot traffic is creating congestion. Using this technology, you can count the number of people at building entrances, estimate crowd sizes, manage queues, and much more. Furthermore, thresholds can be defined, and alerts sent when cameras observe the rules being flouted. This helps create a safe environment for both customers and employees – and it reassures them that protective measures are in place, as many people have not fully understood or correctly adopted preventative behaviours. Flexible and fit for the future AI-based video analytics can be harnessed in diverse scenarios, from retail to warehousing, logistics, airports, and government. The cameras can support multiple applications simultaneously. And they are not limited to the same, fixed applications – they can adapt. So in the future, when COVID-related rules (hopefully) cease to be an annoying necessity, new apps can be developed. For instance, a company may want to better identify item removal, e.g. 
in shoplifting, to detect spills or a person who has fallen and needs assistance – or even to spot a pointy-eared animal with a fluffy tail that has sneaked into the store. The possibilities are virtually limitless.
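The threshold-and-alert logic described above can be sketched in a few lines. The zone names and occupancy limits here are illustrative assumptions, and in a real deployment the person counts would come from the camera's embedded AI engine rather than being hard-coded:

```python
# Illustrative sketch: raise alerts when a camera zone exceeds its
# configured occupancy limit (e.g. for social distancing or queue control).

def check_zones(counts, limits):
    """Return alert messages for zones whose person count exceeds the limit."""
    alerts = []
    for zone, count in counts.items():
        limit = limits.get(zone)
        if limit is not None and count > limit:
            alerts.append(f"{zone}: {count} people detected (limit {limit})")
    return alerts

counts = {"entrance": 12, "checkout-queue": 4}   # from people-counting analytics
limits = {"entrance": 10, "checkout-queue": 6}   # configured thresholds
print(check_zones(counts, limits))
```

In practice the returned alerts would be pushed to a dashboard or messaging system rather than printed.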
<urn:uuid:10ff4222-1130-4db8-916d-4bcb9ef64878>
CC-MAIN-2022-40
https://i-pro.com/global/en/surveillance/news/ai-video-analytics-what-it-and-why-it-matters-during-and-post-covid
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00332.warc.gz
en
0.943449
828
2.578125
3
How is Industrial IoT Making Manufacturing Safe? - COVID-19 is redefining how industries see healthcare and safety measures in 2020 and beyond. - Industrial IoT will play a major role in implementing safety measures and ensuring business continuity. - IoT based sensors, wearables, AR/VR, robots, and automation will be used for social distancing, remote accessibility, reducing onsite accidents, and improving other safety issues. As the novel coronavirus continues to persist, government, healthcare authorities, and industry leaders are focused on developing innovative solutions. IoT is playing a huge role in these innovative solutions for a wide variety of industries, especially manufacturing. Manufacturing has been one of the areas most affected by COVID-19, and it is where IoT has some of its greatest potential. In fact, it has its own term, Industrial IoT or IIoT, a major element of Industry 4.0. Its implementation had already optimized operational efficiency, increasing workers' safety and reducing workplace injury, but with COVID-19, health safety measures have taken on new dimensions. In addition to wearing masks and PPE suits and maintaining at least 6 feet of distance, the need of the hour for the manufacturing industry is data-gathering sensors, real-time analytics, automation, and working robots. By connecting working staff with these digital technologies, factories will not only make the working environment safer but also improve bottom-line revenue. Sensors for Social Distancing Using IoT sensors in manufacturing plants can limit employees' movement inside the plant and hence reduce the chance of the spread of the virus. These sensors can quickly detect and alert if a new person enters an already occupied space. Using these sensors, plant managers can set up zones as well, to keep working staff in the areas where they are authorized to be. This applies to equipment and tools as well, as the virus can stay on these surfaces for longer. 
IoT sensors can also create reports that show managers where and when employees are violating the rules. Remote Access to Machines As industries are forced to reduce the number of workers working within facilities, remote access to machines is important. Managers can monitor equipment functions remotely and in real time with the help of IoT sensors. The sensors can even record the equipment's readings, reducing the need for employees to be present on-site. These sensors can also alert if any equipment operates out of range. Managers can then schedule maintenance time, avoid crowding of people working in the same place, and prevent unplanned downtime. High Tech Wearables Smart wearables are one of the most innovative ways to improve safety in the industrial workspace. With IoT sensors embedded in the wearables, managers can gain information and determine whether a particular task has been designed correctly. Wearables can also measure motions of the body, record physiological data, and collect information about body temperature as well as the surroundings, noise levels, and hazardous atmospheres. IIoT wearables complement protective equipment like goggles, high visibility vests, personal protective equipment (PPE), etc. Connected safety glasses with augmented reality components help employees follow best working practices by giving a visual reference for complicated procedures. Augmented and Virtual Reality The use of Augmented Reality (AR) and Virtual Reality (VR) is helping the manufacturing industry to keep engineering and design work remote. It allows engineers in, for example, the automotive industry to design virtually, as well as to collaborate and work together without being in the same room. AR/VR also plays an important role in training employees and transferring knowledge. AR instructions can be easily published and viewed on a variety of devices across the enterprise. Better-trained workers cause fewer accidents, which improves the safety of the factory. 
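The out-of-range alerting described under remote machine access can be sketched as follows. The sensor names and operating ranges are assumptions for illustration; real values would come from the plant's sensor network:

```python
# Illustrative sketch: flag equipment whose latest sensor reading falls
# outside its configured operating range, so maintenance can be scheduled
# remotely instead of sending extra workers on-site.

OPERATING_RANGES = {                 # assumed example ranges, per sensor
    "pump-temp-C": (10.0, 80.0),
    "line-pressure-kPa": (200.0, 450.0),
}

def out_of_range(readings):
    """Return the sensors whose reading is outside the allowed range."""
    flagged = {}
    for sensor, value in readings.items():
        low, high = OPERATING_RANGES.get(sensor, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            flagged[sensor] = value
    return flagged

print(out_of_range({"pump-temp-C": 92.5, "line-pressure-kPa": 300.0}))
```

A scheduler or alerting service would consume the flagged sensors and open maintenance tickets.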
IoT Based Access Control IoT based access control allows companies to limit on-site work by monitoring equipment remotely. All they need is to connect their critical assets to cloud-based software. Employees can then access tools and equipment remotely while following security standards, protecting the company and its customers. IoT based access is also used to grant contactless entry to a facility. It can detect employees with elevated temperatures and offer automated COVID-19 self-attestation questionnaires, asking employees to verify whether they have recently come in contact with an infected person. This functionality helps keep employees' health a top priority. Industrial Robots and Automation Automated vehicles and drones have huge potential for manufacturing plants, especially in such challenging times as COVID-19. Automated vehicles help to streamline inventory management, automate functions in warehouses and manufacturing facilities, and support quality assurance tests. They can also automate the task of picking up goods from warehouse shelves. To automate routine and mechanical tasks, robotics and AI are used, freeing up employees for higher-value tasks. Today there are, on average, 84 robots for every 10,000 workers, according to the International Federation of Robotics.1 Using automated robots on the factory floor reduces the number of workers who need to be physically present while keeping production moving quickly. Robots, moreover, are well aware of the space they are working in and hence can avoid machine-human collisions. How can iLink help you emerge stronger? Though industrial IoT can be leveraged for many purposes, the safety of workers will remain a priority for all companies. Implementing an IoT solution isn't just one big project, though; it involves strategic planning along the journey. At iLink, our experts help organizations to customize solutions as per their needs. 
Our industry-based insights and deep technology expertise help organizations to implement new IoT products and emerge strong and successful in any challenging time.
<urn:uuid:bccda4f8-4dc7-4603-9177-f3521be56982>
CC-MAIN-2022-40
https://www.ilink-digital.com/insights/blog/how-is-industrial-iot-making-manufacturing-safe/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00332.warc.gz
en
0.937756
1,158
2.671875
3
Thirty senior citizens from New York have learned how to use the computer, thanks to students from Pace University. The program was created and organized by a gerontologist from the university, in an attempt to allow seniors who want to be a part of the digital age to do so. The computer can be a scary thing for those who have no idea how to use it. But it seems that there are more senior citizens online than there were a few years ago. Facebook and email let them keep in touch with family and connect with old friends. Services such as Facebook and Skype allow the elderly to stay connected with their loved ones, but someone has to teach them! Not all senior citizens want to learn how to use the computer, but options should be available for those who do. Kudos to you, Pace University!
<urn:uuid:707eba1c-ea6a-43aa-82a2-c5e9e5c9b0fa>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/online-seniors
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00332.warc.gz
en
0.984925
169
2.859375
3
As organizations look to improve their infrastructure and develop new software, the cloud offers tempting benefits for extensibility and efficiency in development and operations (DevOps). Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) are two solutions that can provide these benefits to organizations but at first glance seem very similar. What's the difference between IaaS and PaaS? What is IaaS? Infrastructure-as-a-Service provides organizations with completely cloud-hosted servers and an associated operating system (OS) with which they can do whatever they please. This gives them the ability to implement their software completely in the cloud, or house other necessary infrastructure without the need for on-prem server stacks. Using IaaS for development means that the organization deploys all of the software stack above the virtualization layer itself. This includes middleware, runtime, and other peripheral applications. IaaS also allows organizations to scale more tedious needs, such as storage and disaster recovery. By offloading these to cloud infrastructure, organizations can use the time normally spent worrying about these tasks to better their businesses in other crucial ways. Several key players in the IaaS space include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). What is PaaS? Platform-as-a-Service gives organizations a fully featured platform in which they can develop, test, and deploy their applications from the cloud. In essence, with PaaS, developers only need to bring their code, which can usually be written in a variety of popular languages; the hardware, networking, and security (and potential infrastructure failure) are handled by the PaaS. PaaS streamlines operations and provides agility to organizations looking to get the most bang for their buck. 
Like IaaS, PaaS solutions are generally charged by some combination of time, amount of compute time/power used, and network bandwidth/storage. The good news is that organizations do not need to implement their own dev environment, either on-prem or in the cloud. And, with the rise of remote work in the modern world, PaaS gives developers the opportunity to perform their duties regardless of where they are. Examples of popular PaaS vendors include Heroku, AppScale, and AWS Elastic Beanstalk. Comparing IaaS and PaaS At the end of the day, IaaS and PaaS are really quite similar. PaaS provides customers with an option that reduces the amount of work/hosting needed from product developers and engineering. This also means that PaaS-leveraging organizations do not need quite as many DevOps positions; the majority of the work is already laid out for them in the PaaS. It does, however, leave them beholden to whatever the offerings of their particular PaaS provider are. This can be troublesome for smaller/less-experienced teams. Also, they are vulnerable to being locked in to the provider, the particular language or stack being used, and a cost model that is likely to scale up with usage. IaaS requires a bit more work/setup to properly implement, but it gives DevOps engineers the ability to freely develop their app to the specs they want. Beyond that, it provides massive cloud data centers that can be used to offload almost all on-prem infrastructure, and it is theoretically easier to switch from one Linux or Windows cloud provider to another. Both solutions also have one major thing in common. Given the criticality of the data stored in IaaS and PaaS, both require strong identity management in order to protect them from identity compromise. Traditionally, identity management stems from an on-prem directory service. Unfortunately, many of these options struggle to extend identities to cloud infrastructure without a good bit of help from other tools. 
That’s why a cloud directory service is an ideal tool for IaaS and PaaS identity management. Cloud Directory Service for IaaS & PaaS A cloud directory service gives organizations all of the usual benefits of on-prem identity management, but from a centralized source in the cloud. A cloud directory service connects end users to the wide variety of IT resources they need, including IaaS and PaaS, using several authentication protocols. For example, using SSH keys, a cloud directory service securely authenticates user access to Linux cloud servers, regardless of the provider. SAML-based single sign-on (SSO) through a cloud directory service can be used for a variety of PaaS tools. A cloud directory service also provides multi-factor authentication (MFA) applied to VPNs through RADIUS, which helps secure IaaS access as well. By locking access to IaaS and PaaS down tight, IT admins can rest assured that only the right folks have access to the resources they need; no more, no less. You can learn more about using a cloud directory service in tandem with IaaS and PaaS by contacting us. Our expert team can help you navigate through cloud identity management infrastructure, systems, applications, networks, and more.
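The IaaS/PaaS division of responsibility discussed above can be sketched as a simple model. The layer names follow the commonly drawn cloud responsibility stack; exact boundaries vary by provider and offering, so treat this as an illustration rather than a definitive map:

```python
# Sketch of the commonly drawn split between what the customer manages and
# what the provider manages under IaaS vs. PaaS. Boundaries are illustrative.

STACK = ["applications", "data", "runtime", "middleware", "os",
         "virtualization", "servers", "storage", "networking"]

CUSTOMER_MANAGES = {
    # With IaaS, the customer deploys everything above the virtualization layer.
    "iaas": {"applications", "data", "runtime", "middleware", "os"},
    # With PaaS, the customer typically brings only code and data.
    "paas": {"applications", "data"},
}

def managed_by_provider(model):
    """Return the stack layers the provider handles, in stack order."""
    return [layer for layer in STACK if layer not in CUSTOMER_MANAGES[model]]

print(managed_by_provider("paas"))
```

Listing the layers this way makes the article's point concrete: moving from IaaS to PaaS shifts runtime, middleware, and OS management from the customer's DevOps team to the provider.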
<urn:uuid:bddaa209-8f52-499b-9d06-7a1d94016728>
CC-MAIN-2022-40
https://jumpcloud.com/blog/iaas-paas-difference
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00332.warc.gz
en
0.939628
1,092
2.515625
3
What is Enterprise Architecture As information technology continues to evolve at breakneck speed, larger organizations require a structured framework to assimilate the latest technological innovations and assess how they relate to business strategy. Whether it's machine learning, automated analytics, or cloud computing, businesses need a plan to help them decide what technologies to invest in and how to use them. This is where Enterprise Architecture (EA) comes in. What is Enterprise Architecture? Enterprise Architecture is a cross-functional discipline for designing and implementing a company's complex enterprise-wide technology infrastructure. The goal is to design an IT framework that aligns with business strategy, can be easily maintained and scaled, and uses technology efficiently. EA is a key component of a company's IT strategy. Enterprise Architecture entails the development of the following four "architectures": - Business Architecture: defines business strategy and organization, key business processes, and governance and standards. - Application Systems Architecture: provides an outline for deploying individual systems. This includes the connections between application systems and their relationships to vital business processes. - Data Architecture: relates to the structure of logical and physical data assets as well as any related data management resources. - Technology Architecture: documents the hardware, software, and network infrastructure required to support the deployment of mission-critical applications. Why is Enterprise Architecture Important for a Company to Have? Enterprise Architecture describes adaptable pathways to achieve specific business goals over a given time horizon, while staying flexible enough to adopt beneficial new tools and technologies. As technology evolves, EA enables organizations to maintain a balance between IT assets and business processes. It provides a unified view of how information systems work across the enterprise. 
Key Benefits of Enterprise Architecture There are many benefits to Enterprise Architecture. It can help with: - Ensuring that a company's IT strategy aligns with its business goals - Eliminating knowledge and documentation gaps in the company's current IT capabilities by developing formal business process models - Improving communication between stakeholders across different departments within the organization - Improving productivity by eliminating redundancies in workflows and processes - Reducing costs by increasing interoperability between systems Enterprise Architecture at Nolij At Nolij, our EA team works on some of the nation's most complex systems at the Department of Defense. The Nolij EA team has been integral in defining key workflows for COVID-19 vaccination distribution within the Defense Health Agency. Here are some of our key EA services. - Nolij provides EA lifecycle support, including artifact development, alignment, planning, integration, federation, review, and publication. - Nolij develops and documents requirements, and defines program charters, guides, methodologies, and modeling guidebooks. - We lay the groundwork for EA analysis and planning, and develop documentation for systems architecture, data architecture, and solutions architecture. The eventual goal of any Enterprise Architecture strategy is to enhance the timeliness and reliability of business information. Employed effectively, EA can be highly beneficial in standardizing and consolidating organizational processes for increased consistency and efficacy.
<urn:uuid:95f8ffcc-0500-4d80-a82c-8202b9b96b92>
CC-MAIN-2022-40
https://nolijconsulting.com/what-is-enterprise-architecture/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00532.warc.gz
en
0.895545
643
2.859375
3
One form of authentication is continuous authentication. It involves granting users access to corporate resources as long as they continue to authenticate themselves. It is based on the level of risk and contextual information about the user, such as their role, location, and type of device. Unlike traditional authentication mechanisms, this mechanism is enforced from login through the end of the user session. Continuous authentication is a process that uses technology to identify you, so you don't have to authenticate yourself each time you use the Internet. Continuous authentication looks at your current context and then dynamically determines whether or not you should continue to be authenticated. If risk factors change, such as the user's location, posture, or device, the system automatically re-evaluates whether the user should remain authorized. Continuous authentication provides security for hybrid workforces by allowing authentication to the corporate network while restricting access if suspicious activity is detected. Authentication methods can be categorized according to different factors. Here are some common authentication methods: Passwords: The most familiar knowledge-based factor. When the correct password is entered, the system recognizes the user. The downside of passwords is that users often forget them, especially if they have a long list of them to remember. An attacker can also steal them. Token: A possession-based factor, such as a proximity card. Token-based authentication mechanisms offer more security since they require the attacker to gain physical access to the token item. Behavioral biometrics: Behavior-based biometrics identify people based on their unique behavioral characteristics. 
Behavioral biometric authentication considers how someone uses their fingers or their phone to authenticate themselves. This type of authentication is used for online payments, e-commerce, and online banking. Physiological biometrics: Physical characteristics (fingerprints, heartbeat patterns) are often used in security-based applications like biometric authentication. Biometric technologies, which involve physical characteristics or behaviors, are gaining popularity due to their high accuracy. Multi-factor authentication (MFA): A single factor alone is not secure. Therefore, many companies implement multiple authentication methods, for instance passwords and tokens. An excellent example of multi-factor authentication is 2-factor authentication. This method requires the user to give two types of authentication to prove their identity, such as a password and a code or token sent to their device. Certificate-based authentication: Authentication using a digital certificate allows you to identify someone without asking them for their password. The digital certificate ensures that the user's information is kept safe, allowing for a secure sign-up. Continuous user authentication goes beyond traditional methods to take security to the next level. Authentication scores are continually assessed based on factors, such as device posture and location, that help indicate when suspicious activity or attempts at unauthorized access occur. For example, if a user logs in to a device, the system checks the user's account information and determines whether it is valid. In addition, you can set different confidence scores according to the type of action or resource involved. Adaptive authentication allows the scanning of end-user devices both before and throughout a user session with corporate applications. An admin can define how a user is authenticated and authorized to access their apps based on location, device posture, or user risk score. 
With adaptive authentication, these risk factors are evaluated continuously so that admins can enforce (and adapt) policies as needed. We use AI to develop risk-based authentication that uses machine learning to gain a real-time view of the context of any login. The solution monitors and analyzes a user's activity, taking into account location, time of day, device, sensitivity, and other factors, to create an action plan and identify potential risks. If a request doesn't meet the requirements, the system will ask for more information. The extra information might include a temporary code, a security question, biometric data, or a code sent to a smartphone. To identify attack vectors in hybrid and remote workforces, we first need to understand where they arise. Users bring their own devices for many reasons, whether they want to use their mobile devices for work purposes or to increase productivity. An attacker can gain access to your network through poorly secured networks. This could cause data leaks, so it's essential to secure your system correctly. Employees should only use trusted Wi-Fi networks that have not been compromised. Continuous authentication prevents unauthorized users from accessing the system by detecting access requests from non-secure networks or devices. Letting remote employees choose their own passwords for remote access accounts can create vulnerabilities, and enforcing password hygiene manually is too much work for the organization. It's dangerous for organizations to allow employees to use inadequate passwords, reused passwords, or passwords shared with coworkers. Securing passwords is the first step in preventing a data breach. A good user experience is essential to any business, increasing productivity and improving workflow. However, whenever users log in to an application, they're often required to log back in later. This results in less productivity for the user. 
Continuously authenticated users gain access to their regular apps and resources with single sign-on. One of the biggest challenges for any online business is convincing potential customers that your website or service is trustworthy. Many businesses use continuous authentication to prevent fraud. It's used in many different industries, including finance. Mobile analytics solutions gather information about a customer's device, such as swipe patterns, keystrokes, GPS coordinates, and other data. This information is used to develop a user profile. When the system discovers a deviation from this pattern, it raises an alert or requests further verification of the user's identity. Continuous authentication enables these profiles to work with the bank's risk solution. This integration helps determine the most accurate risk score to detect fraud. The advantage of continuous risk-based authentication is that it allows security teams to match the risk to the transaction requested. When combined, the authentication system and anti-fraud technology can expand the security coverage over a more extensive attack surface. ADC+ allows you to securely access applications within your enterprise without compromising productivity. This integrated on-prem & SaaS solution offers adaptive authentication and single sign-on (SSO) to help improve your hybrid workforce's ability to securely access apps and application data.
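The risk-based decision logic described above can be sketched as a scoring function: each contextual factor contributes to a risk score, and the score determines whether to allow the request, require step-up verification (such as a temporary code), or block it. The factor names, weights, and thresholds below are assumptions for illustration, not any vendor's actual model:

```python
# Illustrative sketch of risk-based (adaptive) authentication scoring.
# Weights and thresholds are made-up values for demonstration only.

RISK_WEIGHTS = {
    "unknown_device": 30,
    "new_location": 25,
    "off_hours": 15,
    "untrusted_network": 30,
}

def decide(context):
    """Map a request's contextual risk factors to an access decision."""
    score = sum(RISK_WEIGHTS[f] for f in context if f in RISK_WEIGHTS)
    if score >= 60:
        return "block"
    if score >= 30:
        return "step-up"   # ask for a temporary code, biometric, etc.
    return "allow"

print(decide({"new_location", "off_hours"}))
```

A production system would derive the score from machine-learned models over behavioral and device signals rather than fixed weights, but the allow / step-up / block structure is the same.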
<urn:uuid:1577ed42-3cb9-4de9-9300-e996e3782c3e>
CC-MAIN-2022-40
https://appviewx.com/education-center/continuous-authentication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00532.warc.gz
en
0.915503
1,342
3.6875
4
What is Safe Mode? Basically, Windows "Safe Mode" is a boot option that loads only the most basic drivers needed for Windows to run at all. There are a few different sets of drivers that can be loaded, depending on which kind of "Safe Mode" you select to be started. The one we will usually choose to fight off malware is the "Safe Mode with Networking" option, since that will allow us to download and update the tools we need. When and why is it useful? Running your computer in "Safe Mode" can be very convenient for trouble-shooting, because the limited number of startups often eliminates the reason for the computer to malfunction. Since only the bare essentials start up in "Safe Mode", there are hardly any reasons for conflicts to occur. With trouble-shooting in this context, we obviously include removing malware that hinders the normal use of programs when Windows is fully loaded. Although some types of malware (e.g. rootkits) are able to run in "Safe Mode", most will not. So "Safe Mode" gives you the opportunity to remove malware with less of a struggle. How do I get there? The preferred, but not always available, way to boot to "Safe Mode" is by using msconfig. After running msconfig you can find the "Safe boot" options under the "Boot" tab. Once you have made your changes, click "Apply" and "OK". Then you should see a prompt like this one. The other method involves tapping the F8 key during boot (for Windows 8, while pressing the Shift key). If successful, you will be shown a menu including these options. As mentioned earlier, when you are fighting malware you will usually want the "Safe Mode with Networking" option. Select the one you want and hit "Enter". Why is it not "safe"? Despite the name, "Safe Mode" is not very safe. In fact, you are probably safer in normal mode. 
Active protection software, such as your anti-virus and Malwarebytes Anti-Malware, will not be running in "Safe Mode", and the only software firewall that works is the built-in Windows firewall, if enabled, and only in the "Safe Mode with Networking" mode, obviously. So my advice would be to use it only if necessary and then get back to normal as soon as possible. If you used the msconfig method to reboot into "Safe Mode", you will have to use msconfig to uncheck the "Safe boot" option and reboot to get back into normal mode. This post tries to explain what safe mode is, when you could need it, and why it is not recommended to use it for other reasons. More elaborate explanation on how to get into safe mode, also for older Windows versions: Computerhope Article about which drivers get loaded in safe mode by Microsoft Basic instructions on how to use Malwarebytes Anti-Malware in safe mode If you came here because your computer automatically booted into safe mode, read this article at Howstuffworks.
<urn:uuid:1eed4457-09bb-4a07-8aba-fc0d59a6bac1>
CC-MAIN-2022-40
https://www.malwarebytes.com/blog/news/2015/01/safe-mode
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00532.warc.gz
en
0.918841
687
2.8125
3
What is virtualization? Virtualization uses software to create an abstraction layer over computer hardware that allows the hardware elements of a single computer—processors, memory, storage and more—to be divided into multiple virtual computers, commonly called virtual machines (VMs). Each VM runs its own operating system (OS) and behaves like an independent computer, even though it is running on just a portion of the actual underlying computer hardware. It follows that virtualization enables more efficient utilization of physical computer hardware and allows a greater return on an organization's hardware investment. Today, virtualization is a standard practice in enterprise IT architecture. It is also the technology that drives cloud computing economics. Virtualization enables cloud providers to serve users with their existing physical computer hardware; it enables cloud users to purchase only the computing resources they need, when they need them, and to scale those resources cost-effectively as their workloads grow. For a further overview of how virtualization works, see our video "Virtualization Explained" (5:20): Benefits of virtualization Virtualization brings several benefits to data center operators and service providers: - Resource efficiency: Before virtualization, each application server required its own dedicated physical CPU—IT staff would purchase and configure a separate server for each application they wanted to run. (IT preferred one application and one operating system (OS) per computer for reliability reasons.) Invariably, each physical server would be underused. In contrast, server virtualization lets you run several applications—each on its own VM with its own OS—on a single physical computer (typically an x86 server) without sacrificing reliability. This enables maximum utilization of the physical hardware's computing capacity. 
- Easier management: Replacing physical computers with software-defined VMs makes it easier to use and manage policies written in software. This allows you to create automated IT service management workflows. For example, automated deployment and configuration tools enable administrators to define collections of virtual machines and applications as services, in software templates. This means that they can install those services repeatedly and consistently without cumbersome, time-consuming, and error-prone manual setup. Admins can use virtualization security policies to mandate certain security configurations based on the role of the virtual machine. Policies can even increase resource efficiency by retiring unused virtual machines to save on space and computing power.
- Minimal downtime: OS and application crashes can cause downtime and disrupt user productivity. Admins can run multiple redundant virtual machines alongside each other and fail over between them when problems arise. Running multiple redundant physical servers is far more expensive.
- Faster provisioning: Buying, installing, and configuring hardware for each application is time-consuming. Provided that the hardware is already in place, provisioning virtual machines to run all your applications is significantly faster. You can even automate it using management software and build it into existing workflows.

For a more in-depth look at the potential benefits, see "5 Benefits of Virtualization."

Several companies offer virtualization solutions covering specific data center tasks or end-user-focused desktop virtualization scenarios. Better-known examples include VMware, which specializes in server, desktop, network, and storage virtualization; Citrix, which has a niche in application virtualization but also offers server virtualization and virtual desktop solutions; and Microsoft, whose Hyper-V virtualization solution ships with Windows and focuses on virtual versions of server and desktop computers.
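The resource-efficiency benefit described above comes down to simple arithmetic. The toy calculation below (all numbers invented for illustration) compares three dedicated servers, each running one app, against the same workloads consolidated as VMs on a single host:

```python
# Toy model of server consolidation: several underused dedicated
# servers vs. the same workloads packed onto one virtualized host.
# All core counts are invented for illustration.

def utilization(used_cores, total_cores):
    """Fraction of CPU capacity actually in use."""
    return used_cores / total_cores

# Before: three dedicated 8-core servers, one app each.
apps = {"web": 2, "db": 3, "mail": 1}          # cores each app uses
dedicated = {name: utilization(c, 8) for name, c in apps.items()}

# After: the same apps as VMs on a single 8-core host.
host_used = sum(apps.values())                  # 6 cores in total
consolidated = utilization(host_used, 8)

print(dedicated)      # each dedicated server sits at 12.5-37.5%
print(consolidated)   # one virtualized host runs at 75%
```

The same work that left three machines mostly idle keeps one host well utilized, which is the whole economic argument for consolidation.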
Virtual machines (VMs)

Virtual machines (VMs) are virtual environments that simulate a physical computer in software form. They normally comprise several files containing the VM’s configuration, the storage for the virtual hard drive, and some snapshots of the VM that preserve its state at a particular point in time. For a complete overview of VMs, see "What is a Virtual Machine?"

A hypervisor is the software layer that coordinates VMs. It serves as an interface between the VM and the underlying physical hardware, ensuring that each has access to the physical resources it needs to execute. It also ensures that the VMs don’t interfere with each other by impinging on each other’s memory space or compute cycles.

There are two types of hypervisors:

- Type 1 or “bare-metal” hypervisors interact with the underlying physical resources directly, replacing the traditional operating system altogether. They most commonly appear in virtual server scenarios.
- Type 2 hypervisors run as an application on an existing OS. Most commonly used on endpoint devices to run alternative operating systems, they carry a performance overhead because they must use the host OS to access and coordinate the underlying hardware resources.

“Hypervisors: A Complete Guide” provides a comprehensive overview of everything about hypervisors.

Types of virtualization

To this point we’ve discussed server virtualization, but many other IT infrastructure elements can be virtualized to deliver significant advantages to IT managers (in particular) and the enterprise as a whole. In this section, we'll cover the following types of virtualization:

- Desktop virtualization
- Network virtualization
- Storage virtualization
- Data virtualization
- Application virtualization
- Data center virtualization
- CPU virtualization
- GPU virtualization
- Linux virtualization
- Cloud virtualization

Desktop virtualization lets you run multiple desktop operating systems, each in its own VM on the same computer.
There are two types of desktop virtualization:

- Virtual desktop infrastructure (VDI) runs multiple desktops in VMs on a central server and streams them to users who log in on thin client devices. In this way, VDI lets an organization provide its users access to a variety of OSs from any device, without installing OSs on any device. See "What is Virtual Desktop Infrastructure (VDI)?" for a more in-depth explanation.
- Local desktop virtualization runs a hypervisor on a local computer, enabling the user to run one or more additional OSs on that computer and switch from one OS to another as needed without changing anything about the primary OS.

For more information on virtual desktops, see “Desktop-as-a-Service (DaaS).”

Network virtualization uses software to create a “view” of the network that an administrator can use to manage the network from a single console. It abstracts hardware elements and functions (e.g., connections, switches, routers) into software running on a hypervisor. The network administrator can modify and control these elements without touching the underlying physical components, which dramatically simplifies network management.

Types of network virtualization include software-defined networking (SDN), which virtualizes the hardware that controls network traffic routing (called the “control plane”), and network function virtualization (NFV), which virtualizes one or more hardware appliances that provide a specific network function (e.g., a firewall, load balancer, or traffic analyzer), making those appliances easier to configure, provision, and manage.

Storage virtualization enables all the storage devices on the network—whether they’re installed on individual servers or standalone storage units—to be accessed and managed as a single storage device. Specifically, storage virtualization aggregates all blocks of storage into a single shared pool from which they can be assigned to any VM on the network as needed.
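The storage pooling just described can be sketched with a toy allocator. The class and method names below are invented for illustration; real storage virtualization works at the block level, but the bookkeeping idea is the same: capacity from separate devices joins one pool, and slices are handed to VMs without regard to which physical device backs them.

```python
# Toy sketch of storage virtualization: capacity from separate
# physical devices is folded into one shared pool, then carved
# out for VMs on demand. Names and numbers are illustrative only.

class StoragePool:
    def __init__(self):
        self.capacity_gb = 0
        self.allocations = {}          # VM name -> GB assigned

    def add_device(self, size_gb):
        """Fold a physical device's capacity into the shared pool."""
        self.capacity_gb += size_gb

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def assign(self, vm, size_gb):
        """Hand a slice of the pool to a VM, if capacity remains."""
        if size_gb > self.free_gb():
            raise ValueError("pool exhausted")
        self.allocations[vm] = self.allocations.get(vm, 0) + size_gb

pool = StoragePool()
pool.add_device(500)       # disk installed in an individual server
pool.add_device(1000)      # standalone storage unit
pool.assign("vm-web", 200)
pool.assign("vm-db", 800)
print(pool.free_gb())      # capacity left for the next VM
```

Note that "vm-db" receives more storage than either physical device could spare alone only because the pool hides device boundaries.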
Storage virtualization makes it easier to provision storage for VMs and makes maximum use of all available storage on the network. For a closer look at storage virtualization, check out "What is Cloud Storage?"

Modern enterprises store data from multiple applications, using multiple file formats, in multiple locations, ranging from the cloud to on-premises hardware and software systems. Data virtualization lets any application access all of that data—irrespective of source, format, or location.

Data virtualization tools create a software layer between the applications accessing the data and the systems storing it. The layer translates an application’s data request or query as needed and returns results that can span multiple systems. Data virtualization can help break down data silos when other types of integration aren’t feasible, desirable, or affordable.

Application virtualization runs application software without installing it directly on the user’s OS. This differs from complete desktop virtualization (mentioned above) because only the application runs in a virtual environment—the OS on the end user’s device runs as usual. There are three types of application virtualization:

- Local application virtualization: The entire application runs on the endpoint device but runs in a runtime environment instead of on the native hardware.
- Application streaming: The application lives on a server which sends small components of the software to run on the end user's device when needed.
- Server-based application virtualization: The application runs entirely on a server that sends only its user interface to the client device.

Data center virtualization

Data center virtualization abstracts most of a data center’s hardware into software, effectively enabling an administrator to divide a single physical data center into multiple virtual data centers for different clients.
Each client can access its own infrastructure as a service (IaaS), which would run on the same underlying physical hardware. Virtual data centers offer an easy on-ramp into cloud-based computing, letting a company quickly set up a complete data center environment without purchasing infrastructure hardware.

CPU (central processing unit) virtualization is the fundamental technology that makes hypervisors, virtual machines, and operating systems possible. It allows a single CPU to be divided into multiple virtual CPUs for use by multiple VMs. At first, CPU virtualization was entirely software-defined, but many of today’s processors include extended instruction sets that support CPU virtualization, which improves VM performance.

A GPU (graphics processing unit) is a special multi-core processor that improves overall computing performance by taking over heavy-duty graphic or mathematical processing. GPU virtualization lets multiple VMs use all or some of a single GPU’s processing power for faster video, artificial intelligence (AI), and other graphic- or math-intensive applications.

- Pass-through GPUs make the entire GPU available to a single guest OS.
- Shared vGPUs divide physical GPU cores among several virtual GPUs (vGPUs) for use by server-based VMs.

Linux includes its own hypervisor, called the kernel-based virtual machine (KVM), which supports Intel’s and AMD’s virtualization processor extensions so you can create x86-based VMs from within a Linux host OS.

As an open source OS, Linux is highly customizable. You can create VMs running versions of Linux tailored for specific workloads or security-hardened versions for more sensitive applications.

As noted above, the cloud computing model depends on virtualization.
By virtualizing servers, storage, and other physical data center resources, cloud computing providers can offer a range of services to customers, including the following:

- Infrastructure as a service (IaaS): Virtualized server, storage, and network resources you can configure based on your requirements.
- Platform as a service (PaaS): Virtualized development tools, databases, and other cloud-based services you can use to build your own cloud-based applications and solutions.
- Software as a service (SaaS): Software applications you use on the cloud. SaaS is the cloud-based service most abstracted from the hardware.

If you’d like to learn more about these cloud service models, see our guide: “IaaS vs. PaaS vs. SaaS.”

Virtualization vs. containerization

Server virtualization reproduces an entire computer in software, which then runs an entire OS, and the OS typically runs one application. That’s more efficient than no virtualization at all, but it still duplicates unnecessary code and services for each application you want to run.

Containers take an alternative approach. They share an underlying OS kernel, running only the application and the things it depends on, like software libraries and environment variables. This makes containers smaller and faster to deploy. Check out the blog post "Containers vs. VMs: What's the difference?" for a closer comparison. In the following video, Sai Vennam breaks down the basics of containerization and how it compares to virtualization via VMs (8:09).

VMware creates virtualization software. VMware began by offering server virtualization only—its ESX (now ESXi) hypervisor was one of the earliest commercially successful virtualization products. Today VMware also offers solutions for network, storage, and desktop virtualization. For a deep dive on everything involving VMware, see “VMware: A Complete Guide.”

Virtualization offers some security benefits.
For example, VMs infected with malware can be rolled back to a point in time (called a snapshot) when the VM was uninfected and stable; they can also be more easily deleted and recreated. You can’t always disinfect a non-virtualized OS, because malware is often deeply integrated into the core components of the OS, persisting beyond system rollbacks.

Virtualization also presents some security challenges. If an attacker compromises a hypervisor, they potentially own all the VMs and guest operating systems. Because hypervisors can also allow VMs to communicate between themselves without touching the physical network, it can be difficult to see their traffic, and therefore to detect suspicious activity. A Type 2 hypervisor on a host OS is also susceptible to host OS compromise.

The market offers a range of virtualization security products that can scan and patch VMs for malware, encrypt entire VM virtual disks, and control and audit VM access.

Virtualization and IBM

IBM Cloud offers a full complement of cloud-based virtualization solutions, spanning public cloud services through to private and hybrid cloud offerings. You can use it to create and run virtual infrastructure and also take advantage of services ranging from cloud-based AI to VMware workload migration with IBM Cloud for VMware Solutions. Sign up today for an IBM Cloud account.
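The snapshot-and-rollback recovery described above can be illustrated with a toy sketch. No real hypervisor API is used here; the VM "state" is just a dictionary, but the workflow mirrors what a hypervisor does: preserve a known-good state, then restore it after a compromise instead of disinfecting in place.

```python
# Toy sketch of VM snapshot and rollback (no real hypervisor API).
# Capture a known-good state, then restore it after a compromise.

import copy

class ToyVM:
    def __init__(self):
        self.state = {"os": "clean", "files": ["app.conf"]}
        self._snapshots = {}

    def snapshot(self, name):
        """Preserve the VM's state at this point in time."""
        self._snapshots[name] = copy.deepcopy(self.state)

    def rollback(self, name):
        """Discard the current state in favor of a saved one."""
        self.state = copy.deepcopy(self._snapshots[name])

vm = ToyVM()
vm.snapshot("pre-deploy")           # known-good point in time
vm.state["os"] = "infected"         # malware lands on the VM
vm.state["files"].append("malware.bin")
vm.rollback("pre-deploy")           # recover without disinfecting
print(vm.state)                     # back to the clean state
```

The deep copies matter: a snapshot that shared mutable structures with the live state would be silently corrupted along with it, which is why real snapshots are immutable copies of disk and memory state.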
Cybersecurity issues are becoming a day-to-day struggle for businesses. Trends show a massive increase in hacked and breached data from sources that are increasingly common in the workplace, like mobile and IoT devices. Breaches themselves are becoming more common and severe, as hackers learn how to circumvent the latest technologies. Recent research suggests that most companies have unprotected data and poor cybersecurity practices in place, making them vulnerable to data loss. To keep your organization and customers secure, you will need to know the latest movements in cybersecurity.

Cybercrime by the Numbers

The past few years have seen large companies such as Uber, Equifax, and Under Armour fall prey to data breaches. In 2017, companies experienced an average of 130 security breaches each, many of them well-publicized and large in scale. This number is expected to grow by 27% in 2018. The Yahoo! hack, one of the biggest data breaches of all time, left three billion user accounts on the table for cybercriminals.

This increase in cyber hacking comes with a very high price for companies who have not invested sufficiently in their cybersecurity architecture. According to Accenture, a malware attack costs companies $2.4 million on average, and 50 days in time to rectify the incident. Computer hacking has also become an international operation: in 2017, one-fifth of all cyberattacks originated in China, while 11% were based in the United States and 6% originated from Russia.

Ransomware on the Rise

Ransomware, a cyber attack that effectively holds a computer and its data hostage until a ransom amount is paid, is one of the more crippling attacks on organizations, having nearly quadrupled in recent years. A ransomware attack occurs every 14 seconds. On average, it takes 23 days to resolve a ransomware attack, and during this time, computer data is inaccessible.
This makes it essential for companies to maintain an information technology infrastructure and invest in cybersecurity. The United States ranks as the most targeted country in the world, receiving 18.2% of all attacks.

Beware of Phishing

Most cyber attacks today originate from phishing emails, which are fraudulent messages that impersonate reputable companies in an attempt to obtain information from users. Phishing emails can also carry attachments containing malware. In recent years, over one-third of malicious files sent have been Microsoft Office files such as Word, PowerPoint, and Excel documents. Last year, spear-phishing emails were the most widely used infection method, employed by 71% of groups to stage cyber attacks.

Increased Business Risks

Cyber hacking costs are expected to total $6 trillion annually by the year 2021. Therefore, it’s crucial to both invest in reliable internet security software and conduct comprehensive employee training on cybersecurity best practices. With 71% of cyber attacks resulting from phishing emails, it is important to make sure employees can spot a phishing email to avoid this common cyber attack.

In addition, over 65% of larger companies do not prompt their employees to change their passwords on a regular basis. Not only does this expose companies to potential cybercrime, but it can result in unauthorized data access by employees who no longer work with the company. Over 40% of companies also have sensitive files such as credit card numbers and health records that are not protected. Companies should identify and boost security for higher-value assets, such as sensitive corporate data essential for operations and subject to regulatory penalties. This will make it difficult for hackers to access this information and reduces the damage they can cause if they do access the data.

From the increasing costs of cyber attacks to the surge in the variety of attacks, there is no question that the cybersecurity situation is dire.
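The advice above to identify unprotected files containing card numbers can be made concrete. The sketch below (my illustration, not a tool the article mentions) scans text for 16-digit sequences and keeps only those passing the Luhn checksum that real payment card numbers satisfy, which filters out most random digit runs:

```python
# Illustrative scanner for the "unprotected sensitive files" risk:
# find 16-digit runs in text and keep those passing the Luhn check.
# A real data-discovery tool would do far more (formats, context).

import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    candidates = re.findall(r"\b\d{16}\b", text)
    return [c for c in candidates if luhn_ok(c)]

sample = "order 4111111111111111 ref 1234567890123456"
print(find_card_numbers(sample))   # only the Luhn-valid number
```

Running such a scan over file shares is one low-cost way to inventory the high-value assets the article says should get extra protection.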
The world of cybercrime is vaster than ever in 2018, but you can minimize your organization’s risk by investing in information technology and developing a plan to protect your most sensitive data. To give a picture of the state of cybersecurity, Varonis put together key security statistics from leaders in the industry to show the gravity of the situation for leaving your company unsecured.
What about this course?

This course will prepare the CCNP candidate to master the Border Gateway Protocol (BGP). It starts with the basics of BGP, including iBGP vs. eBGP, characteristics of path vector routing protocols, the BGP tables, BGP message types, the implementation of eBGP and iBGP, the BGP Best Path Selection Algorithm, transit and non-transit Autonomous Systems (ASes), iBGP route reflection, Multiprotocol BGP (MP-BGP), IPv6 over an IPv4 transport, BGP configuration validation, securing BGP neighbor relationships, BGP neighbor states and troubleshooting peering issues, eBGP multihop, BGP attributes, BGP communities, BGP route filtering, BGP peer groups, and BGP for IPv6.

Instructor for this course

CCNA (RS), CCNP (RS & WRLS), CNACI, CWNE #131, JNCIA (WRLS & SC)

This course is composed of the following modules:

- BGP Protocol Overview
- BGP Autonomous System Numbers
- Transmission Control Protocol (TCP)
- Finite State Machine (FSM)
- Soft Reconfiguration vs. Route Refresh
- eBGP Peerings (IPv4) :: Part 1
- eBGP Peerings (IPv4) :: Part 2
- eBGP Peerings (IPv6) :: Part 1
- eBGP Peerings (IPv6) :: Part 2
- iBGP Peerings (IPv4) :: Part 1
- iBGP Peerings (IPv4) :: Part 2
- iBGP Next Hop Processing
- iBGP Peerings (IPv6)
- Troubleshooting eBGP/iBGP Peerings
- Prefix Lists & Route Maps
- Path Selection & Path Attributes
- AS PATH Packet Capture
- MED :: Part 1
- BGP Bestpath MED Missing-As-Worst
- BGP Peer Groups
- BGP Peer Templates
- AS PATH Access Lists
- Communities :: Part 1
- Communities :: Part 2
- Communities Use Case :: Part 3
- v6 over v4
- v4 over v6

Common Course Questions

If you have a question you don’t see on this list, please visit our Frequently Asked Questions page by clicking the button below. If you’d prefer getting in touch with one of our experts, we encourage you to call one of the numbers above or fill out our contact form.

Do you offer training for all student levels?

Are the training videos downloadable?
I only want to purchase access to one training course, not all of them. Is this possible?

Are there any fees or penalties if I want to cancel my subscription?
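As a taste of the course material, the BGP Best Path Selection Algorithm it covers can be reduced to a simplified sketch. Only the first few decision steps are modeled here (weight, LOCAL_PREF, AS_PATH length, MED); real routers evaluate many more criteria, so treat this strictly as a learning aid:

```python
# Simplified sketch of a few steps of BGP best path selection:
# prefer highest weight, then highest LOCAL_PREF, then shortest
# AS_PATH, then lowest MED. Real routers evaluate more steps.

from dataclasses import dataclass, field

@dataclass
class BGPPath:
    neighbor: str
    weight: int = 0          # Cisco-proprietary; higher wins
    local_pref: int = 100    # higher wins
    as_path: list = field(default_factory=list)  # shorter wins
    med: int = 0             # lower wins

def best_path(paths):
    # Negate "higher wins" attributes so a single min() works.
    return min(
        paths,
        key=lambda p: (-p.weight, -p.local_pref, len(p.as_path), p.med),
    )

paths = [
    BGPPath("10.0.0.1", local_pref=100, as_path=[65001, 65002]),
    BGPPath("10.0.0.2", local_pref=200, as_path=[65001, 65003, 65004]),
]
print(best_path(paths).neighbor)   # higher LOCAL_PREF wins: 10.0.0.2
```

Note how the longer AS_PATH never gets considered: each step of the algorithm is only a tiebreaker for the steps before it, which the tuple ordering in `min()` captures directly.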
Introduction to IBM SPSS Text Analytics for IBM SPSS Modeler

Getting usable analytics results from unstructured, text-based data requires a very different set of strategies than traditional numeric data does. Despite this, there are many insights you can gain from text-based sources through the right application of predictive technology. Ironside’s 2-day Introduction to IBM SPSS Text Analytics for IBM SPSS Modeler introduces you to the text analytics module available in Modeler. You’ll learn the steps involved in working with text-based data, from the initial read all the way to final category creation. By the course’s conclusion, you’ll have all the knowledge you need to generate powerful predictive models from text that will lead to actionable intelligence for your organization.

This course is intended for anyone who needs to report on or generate predictive models from text data. Students should have knowledge equivalent to having taken the IBM SPSS Modeler Fundamentals course; practical coding experience is beneficial but not required.

- Introduce the concept of text mining and how it differs from standard data mining.
- Understand how SPSS Modeler reads text and the options you have for manipulating text data within it.
- Recognize the different text mining model approaches available to you and become comfortable using them.
- Learn the processes for text mining, the steps in a text mining project, and its relationship to the standard data mining/CRISP-DM process.
- Recognize and work with the text mining nodes available in SPSS Modeler and read text from documents and web feeds into Modeler.
- Describe the concepts behind linguistic analysis and develop a text mining concept model.
- Use the Interactive Workbench to extract types and concepts and update the modeling node.
- Edit the linguistic resources available to you, including preparation, synonym definitions, exclusion definitions, and text re-extraction.
- Fine tune your resources with advanced functionality like fuzzy grouping exceptions, adding non-linguistic entities, extracting non-linguistic entities, and forcing words to take particular parts of speech. - Perform text link analysis using the appropriate node and the visualization pane. - Understand clustering concepts and create clusters and categories from clusters. - Become familiar with the different categorization techniques available. - Create and assess categories both manually and automatically, use conditional rules to create categories, and create text analysis packages. - Manage your linguistic resources using the Template Editor to build and manage libraries, templates, and backup resources. - Use text mining models containing quantitative and qualitative data and score new data.
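The core ideas the course teaches, extracting concepts from free text and grouping synonyms using linguistic resources, can be illustrated with a toy sketch. This is nothing like SPSS's actual linguistic engine; the synonym and stopword lists are invented, but the workflow (extract, normalize, count) is the same shape:

```python
# Toy illustration of concept extraction: pull candidate concepts
# from survey text, fold synonyms together, and count frequencies.
# The synonym/stopword "resources" here are invented examples.

from collections import Counter

SYNONYMS = {"cracked": "broken", "shattered": "broken"}
STOPWORDS = {"the", "was", "my", "a", "is"}

def extract_concepts(responses):
    counts = Counter()
    for text in responses:
        for word in text.lower().replace(".", "").split():
            if word in STOPWORDS:
                continue
            counts[SYNONYMS.get(word, word)] += 1
    return counts

survey = ["The screen was cracked.", "My screen is shattered."]
print(extract_concepts(survey))   # 'screen' and 'broken' dominate
```

Even this crude version shows why synonym definitions matter: without them, "cracked" and "shattered" would be counted as unrelated concepts and the "broken screen" pattern would be invisible.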
Strictly speaking, HTTPS is not a protocol in and of itself; rather, it is HTTP encapsulated in TLS/SSL. TLS, or SSL, as it is commonly referred to, provides websites and web applications with encryption of data being transmitted and authentication to verify the identity of a host.

HTTPS is usually synonymous with shopping carts and Internet banking, but in reality, it should be used whenever a user is passing sensitive information to the web server and vice-versa. TLS/SSL may significantly consume server resources depending on the site’s traffic. Consequently, for most sites it is not required to serve the entire site over HTTPS. WordPress’s login form and admin area, on the other hand, are probably the most sensitive areas of a WordPress site. It is therefore strongly advised that TLS/SSL is not only implemented, but enforced in such areas.

WordPress provides an easy way to enforce TLS/SSL on both wp-login and wp-admin pages. This is achieved by defining two constants in your site’s wp-config.php file.

Note: You must already have TLS/SSL configured and working on the server before your site will work properly with these constants set to true.

To ensure that login credentials are encrypted during transit to the web server, define the FORCE_SSL_LOGIN constant in wp-config.php. To ensure that sensitive data in transit (such as session cookies) is encrypted when using the WordPress administration panel, define the FORCE_SSL_ADMIN constant in wp-config.php.

Part 8 in the Series on WordPress Security will discuss: Restricting Direct Access to Plugin and Theme PHP Files.

Read the entire article on How to Prevent a WordPress Hack.
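For reference, the two constants the article refers to are WordPress's documented FORCE_SSL_LOGIN and FORCE_SSL_ADMIN settings. A typical wp-config.php fragment looks like the following; note that in WordPress 4.0 and later, FORCE_SSL_LOGIN is deprecated and FORCE_SSL_ADMIN alone also secures the login page:

```php
// In wp-config.php, above the "That's all, stop editing!" line.
// TLS/SSL must already be working on the server for these to work.

// Encrypt login credentials in transit (deprecated since WP 4.0):
define('FORCE_SSL_LOGIN', true);

// Encrypt admin sessions; in WP 4.0+ this also covers the login page:
define('FORCE_SSL_ADMIN', true);
```

On modern WordPress installs, setting FORCE_SSL_ADMIN alone is sufficient.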
Application delivery is the process of bringing an application (typically a web application) to end users while the application’s data processing and computing is executed inside a datacenter or cloud environment. It may also be referred to as application delivery as a service. This enables a great user experience for remote workers by providing access to application functionality without requiring it to be installed on each user’s local desktop or server. Application delivery helps ensure both remote and in-office workers have secure anywhere access to the web and cloud apps they need to be productive.

An essential element of application delivery is the application delivery controller (ADC). ADCs are purpose-built network appliances that improve the security, performance, and reliability of applications delivered via the web. As demand increases for older load balancing appliances to handle more complex application delivery requirements, modern ADCs help legacy applications adapt to the networks and protocols in place today. This ensures all apps perform optimally and securely, and are always available.

An application delivery platform is a suite of tools that make it easier to deliver applications reliably and securely to end users. The tools in an application delivery platform handle application services such as traffic management in datacenters and cloud environments, load balancing, and security controls. A strong application delivery platform is important to ensuring remote and in-office workers have anywhere access to the web and cloud apps they need to be productive, all while ensuring high performance, availability, and security.

An application delivery controller employs algorithms and policies to determine load balancing, or how inbound application traffic should be distributed.
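As a concrete illustration of such a distribution policy, here is a minimal sketch of a round-robin dispatcher with a simple health filter. It is hypothetical, not any vendor's ADC, but it captures the basic mechanic: each request goes to the next server in rotation, and servers marked down are skipped:

```python
# Minimal sketch of ADC-style load balancing: forward each request
# to the next server in rotation, skipping unhealthy servers.
# Hypothetical code, not modeled on any vendor's product.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)
        self._ring = cycle(self.servers)

    def mark_down(self, server):
        """Health monitoring has flagged this server as failed."""
        self.healthy.discard(server)

    def pick(self):
        """Return the next healthy server in round-robin order."""
        for _ in range(len(self.servers)):
            server = next(self._ring)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers")

lb = RoundRobinBalancer(["app1", "app2", "app3"])
print([lb.pick() for _ in range(3)])   # app1, app2, app3 in turn
lb.mark_down("app2")
print([lb.pick() for _ in range(2)])   # app1 then app3 (app2 skipped)
```

Without the health filter this is plain round robin, which, as described next, treats all servers as identical; the filter is the first step toward the health-aware balancing real ADCs perform.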
The most basic form of traffic management is round robin, which forwards client requests to each server in the order the requests are submitted. This method assumes all application servers are identical, without considering server health or responsiveness. More sophisticated traffic management can implement additional policies that direct the ADC to check certain criteria before sending an inbound request to an application server. Modern application delivery controllers can inspect packet headers for keywords or requested file types and direct the request to the appropriate server based on this information.

Load balancing is the efficient distribution of application traffic across multiple servers. Load balancers work by monitoring the health of backend resources and only sending traffic to servers that are capable of fulfilling the application request. This traffic management is important to application delivery because it works inside an ADC to broadly improve the performance of web and microservices-based apps—no matter where they’re hosted. Load balancing helps distribute traffic across a cluster of servers to optimize utilization, improve responsiveness, and increase availability.

Application delivery management refers to technology that provides holistic visibility into how applications are delivered across multi-cloud environments. Because many enterprises rely on multiple cloud and on-premises environments, gaining useful insight into how applications are deployed and functioning can be challenging. A strong application delivery management solution makes it simpler for IT to monitor delivery everywhere, streamlining operations, speeding up the troubleshooting process, and improving security.

An application delivery network (ADN) refers to services deployed via a network that help ensure high availability, security, and visibility for web applications.
ADNs work by speeding up datacenter load times and increasing IT visibility across the application delivery process. An ADN is closely related in design and purpose to a content delivery network (CDN); the primary difference is that CDNs speed the delivery of static content (like high-res images) and ADNs accelerate dynamic content (like web apps).

Cost savings and efficiency: One benefit of modern application delivery is the improved efficiency of enabling anywhere access to business apps via the web instead of relying on locally installed applications for each user’s device. This is because delivering web apps to users as needed is more cost-effective than buying licenses for each user’s device. In addition, as more users access web apps on mobile devices and in the cloud, modern application delivery empowers these mobile workers to be productive anywhere rather than requiring them to work inside an office’s local area network (LAN). When an organization’s application delivery offers a great web app experience everywhere, that organization spends less on customer support, hardware, and upkeep.

Improved productivity: For employees to do their best work, they need high performance and availability from their web and cloud applications. Modern application delivery makes this possible by ensuring a seamless application experience. So even when an application server fails, a good application delivery solution will automatically fail over the in-use application to a healthy server without the end user noticing the difference.

Better mobile performance: Modern application delivery also offers application performance benefits across mobile networks. Web pages built for high-speed internet links can fail to deliver the same user experience on mobile devices connecting over a bandwidth-constrained network.
Modern application delivery controllers can address this by optimizing web content delivery over mobile networks using domain sharding. This involves connection-layer optimization being applied to one domain, and then breaking down content on each page into a sequence of subdomains that permit a larger number of channels to be opened simultaneously. This improves page load time and performance. It is also possible to optimize delivery of web pages that contain large images by compressing these files. This reduces download times to improve the end user experience.

Stronger user security: As increased mobility and more diverse cloud environments enlarge an organization’s attack surface, modern application delivery can play an important role in security. Because application delivery controllers authenticate each user attempting to access an application, they are a common entry point or gateway to an organization’s network. If an application is SaaS-based, the ADC can validate a user’s identity using an on-premises active directory data store. This eliminates the need to store credentials in the cloud, improving security and enhancing the user experience by providing single sign-on (SSO) capabilities across multiple applications. Other common security features in an application delivery solution include a web application firewall, converged load balancing, and advanced Layer 7 protection.

When choosing an application delivery solution, organizations should focus on:

A single code base: Many organizations have a multi-cloud strategy in place and rely on cloud services like Microsoft Azure and Amazon Web Services (AWS) for their applications. This can lead to application components being distributed across different cloud environments, leading to fragmentation and other application management challenges. With this in mind, it’s important to adopt an application delivery solution with a single code base across all ADCs.
With a single code base across your ADC portfolio, you can ensure operational consistency for monolithic and microservices-based applications across multi-cloud. This gives you greater agility and speed in your application strategy.

Global server load balancing: Load balancing is a critical service in any high-traffic datacenter, but your application delivery controller should also be able to redirect traffic to a cluster of servers located in a different datacenter, a capability known as global server load balancing. The servers in the other datacenter can be front-ended by another ADC, which works in tandem with the first appliance. Each application delivery controller can detect which datacenter is closest to a given user, and then route the client request to a server in that datacenter. This minimizes latency and round-trip times for the user's request, ensuring a better application experience.

Sophisticated security: With so many reported vulnerabilities originating in an organization's applications rather than its networks, it's important to choose an application delivery solution that can protect applications and APIs from known and zero-day attacks. Look for ADCs that include an integrated web application firewall, bot management, and volumetric distributed denial-of-service (DDoS) protection. It's also important that your application delivery solution provides good price-per-performance for SSL: the ADC terminates SSL/TLS once at the edge and then service-chains the decrypted traffic to downstream services, rather than having every service decrypt and re-encrypt the traffic individually.

Always-on application availability: Many employees are no longer forced to use company-owned equipment inside an office to get their work done. They often use personal devices to work whenever and wherever they choose. To support employees working at any time, IT departments must ensure workplace servers and applications are always available.
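The global server load balancing decision described above (route each client to the nearest healthy datacenter) can be sketched as follows. The region names, latency figures, and health flags are invented for illustration; a real ADC would rely on geo-IP data and active health probes rather than static tables.

```python
# Health state as reported by (hypothetical) health probes.
DATACENTERS = {
    "us-east": {"healthy": True},
    "eu-west": {"healthy": True},
    "ap-south": {"healthy": False},  # failed its last probe
}

# Illustrative round-trip latency (ms) from each client region.
LATENCY_MS = {
    "germany": {"us-east": 90, "eu-west": 15, "ap-south": 140},
    "india": {"us-east": 200, "eu-west": 120, "ap-south": 30},
}

def route(client_region: str) -> str:
    """Pick the lowest-latency datacenter that is currently healthy."""
    candidates = {
        dc: ms
        for dc, ms in LATENCY_MS[client_region].items()
        if DATACENTERS[dc]["healthy"]
    }
    if not candidates:
        raise RuntimeError("no healthy datacenter available")
    return min(candidates, key=candidates.get)
```

Note that the client in India is not sent to its nearest datacenter (ap-south), because that site failed its health check; it falls back to the next-best healthy choice instead.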
However, servers can fail for many reasons, including mechanical problems, over-utilization, and security breaches. If a server goes down, the applications running on it become unusable or inaccessible. This makes it important for your ADCs to ensure high application availability by balancing application workloads across a cluster of active servers in multiple sites. This enables seamless failover of applications, translating to an uninterrupted user experience even if an application server goes down.

Citrix ADC offers the industry's most comprehensive application delivery solution for monolithic and microservices-based applications, making it easy for organizations to deliver a better user experience. It can be deployed on premises or consumed through application delivery as a service.
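The failover behavior described above, where requests keep flowing to healthy servers when one member of a cluster goes down, can be sketched with a toy round-robin balancer. The server names and the simple health-flag scheme are assumptions for the example; production ADCs use active health probes and support many more balancing methods.

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: round-robin over servers, skipping unhealthy ones."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = {s: True for s in self.servers}
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        """Record a failed health check; the server leaves the rotation."""
        self.healthy[server] = False

    def mark_up(self, server):
        """Server passed a health check again; return it to rotation."""
        self.healthy[server] = True

    def pick(self):
        """Return the next healthy server, transparently skipping failures."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if self.healthy[server]:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
lb.mark_down("app-2")          # simulate a server failure
picks = [lb.pick() for _ in range(4)]  # requests keep flowing to app-1/app-3
```

From the user's perspective nothing changes when app-2 fails: the balancer silently routes every request to a remaining healthy server.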
“Many parents are rightfully concerned about their kid’s participation in social networks. There are a number of areas to be concerned with. Who are the kids talking to? Parents might worry about the friends their kids are making online and what kind of people, even their kid’s own age, they are associating with. Some parents will be concerned about how much time their kids are spending online,” says Randy Abrams, Director of Technical Education at ESET.

ESET’s seven golden rules for parents and children for online security:

1. Updated antivirus and security software is a necessity.

2. Be vigilant and monitor your child’s internet connection: set a password and allow children to surf the web only during times when you can periodically check on their online activities. Set clear rules about the use of computers.

3. Instruct kids on internet privacy. They should never supply personal data and details to strangers on the web and social networks.

4. Control the webcam, as it can easily be misused by criminals and strangers. Turn off or unplug your webcam when you are not using it; there is malware that can access a webcam without the owner knowing about it. Check that the webcam is off when it should be, and have children use the camera only for approved communication with known friends and family.

5. Monitor browser history. A deleted history might be a reason to sit down and have a talk.

6. On Facebook, if you or your child shares your wall with “Everyone” or “Friends of friends,” you have lost control of who has access to that data. Careless use of Facebook apps can likewise end up sharing all of your private data with the world.

7. Information posted on the internet does not go away. Do not assume that deleting a photo, or even a whole social network account, automatically deletes the data forever; pictures and information may already be saved on someone else’s computer.
Children and parents should think twice about which pictures and details to put on the Internet.
The responsibility doesn't just fall to doctors and nurses, but also to IT decision-makers, who play a pivotal role in identifying technologies that will enable healthcare organizations to overcome these challenges. In some cases, this will mean adopting new technologies; in others, it will hinge on the capacity to rethink how existing technologies are deployed, such as using color printing to enhance the patient experience.

Making healthcare more efficient is a challenge several sectors are scrambling to solve. In 2009, the Health Information Technology for Economic and Clinical Health (HITECH) Act was passed into law, giving the Department of Health and Human Services (HHS) the authority to establish programs to improve quality, safety, and efficiency in healthcare through the promotion of health IT. These initiatives speak to an industry-wide interest in reducing the amount of unnecessary paper circulating around hospitals and the offices of doctors and other care providers.

Printing represents a significant expense for healthcare organizations, but one that tends to go unnoticed or overlooked, leading to inefficiencies and needless costs. Yet despite efforts to promote electronic health records (EHRs), the Healthcare Information and Management Systems Society (HIMSS) estimates that less than 5% of hospitals have reached an environment in which paper charts are no longer used. Printing is still a core part of healthcare administration and will continue to be for the foreseeable future. While transitioning to EHRs and other digital technologies is important, printing remains a valuable tool that supports productivity and improves patient care. Alongside digital innovations, organizations need to streamline their management of paper documents. Let's look at four ways new color printing capabilities can help with these efforts.

All patients go through the admission and discharge process when accessing medical care.
Workflow in hospitals isn't just a matter of efficiency; it's a matter of life and death. Patient charts must accurately and clearly reflect patient needs in order to prevent care from being delayed and workers from being confused. Healthcare providers can use color printing to enhance the patient experience by color-coding their workflow, helping patients move smoothly from admission to discharge. Color makes it easier for clinicians and administrators to quickly grasp key information and ensure all relevant documents are included. It also makes patient reports clearer and easier to read.

Labels help clinicians, administrators, and technicians stay organized. Adding color to labels keeps patients' key information consistent throughout their time under care. For example, if a patient has an allergy, a worker can place a yellow label on any associated documents or equipment to simplify communication and avoid mistakes. Doing so also saves staff time (and, thus, hospital money) otherwise spent searching labels for information.

Physicians deliver the best care when they directly interact with patients. A Northwestern University study cataloged six practices that led to dehumanization in hospitals. It found doctors can improve the doctor-patient relationship by addressing patients directly rather than going straight to patient charts.1 However, patients can grow weary of telling numerous clinicians the same information over and over. Providers can personalize their interactions by color-coding patient wristbands, making a patient's name stand out on his or her wrist and instantly and clearly conveying the patient's special alert risks, such as a tendency to fall.

Color-coding medication labels is another way to enhance the patient experience.
According to the Food and Drug Administration, it's fairly common for errors to creep into medication or IV labeling, and the results can range from costly to disastrous. Color-coded medication labels can improve organization by highlighting the most important information, such as "Keep refrigerated."

Color-coding patient reports, wristbands, labels, and documentation makes content more intuitive, readable, clear, and understandable, and in doing so increases efficiency in healthcare. When deployed intelligently, color printing can help clinical staff stay organized and productive while also improving the quality of patient care and reducing costs.