When it’s time to inspect and maintain fire sprinkler systems, you want to make sure they are functional. If your fire sprinklers aren’t working, then they won’t be able to put out any fires that break out in your building. At the same time, though, you must also make sure that your systems aren’t too sensitive, which can be just as problematic.
Smoke Doesn’t Affect Them
It’s a common misconception that smoke is what activates the fire sprinkler systems in your building. The presence of smoke will set off any smoke alarms, but that doesn’t mean all of your sprinklers will turn on in response. Fire sprinkler systems are designed to extinguish real fires. The smoke detectors, on the other hand, will alert you to a fire emergency. However, smoke detectors can’t put out a fire, which is another commonly held myth.
What Activates Fire Sprinkler Systems
Now, you might be wondering what does activate a fire sprinkler system, if it isn’t smoke. The saying “where there’s smoke, there’s fire” is trite, but true. The built-in heat sensors in the sprinklers trigger at temperatures high enough to indicate the presence of a fire that is currently spreading; those temperatures fall in a range between 155 and 165 degrees Fahrenheit. Also, only one sprinkler head will activate at a time. At most, only two fire sprinklers will be needed, though this depends on the size and ferocity of the fire.
The heat sensors in your fire sprinkler systems can take one of two forms. One type uses a small glass bulb that is filled with a liquid that moves when heat touches it. The other type uses metal links that can fuse together. The liquid in the glass bulbs can be color-coded depending on the temperatures needed to activate them. You’ll often see red glass bulbs that will shatter when the liquid inside starts to expand due to heat. Because of this mechanical design, the only fire sprinklers that will turn on will be the ones closest to the source of the fire. This mechanism helps to contain fire damage and minimizes water damage from too many sprinklers turning on all at the same time.
Trust the Professionals at ARK Systems
Located in Columbia, Maryland, ARK Systems provides unsurpassed quality and excellence in the security industry, from system design all the way through to installation. We handle all aspects of security with local and remote locations. With over 30 years in the industry, ARK Systems is an experienced security contractor. Trust ARK to handle your most sensitive data storage, surveillance, and security solutions.
Scenario: System Area Corruption
There are different types of corruption that can occur in an SSD. In this article, the specific type we will be discussing is more commonly known as “system area corruption.”
We’re all familiar with operating systems (OS) and file systems. What you may not be aware of is that the controller chip, which manages the NAND chips where data, programs and other files are actually stored, has its own OS and file system, separate from the one the user interacts with when using the computer.
The file system used by the controller chip contains files specific to drive functions, such as the firmware, the translator table, the defect table and others. When a user first powers on a device containing an SSD, the controller’s OS must walk through each of these files in order for the computer to boot up.
In a system area corruption scenario, one or more of the files used by an SSD’s controller become corrupt, keeping the controller from completing its walk-through and disabling the boot process. The drive can no longer boot up; in this state it is commonly described as “bricked.”
System area corruption can potentially be caused by one of the following three issues, or any number of other unknown triggers:
Sudden Power Loss
There are several components involved in the travel of power from a wall outlet or battery to the SSD inside a computer. If any of these components fail, there can be a sudden power loss to the drive. SSDs that do not have a super capacitor or other technology to retain enough power to complete its write cycles are more likely to suffer in this scenario.
When written code encounters something it wasn’t designed to understand or deal with, corruption may occur in the system area.
NAND Flash Failure
The system area used by the controller during the boot-up process may be stored on and accessed from the NAND chips, the controller chip itself or distributed across both.
If there is a significant failure of the NAND flash, it is possible that the system area can be corrupted. This particular failure may also result in the loss of some of the actual user data.
What to do to Avoid Data Loss
Hopefully, the user has backed up the data that is on the device prior to the failure. Either way, there’s not much more that can go wrong. Other than through physical sabotage or securely erasing the SSD with the manufacturer’s toolbox software, it is difficult to make data any less recoverable once system area corruption has occurred. The damage is done, so to speak.
What Happens at DriveSavers?
At DriveSavers, we use tools that are proprietary and co-developed with SSD manufacturers to resolve corruption issues. There are no commercial tools available to address this specific problem.
How to Avoid System Area Corruption
It will help to regularly run the toolbox software that came with the SSD device, just to see what it might find. In fact, if a device appears to be bricked, you can try running the toolbox to see if it is able to work any magic. On some SSDs, toolbox software can run even when the drive is not visible; on others, it cannot.
Make sure to keep your SSD firmware up to date as it may prevent a bug from causing a system area corruption. This is because, as bugs are found and reported, manufacturers will develop patches and fixes and then distribute them through updates.
Prescription drugs are a fact of life for millions of people worldwide. Some medicine is administered by healthcare professionals in clinics and hospitals; in other cases, it’s left to patients themselves to manage their own medication at home, particularly when it comes to long-term, chronic conditions such as diabetes or high blood pressure.
Either way, it’s vital that patients stick to the prescribed regimes, taking their medicine at the right times of day and in the right doses, as directed by their physicians.
Sticking to doctor’s orders in this way is often referred to by healthcare professionals as ‘medication adherence’. A lack of it (or ‘non-adherence’), meanwhile, can pose serious risks to health, says Sean Handel, senior vice president at digital medicine specialist Proteus Digital Health.
“Medication non-adherence leads to uncontrolled health conditions, excess hospitalizations, emergency room visits and office visits,” Handel says. He reckons it’s costing the US healthcare system around $290 billion annually – and the problem is acute, he adds:
“More than 50 percent of prescribed medications are not taken as directed and providers lack accurate, timely adherence data necessary to diagnose non-adherence and its root cause and then allow the patient and provider to respond quickly to poor adherence.”
With that in mind, Proteus Digital Health offers Proteus Discover, billed by the company as ‘the world’s first digital medicine service’. This involves an ingestible, sensor-equipped pill which, on reaching the stomach, sends a signal to an accompanying patch, worn on the skin. Patients can use the Proteus Discover app to keep track of the medication they’ve taken and to set up reminders. Their healthcare practitioners, meanwhile, can monitor their adherence.
Proteus Discover is just one example of how healthcare IoT could help to ensure that patients get the medicine they need. Although sometimes hampered by compliance and risk concerns or a lack of funding, the IoT has quietly slipped into healthcare environments.
Improving drug delivery and adherence through these technologies is just one area of focus, much of it driven by technology start-ups. And at this early stage there seems to be more traction around medicine adherence than the automation of delivery – in other words, smart devices that automatically deliver a dose of a drug to a patient without any other human intervention.
For instance, last year saw E Ink – best known for making the screens on Amazon Kindle e-readers – partner with the healthcare division of smartphone manufacturer HTC and packaging specialist Palladio Group to design ‘smart’ labels for medicines. These provide patients with a reminder when it’s time to take their medicine, information on the quantity of medication left in the package, and an update on the time at which medication was last taken. They can also request a refill in the case of a repeat prescription, simply by pushing a button on the product label.
According to Dr Fy Gan of E Ink, the smart packaging label can be applied on a range of packaging, “from paperboard cartons to prefilled syringes to pill dispensers” in order “to improve medication-taking behaviors.”
AdhereTech, meanwhile, makes smart wireless pill bottles for tracking adherence, which use cellular technology and sensors to remind patients if they miss a dose through an automated phone call or text message.
Automating drug delivery
But more complex solutions are coming to market that seek to automate drug delivery itself. For instance, Chrono Therapeutics has developed a device worn on the arm or torso that syncs with a mobile app and monitors nicotine levels in the patient’s body. It can then auto-administer nicotine before cravings kick in, as part of smoking cessation programmes.
Meanwhile, Microchips Biotech has developed implants that can store and release precise doses of drugs over extended periods of time – for months and even years, the company claims. Each implant contains hundreds of ‘micro-reservoirs’; these are small, hermetically sealed compartments, each of which can store up to 1 microgram of a drug.
The implant is activated by a wireless signal that triggers the micro-reservoirs to release the drug on a pre-programmed dosing schedule. These implants can also be built with bioelectric sensors which release drugs in response to physiological or metabolic changes in the patient.
Other tech, meanwhile, promises to bring more precision to the drug delivery process; Injeq’s new IQ-Needle smart needle, for example, is used for lumbar punctures (otherwise known as spinal taps) and gives an alert as the needle tip reaches its target – the spinal fluid. This helps to avoid an overly deep puncture and unnecessary tissue damage.
And with other areas being explored – such as smart inhalers and smart insulin pens such as the award-winning KiCopen – this is clearly a rapidly evolving space.
Patient power and challenges
Last year, ‘citizen health hacker’ Tim Omer, who has diabetes, told Internet of Business how he re-engineered his continuous glucose monitor to deliver real-time blood glucose measurements on his Android smartwatch. He believes that IoT technologies are generally moving healthcare in a positive direction.
“As a patient, the more I understand, the more control I have to review and analyse and use systems to assist me, the better I can manage my condition,” he tells Internet of Business.
But there will be challenges ahead. In the UK, for example, last year’s Wachter Review of the National Health Service’s use of IT highlighted the difficulty of reform, even in relatively straightforward areas such as the implementation of electronic healthcare records.
And as Tim Omer points out, the sheer number of IoT device vulnerabilities could spell trouble for medical devices that, above all, need to be trusted.
“Security and the accuracy of the service needs to be fully tested,” says Dr Ramin Nilforooshan, consultant psychiatrist and lead clinician for the Surrey Borders Technology Integrated Health Management (TIHM) project, in which people with dementia and their carers are being provided with wearables and other devices to monitor their health and wellbeing in their own homes.
These devices can, for example, detect if a patient has left the house, had a fall, is not eating or drinking normally or has used the bathroom more than usual. If the solution identifies a problem, an alert is issued that is followed up by a clinician or carer.
“We have received many positive reports from trusted users. Carers and people with dementia feel very supported with the idea of a monitoring system in place for them,” says Dr Nilforooshan. But when it comes to IoT drug delivery, he says, security could be the biggest challenge.
Other medical professionals, meanwhile, variously cite limited data usage, data integrity, information governance, network performance and regulatory requirements as potential barriers. But most still agree that the IoT has a lot to offer the health sector, improving patient care and outcomes.
Preparing for the Impact of Quantum Computing
(ComputerWeekly) IBM recently held an event to explore the skills gap that exists between the education system and quantum computing. There is no denying that quantum computing represents a massive departure from traditional approaches to computer science.
During the virtual event, Jeffrey Hammond, vice president, principal analyst, serving CIO professionals, at Forrester, asked: “Like many of many clients, I am trying to put quantum in context. How much of a change will we have to make?”
While the experts on the online panel discussion believe that quantum computing has the potential to benefit humanity, there are clearly two sides to the quantum coin. To paraphrase Winston Churchill, with great power comes great responsibility: there needs to be wide acknowledgement that the power of quantum computing could be used for malicious intent.
There are application areas for quantum computing in drug discovery and biotech, risk modelling in financial markets and complex optimisations where there are many, many variables. Any problem, where the complexity increases exponentially as the number of parameters increases, is seen as a good candidate for quantum computing.
But the flip side of this immense power is that quantum computers will be devastatingly good at cracking the toughest encryption keys and the extensive modelling and simulations a quantum computer would offer, could be weaponised.
While the experts IBM gathered were happy to talk about the skills and diversity gap that exists in the field of quantum computing, ethics and the risk of government restrictions and bans are areas that will need extensive public debate well before quantum computing becomes mainstream.
Part 1: The IP Security Architecture
Part 2: How IPsec works – IPsec, IPv4 and IPv6
Part 3: IPsec Protocols and Operations
Part 4: Cryptographic Algorithms and Deploying IPsec
The IP Security Architecture
The IP Security Architecture, or IPsec, offers an interoperable and open standard for building security into any Internet application. By adding security at the network layer (the IP layer, or layer 3 in the OSI reference model), IPsec enables security for individual applications as well as for virtual private networks (VPNs) capable of securely carrying enterprise data across the open Internet.
IPsec and its related protocols are already being widely implemented in virtual private network products. Despite its growing importance to existing deployed systems, not too many people truly grok IPsec, probably because it is complicated (a solid couple of dozen RFCs describe IPsec and its related protocols–please refer to the list of related RFCs at the end of the article).
Saying that IPsec specifies protocols for encrypting and authenticating data sent within IP packets is an oversimplification, and even obscures IPsec’s full potential.
IPsec offers the following security services, as enumerated in RFC 2401:

- Access control
- Connectionless integrity
- Data origin authentication
- Protection against replays (partial sequence integrity)
- Confidentiality (encryption)
- Limited traffic-flow confidentiality
Altogether, IPsec provides for the integration of algorithms, protocols, and security infrastructures into an overarching security architecture.
The stated goal of the IP Security Architecture is “to provide various security services for traffic at the IP layer, in both the IPv4 and IPv6 environments.” [RFC2401]. This means security services that are: interoperable, high-quality, and cryptographically-based.
The IP security architecture allows systems to choose the required security protocols, identify the cryptographic algorithms to use with those protocols, and exchange any keys or other material or information necessary to provide security services.
Part 2: How IPsec works – IPsec, IPv4 and IPv6
How IPsec Works
IPsec uses the Authentication Header (AH) and the Encapsulating Security Payload (ESP) to apply security to IP packets. These protocols define IP header options (for IPv4) or header extensions (for IPv6). Both AH and ESP headers include a Security Parameter Index (SPI). The SPI, along with the security protocol in use (AH or ESP) and the destination IP address, combine to form the Security Association (SA).
The sending host knows what kind of security to apply to the packet by looking in a Security Policy Database (SPD). The sending host determines what policy is appropriate for the packet, depending on various selectors (for example, destination IP address or transport layer ports), by looking in the SPD. The SPD indicates what the policy is for a particular packet: either the packet requires IPsec processing of some sort, in which case it is passed to the IPsec module for processing; or it does not, in which case it is simply passed along for normal IP processing. Outbound packets must be checked against the SPD to see what kind (if any) of IPsec processing to apply. Inbound packets are checked against the SPD to see what kind of IPsec service should be present in those packets.
A second database, called the Security Association Database (SAD), includes all security parameters associated with all active SAs. When an IPsec host wants to send a packet, it checks the appropriate selectors to determine the Security Policy Database security policy for that referenced destination/port/application. If the SPD references a particular Security Association, the host can look up the SA in the Security Association Database to identify appropriate security parameters for that packet.
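As a rough illustration of that two-step lookup, here is a minimal Python sketch of outbound processing; the selector fields, policy entries, and SA attributes are invented for the example and are far simpler than what RFC 2401 actually requires.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Selector:
    dst_ip: str
    dst_port: int
    protocol: str          # e.g. "tcp"

@dataclass
class SecurityAssociation:
    spi: int
    ipsec_protocol: str    # "AH" or "ESP"
    dst_ip: str
    cipher: str            # e.g. "3DES-CBC" (ESP only)
    auth_alg: str          # e.g. "HMAC-SHA-1-96"

# Security Policy Database: selector -> (action, key into the SAD or None)
SPD = {
    Selector("192.0.2.10", 25, "tcp"): ("protect", ("ESP", "192.0.2.10", 0x1234)),
    Selector("192.0.2.99", 80, "tcp"): ("bypass", None),
}

# Security Association Database, keyed by (protocol, destination address, SPI)
SAD = {
    ("ESP", "192.0.2.10", 0x1234): SecurityAssociation(
        0x1234, "ESP", "192.0.2.10", "3DES-CBC", "HMAC-SHA-1-96"),
}

def process_outbound(selector: Selector, payload: bytes) -> bytes:
    action, sa_key = SPD.get(selector, ("discard", None))
    if action == "bypass":
        return payload                                  # ordinary IP processing
    if action == "discard" or sa_key not in SAD:
        raise PermissionError("no policy or no SA for this traffic: drop")
    sa = SAD[sa_key]
    return apply_ipsec(sa, payload)                     # hand off to the AH/ESP module

def apply_ipsec(sa: SecurityAssociation, payload: bytes) -> bytes:
    # placeholder for the real AH/ESP encapsulation described later in the article
    return payload
```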
Key management is another important aspect of IPsec. Two important key management specifications associated with IPsec are: the Internet Security Association and Key Management Protocol (ISAKMP) and the Internet Key Exchange (IKE).
ISAKMP, a generalized protocol for establishing Security Associations and cryptographic keys within an Internet environment, defines the procedures and packet formats needed to establish, negotiate, modify, and delete Security Associations. It also defines payloads for exchanging key generation and authentication data. These formats provide a consistent framework for transferring this data, independent of how the key is generated or what types of encryption or authentication algorithms are being used.
ISAKMP was designed to provide a framework that can be used by any security protocols that use Security Associations, not just IPsec. To be useful for a particular security protocol, a Domain of Interpretation or DOI must be defined. The DOI groups related protocols for the purpose of negotiating Security Associations–security protocols that share a DOI choose protocol and cryptographic transforms from a common namespace. They also share key exchange protocol identifiers, as well as a common interpretation of payload data content.
While ISAKMP and the IPsec DOI provide a framework for authentication and key exchange, ISAKMP does not actually define how those functions are executed. The Internet Key Exchange (or IKE) protocol, working within the framework defined by ISAKMP, does define a mechanism for hosts to perform these exchanges.
By defining a separate protocol for the generalized formats required to do key and Security Association exchanges, ISAKMP can be used as a base to build specific key exchange protocols. The foundation protocol can be used for any security protocol, and does not have to be replaced if an existing key exchange protocol is replaced for some reason, such as if a security flaw was found in the protocol.
IPsec, IPv4, and IPv6
IPsec provides security services for either IPv4 or IPv6, but the way it provides those services is slightly different in each. IPv4 uses header options: every IP packet contains 20 bytes’ worth of required fields, and any packet that has any “special” requirements can use up to 40 bytes for those options. This tends to complicate packet processing, since routers must check the length of each packet they receive for forwarding – even though many of those header options are related to end-to-end functions such as security, with which routers are not otherwise concerned.
IPv6 simplifies header processing: every IPv6 packet header is the same length, 40 bytes, but any options can be accommodated in extension headers that follow the IPv6 header. IPsec services are provided through these extensions.
The ordering of IPsec headers, whether within IPv4 or IPv6, has significance. For example, it makes sense to encrypt a payload with the ESP header, and then use the Authentication Header to provide data integrity on the encrypted payload. In this case, the AH header appears first, followed by the ESP header and encrypted payload. Reversing the order, by doing data integrity first and then encrypting the whole lot, means that the recipient can be sure of who originated the data, but not necessarily certain of who did the encryption.
Part 3: IPsec Protocols and Operations
IPsec Protocols and Operations
One of the fundamental constructs of IPsec is the Security Association, or SA. According to RFC 2401, a “Security Association is a simplex ‘connection’ that affords security services to the traffic carried by it.” SAs provide security services by using either AH or ESP, but not both (if a traffic stream uses both AH and ESP, it has two–or more–SAs). For typical IP traffic, there will be two SAs: one in each direction that traffic flows (one each for source and destination host).
An SA is identified by three things: the Security Parameter Index (SPI), the destination IP address, and the security protocol in use (AH or ESP).
IPsec defines two modes for exchanging secured data, tunnel mode and transport mode. IPsec transport mode protects upper-layer protocols, and is used between end-nodes. This approach allows end-to-end security, because the host originating the packet is also securing it and the destination host is able to verify the security, either by decrypting the packet or certifying the authentication.
Tunnel mode IPsec protects the entire contents of the tunneled packets. The tunneled packets are accepted by a system acting as a security gateway, encapsulated inside a set of IPsec/IP headers, and forwarded to the other end of the tunnel, where the original packets are extracted (after being certified or decrypted) and then passed along to their ultimate destination.
The packets are only secured as long as they are “inside” the tunnel, although the originating and destination hosts could be sending secured packets themselves, so that the tunnel systems are encapsulating packets that have already been secured.
Transport mode is good for any two individual hosts that want to communicate securely; tunnel mode is the foundation of the Virtual Private Network or VPN. Tunnel mode is also required any time a security gateway (a device offering IPsec services to other systems) is involved at either end of an IPsec transmission. Two security gateways must always communicate by tunneling IP packets inside IPsec packets; the same goes for an individual host communicating with a security gateway. This occurs any time a mobile laptop user logs into a corporate VPN from the road, for example.
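The practical difference between the two modes is easiest to see in the resulting header order. The toy Python functions below just build lists of labels, so this is a conceptual sketch rather than real packet construction.

```python
def transport_mode(original_ip_header: str, upper_layer: str) -> list[str]:
    # the original IP header survives; only the upper-layer payload is protected
    return [original_ip_header, "ESP/AH header", upper_layer, "ESP trailer + ICV"]

def tunnel_mode(gateway_ip_header: str, original_packet: list[str]) -> list[str]:
    # the entire original packet, headers and all, becomes the protected payload
    return [gateway_ip_header, "ESP/AH header", *original_packet, "ESP trailer + ICV"]

original = ["IP (host A -> host B)", "TCP", "application data"]
print(transport_mode(original[0], "TCP + application data"))
print(tunnel_mode("IP (gateway 1 -> gateway 2)", original))
```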
The Authentication Header (AH) protocol offers connectionless integrity and data origin authentication for IP datagrams, and can optionally provide protection against replays.
The Encapsulating Security Payload (ESP) protocol provides a mix of security services: confidentiality (encryption), limited traffic-flow confidentiality, connectionless integrity, data origin authentication, and an anti-replay service.
ESP and AH authentication services are slightly different: ESP authentication services are ordinarily provided only on the packet payload, while AH authenticates almost the entire packet including headers.
AH is specified in RFC 2402. The header consists of a Next Header field, a Payload Length field, a Reserved field, the Security Parameters Index (SPI), a Sequence Number field, and a variable-length Authentication Data field.
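As an informal illustration of that layout, the fixed portion of an AH header can be unpacked with Python's struct module; the sample bytes below are fabricated for the example rather than captured from a real packet.

```python
import struct

def parse_ah_header(data: bytes) -> dict:
    # Fixed fields (network byte order): Next Header (1 byte), Payload Len (1),
    # Reserved (2), SPI (4), Sequence Number (4) -- see RFC 2402
    next_header, payload_len, _reserved, spi, seq = struct.unpack("!BBHII", data[:12])
    # Payload Len is the header length in 32-bit words minus 2, so the whole
    # AH header (including the ICV) spans (payload_len + 2) * 4 bytes
    icv = data[12:(payload_len + 2) * 4]
    return {"next_header": next_header, "spi": spi, "sequence": seq, "icv": icv}

sample = struct.pack("!BBHII", 6, 4, 0, 0x1234, 1) + bytes(12)   # fabricated header
print(parse_ah_header(sample))
```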
The Sequence Number field is mandatory for all AH and ESP headers, and is used to provide anti-replay services. Every time a new packet is sent, the Sequence Number is increased by one (the first packet sent with a given SA will have a Sequence Number of 1). When the receiving host elects to use the anti-replay service for a particular SA, the host checks the Sequence Number: if it receives a packet with a Sequence Number value that it has already received, that packet is discarded.
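A simplified sketch of the receiver's anti-replay check is shown below; real implementations track a sliding window (commonly 32 or 64 packets) with a bitmap, and the window size and structure here are only illustrative.

```python
class AntiReplayWindow:
    """Toy sliding-window anti-replay check for one inbound SA."""

    def __init__(self, window_size: int = 64):
        self.window_size = window_size
        self.highest_seq = 0          # largest sequence number seen so far
        self.bitmap = 0               # bit i set => (highest_seq - i) already seen

    def check_and_update(self, seq: int) -> bool:
        if seq == 0:
            return False                                  # sequence numbers start at 1
        if seq > self.highest_seq:                        # new right edge: slide window
            shift = seq - self.highest_seq
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.window_size) - 1)
            self.highest_seq = seq
            return True
        offset = self.highest_seq - seq
        if offset >= self.window_size:
            return False                                  # too old to judge: reject
        if self.bitmap & (1 << offset):
            return False                                  # duplicate: replayed packet
        self.bitmap |= 1 << offset
        return True

window = AntiReplayWindow()
print([window.check_and_update(s) for s in (1, 2, 2, 5, 3)])
# -> [True, True, False, True, True]
```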
The authentication data field contains whatever data is required by the authentication mechanisms specified for that particular SA to authenticate the packet. This value is called an Integrity Check Value (ICV); it may contain a keyed Message Authentication Code (MAC) based on a symmetric encryption algorithm (such as CAST or Triple-DES) or a one-way hash function such as MD5 or SHA-1.
ESP is specified in RFC 2406, and while similar to AH in many ways, it provides a wider selection of security services and can be a bit more complex.
The ESP packet format defined in RFC 2406 places the SPI and Sequence Number fields first, followed by the (optionally encrypted) Payload Data, then Padding, a Pad Length field, the Next Header field and, finally, optional Authentication Data.
The most obvious difference between ESP and AH is that the ESP header’s Next Header field appears at the end of the security payload. Of course, since the header may be encapsulating an encrypted payload, you don’t need to know what header to expect next until after you’ve decrypted the payload. Thus, the ESP Next Header field is placed after, rather than before, the payload. ESP’s authentication service covers only the payload itself, not the IP headers of its own packet as with the Authentication Header. And the confidentiality service covers only the payload itself; obviously, you can’t encrypt the IP headers of the packet intended to deliver the payload and still expect any intermediate routers to be able to process the packet. Of course, if you’re using tunneling, you can encrypt everything, but only everything in the tunneled packet itself.
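The ordering described above can be summarized in a short assembly sketch. The helper below follows the RFC 2406 layout but uses stand-in `encrypt` and `mac` callables; they are not real cryptography, and the block size and field values are only examples.

```python
import struct

def esp_encapsulate(spi: int, seq: int, payload: bytes, next_header: int,
                    encrypt, mac, block_size: int = 8) -> bytes:
    # pad so that payload + padding + 2 trailer bytes fill whole cipher blocks
    pad_len = (-(len(payload) + 2)) % block_size
    trailer = bytes(pad_len) + struct.pack("!BB", pad_len, next_header)
    ciphertext = encrypt(payload + trailer)        # confidentiality (payload only)
    header = struct.pack("!II", spi, seq)          # SPI and sequence number, in the clear
    icv = mac(header + ciphertext)                 # optional authentication over both
    return header + ciphertext + icv

# toy stand-ins so the sketch runs; a real SA would name actual algorithms
fake_encrypt = lambda data: bytes(b ^ 0x5A for b in data)
fake_mac = lambda data: bytes(12)
packet = esp_encapsulate(0x1234, 1, b"GET / HTTP/1.0\r\n", next_header=6,
                         encrypt=fake_encrypt, mac=fake_mac)
print(len(packet))   # 8-byte header + 24-byte ciphertext + 12-byte ICV = 44
```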
Part 4: Cryptographic Algorithms and Deploying IPsec
Although there is no IPsec without encryption and authentication algorithms, which algorithms you use doesn’t matter all that much – as long as the ones you use are secure. The fact is, IPsec was designed to allow entities to negotiate the appropriate security mechanisms from whatever algorithms each supports, using ISAKMP-based key and SA management protocols.
There is currently some controversy over which algorithms should be used in IPsec, and which should be considered basic parts of any IPsec implementation. The Data Encryption Standard, or DES, has recently proven to be vulnerable to relatively inexpensive brute-force attacks; there is a significant movement to have it deprecated for use in IPsec. At the same time, the US National Institute of Standards and Technology (NIST) is in the process of selecting DES’s successor algorithm, the Advanced Encryption Standard or AES.
Implementing and Deploying IPsec
The IPsec specification (found in RFC 2401) states there are several ways to implement IPsec in a host or in conjunction with a router or firewall: integrating IPsec into a native IP implementation, implementing it beneath an existing IP stack (“bump-in-the-stack”), or implementing it in an outboard crypto processor (“bump-in-the-wire”).
Most organizations are likely to buy rather than build their IPsec implementation. VPN vendors usually claim to support IPsec, though some are more interoperable than others. Resources for checking interoperability include:
IPsec continues to evolve as research reveals new tools for security and new threats to security. To stay on top of the latest IETF standards developments, check:
There is no longer any question about whether or not the Internet will be important to your business; it already is. IPsec provides a framework within which you can use the Internet as your own, secure, virtual private network.
IPsec and related RFCs
- RFC 1320 The MD4 Message-Digest Algorithm
- RFC 1321 The MD5 Message-Digest Algorithm
- RFC 1828 IP Authentication using Keyed MD5
- RFC 1829 The ESP DES-CBC Transform
- RFC 2040 The RC5, RC5-CBC, RC5-CBC-Pad, and RC5-CTS Algorithms
- RFC 2085 HMAC-MD5 IP Authentication with Replay Prevention
- RFC 2104 HMAC: Keyed-Hashing for Message Authentication
- RFC 2144 The CAST-128 Encryption Algorithm
- RFC 2202 Test Cases for HMAC-MD5 and HMAC-SHA-1
- RFC 2268 A Description of the RC2(r) Encryption Algorithm
- RFC 2401 Security Architecture for the Internet Protocol
- RFC 2402 IP Authentication Header
- RFC 2403 The Use of HMAC-MD5-96 within ESP and AH
- RFC 2404 The Use of HMAC-SHA-1-96 within ESP and AH
- RFC 2405 The ESP DES-CBC Cipher Algorithm With Explicit IV
- RFC 2406 IP Encapsulating Security Payload (ESP)
- RFC 2407 The Internet IP Security Domain of Interpretation for ISAKMP
- RFC 2408 Internet Security Association and Key Management Protocol (ISAKMP)
- RFC 2409 The Internet Key Exchange (IKE)
- RFC 2410 The NULL Encryption Algorithm and Its Use With IPsec
- RFC 2411 IP Security Document Roadmap
- RFC 2412 The OAKLEY Key Determination Protocol
- RFC 2451 The ESP CBC-Mode Cipher Algorithms
- RFC 2631 Diffie-Hellman Key Agreement Method
Pete Loshin has written a dozen books on networking and the Internet, and is editor of the soon-to-be released “Big Book of IPsec RFCs: Internet Security Architecture” (Morgan Kaufmann 1999). Other books include “TCP/IP Clearly Explained” 3rd edition (Morgan Kaufmann 1999) and “Extranet Design and Implementation” (SYBEX 1998). You can reach him at email@example.com or http://www.loshin.com.
In the cybersecurity market, detecting attacks early — hopefully, before a breach occurs, but certainly as early in the kill chain as possible — and neutralizing them before damage is done is critical.
But success in today’s complex technology environment depends on security analytics and their effectiveness.
Security analytics is a generic term for a data-centric approach to cybersecurity. It combines software, algorithms, and analytic processes to analyze volumes of data and detect threats to information systems.
A discussion of security analytics often leads to the question of quantity. How many analytics does a solution have? How many should it have? The more analytics you have, the more protected your systems are, right?
With analytics, once you get beyond a minimum threshold, it’s the quality of the analytics that matters. The numbers game alone doesn’t mean much — it depends on what the analytic is aimed at and how deep it goes.
What does this mean?
The best way to explain this quality concept is through an example. Let’s say you have some expensive diamond jewelry stored in a vault in your home, and you want to keep it from being stolen. You secure the entrances to your home, but thieves are persistent, mounting nearly continuous assaults, looking for any possible vulnerability or mistake in your security system.
What if they find a way in?
If you had a security system based on a set of analytics that tracked any time a piece of jewelry was removed and replaced, noted when these actions occurred, and flagged any activity considered suspicious or unusual, you could investigate and then take action to stop the items from being taken.
In other words, with a small set of targeted, behavior-based analytics, you can ensure the protection of your most valuable items. Otherwise, you may need hundreds, if not thousands, of rule-based analytics continuously monitoring your environment to achieve the same level of protection.
Why does this matter?
One reason is false positives.
For example, a solution may claim to use thousands of analytics. But what do those analytics actually do, and what do they protect you against? Just because a solution touts thousands of analytics doesn’t necessarily mean your systems have more protection.
This is because alerts and analytics are closely related: the more analytics you have, the more alerts you have. And more alerts often result in a higher number of false positives, which take time and energy to investigate. Security teams report wasting about 25% of their time chasing down false positives.
A second reason is effectiveness.
Although it is true that the traditional concept of the network perimeter has changed, there is still a layering aspect to security in that our goal is to stop attacks from penetrating our networks whenever possible. For example, we know web applications and web application management interfaces are attractive targets for attackers. Therefore, using targeted analytics, we can stop a lot of attacks before they ever gain entry to our networks.
Another example of this is protecting cloud infrastructure. A few targeted analytics can identify suspicious behavior such as login attempts from unusual locations worldwide, troubling API calls, or requests to start up new infrastructure environments. This can help you cover the highest risks to your environment without overwhelming your system with trivial alerts.
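As a purely hypothetical illustration of what one such targeted, behavior-based analytic might look like, the Python sketch below flags successful sign-ins from countries a user has never logged in from before; the event fields, the baseline, and the alert text are all invented for the example and do not reflect any particular product.

```python
from collections import defaultdict

# countries each user has previously signed in from, seeded from historical logs
known_locations = defaultdict(set)
known_locations["alice"].update({"US", "CA"})

def check_login(event: dict) -> list[str]:
    """Return alert messages for one login event; an empty list means nothing unusual."""
    alerts = []
    user, country = event["user"], event["country"]
    if event["success"] and country not in known_locations[user]:
        alerts.append(f"{user}: successful login from unusual location {country}")
    if event["success"]:
        known_locations[user].add(country)   # learn the location after alerting once
    return alerts

print(check_login({"user": "alice", "country": "RO", "success": True}))
# -> ['alice: successful login from unusual location RO']
```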
Investigate the depth and breadth of analytics
Security analytics are essential for cybersecurity. But quantity of analytics alone doesn’t ensure your systems are protected. That’s why using a combination of approaches is important.
Alert Logic’s analytics engine applies a variety of analysis techniques. These include traditional rule-based analytics, machine learning, and user-behavior analytics to ensure you have the depth and breadth of protection you need to secure your most valuable assets.
To learn more about Alert Logic’s managed detection and response (MDR®) solution, visit https://www.alertlogic.com/managed-detection-and-response and schedule a live demonstration.
Need to Know: Fibre to the Cabinet
Fibre is to thank for the increase in broadband speeds the country is witnessing, but how has it been getting to our homes and businesses?
The dominant technology in the race to provide high speed broadband to the country is Fibre to the Cabinet (FTTC). But what is FTTC, who is using it and how?
What is FTTC?
FTTC stands for Fibre to the Cabinet. A fibre cable runs from an exchange to a roadside cabinet. From here it is linked to homes via an existing copper network. This then connects the home or business to the broadband connection available to them.
Why use FTTC?
Fibre cable is used as it can achieve the fastest speeds for the internet. The problem is to get everyone connected to it directly would be very costly. By taking it to a cabinet just a few hundred metres from the house and connecting it up using existing copper networks, it gives many users the speed benefits of fibre without the added cost.
Unfortunately, there is a sacrifice to be made, as FTTC does not achieve the same speeds as fibre deployed directly to a building (FTTP, or Fibre to the Premises), but it is still a big improvement over previous technologies.
Which companies use FTTC?
The major high speed broadband projects hitting the news at the moment are all using FTTC technology.
BT's current pilots of 40Mbps broadband taking place across the country, which it hopes will eventually hit 60Mbps, are using FTTC technology, as is TalkTalk's 40Mbps broadband trial taking place on BT's FTTC network.
Virgin Media is also using a type of FTTC for its high speed broadband. It is using DOCSIS3 to link up the customer to the cabinet and this is increasing speeds even further. It has so far used it to connect 12 million people to broadband up to 50Mbps and is now trialling 200Mbps broadband in Kent.
Chapter 1 Understanding Service Management
In This Chapter
▶ Defining service management
▶ Understanding that everything is a service
▶ Measuring, managing, and optimizing
▶ Delivering service in a complex world
A service can be something as simple as preparing and delivering a meal to a table in a restaurant or as complex as managing the components of a data center or the operations of a factory. We’re entering an era in which everything is a service. A service is a way of delivering value to a customer by facilitating the expected outcome. That definition sounds simple enough, but it can be rather complicated when you look deeper.
Suppose that you’re hungry, and you want to get something to eat at a restaurant. You have some decisions to make. How quickly do you want or need a meal? How much time do you have? How much money do you want to spend? Are there types of food that you prefer? We make these types of decisions every minute of the day. So if you’re hungry, have 20 minutes and a limited amount of money, and want something familiar to eat, you might go to a fast-food restaurant, and your expectations probably will be met. In fact, you probably didn’t notice or even pay attention to any of the inner workings of the fast-food service provider. If the customer can find, order, receive, and be satisfied with the service – without incident – good service management is in place. But what if something weird happened? You walk into that fast-food restaurant, expecting to get the sandwich you always order quickly, but instead, a hostess greets you and informs you that the wait for a table will be 20 minutes. Lovely music is playing, and every table has a white tablecloth.
Naturally, you’re confused. You start thinking about the inner workings of service management in that restaurant. What has gone wrong? Is someone not doing his job? Is some information about customer expectations missing? Is someone changing the expected outcomes without informing customers? You might even start trying to solve the problem by asking probing questions. In your confusion, you walk out of the restaurant and find somewhere else to get a sandwich.
Why are we telling you this crazy story? When you’re thinking about service management (monitoring and optimizing a service to ensure that it meets the critical outcomes the customer values and stakeholders want to provide), many dimensions and aspects may not be apparent at the outset.
In this chapter, we give you a glimpse into the new world of service management. Clearly, effective service management requires an alignment of the overall business goals and objectives. This type of alignment isn’t a one-time task: An iterative cycle is involved, not only on a strategy level, but also within each stage of service management. Creating a valuable customer experience requires a lot of behind-the-scenes work that the customer never sees unless something goes wrong. As we show in the examples in this chapter, you can’t ignore one element of the overall service management process without affecting the way that the entire system works.
I’m from the generation that grew up watching Knight Rider. Every Saturday evening with a packet of Skips crisps. My Gran thought David Hasselhoff was ‘dishy,’ but I watched it for the car, I wanted a KITT, a car that I could summon via my watch to rescue me when in grave peril.
Thankfully, I don’t often find myself in grave peril but a car that drives itself, parks itself and picks me up would be useful – and now it seems – a real possibility.
Having done some research here are my top five facts about autonomous vehicles:
1) What does autonomous mean?
An autonomous vehicle is one that is capable of fulfilling the main transportation capabilities of a traditional car, but is also capable of sensing its environment and navigating without human input.
Sometimes called robotic cars, they currently exist mainly as prototypes and demonstration systems. As of 2015 the only self-driving vehicles that are commercially available are open-air shuttles for pedestrian zones that operate at 12.5 miles per hour.
2) Does the technology exist?
Yes, the technology already exists. Both Audi and Mercedes Benz amongst other car manufacturers have already built fully autonomous vehicles that can operate up to 18 mph. The only thing holding them back is regulation.
At the recent CES 2015 in Las Vegas several car manufacturers unveiled their technology. With BMW’s Remote Valet Parking Assistant, finding a parking space and parking your car could be a hassle of the past. Drivers just need to arrive at a venue and send the car off on its own to find an open space in a carpark. The sensors can detect unexpected obstacles like an incorrectly parked vehicle and just steer around them. After finding a suitable spot, it just parks, locks the doors, and waits for the driver to summon it back when they are done.
3) When will they be available?
According to Wired we should expect to see a phased approach, ‘rolling out cool new features in otherwise conventional cars. In three to five years, we can expect cars to do the heavy lifting during traffic jams and highway cruising, but cede control to their carbon-based occupants the rest of the time.
Beyond that comes the more difficult challenge of driving in urban arenas, where there are far more obstacles and variables, like pedestrians, cyclists, cabbies and the like. That’s a tougher nut to crack, but our cars will become increasingly autonomous over the next 25 years, and we can expect them to be fully autonomous by 2040.’
So it’s going to be a mix of letting the car do the driving, and still taking the wheel when you want to or need to.
Google believes that, “A limited-environment low-speed vehicle will be technologically and socially viable sooner than a vehicle capable of operating anywhere,” Rather than taking a conventional car and making it fully automated Google is also piloting an unconventional two-seater test vehicle, known within Google as “Prototype”. The small, pod-style cars aim to get fully automated cars into everyday use. They look cute, have a top speed of only 25 miles per hour and low impact resistance – but perfect for nipping around towns and cities and if everyone was using them the most common accident is likely to be the odd dint and scratch. Google still has significant work to do before its software can handle all the situations a human driver can. But it will be easier to build, test, and market small vehicles for limited environments than to craft autonomous cars that can handle everything from high-speed freeway driving to city streets, they say.
4) What will they look like?
Visit the website behind the Mercedes F 015 Luxury in Motion research car and you’ll find something that looks very different to Google’s prototype.
Mercedes is quick to point out that if you’re just looking at the technology then you’re missing the point – it’s a mobile living space. It certainly looks the part, with large amounts of glass to make it feel open and spacious, and seats that turn around so you can face your fellow passengers to talk. If you don’t feel like chatting, touch screens in every door give passengers access to the internet and a 360 degree view around them. So you can see everything around you when you’re in the vehicle and keep track of exactly where your car is when you’re not.
5) Safer and greener driving
Smart technologies can operate cars a whole lot better and more efficiently than people.
“Ultimately the journey that the industry is on has benefits for safety and the environment too,” says a spokesperson from Jaguar Land Rover, which is investing heavily in self-drive software research.
Imagine a car that can communicate with the cloud to identify the location of accidents or road congestion ahead, and then automatically re-route, for instance. Or put yourself in a vehicle that can “talk” to traffic lights wirelessly and regulate your speed so as to hit a green light every time.
“That’s very efficient because when you’re stopping and starting that’s when you have the most load on the engine, which means more fuel use,” says the Jaguar Land Rover spokesperson.
It’s the promise of greater safety where autonomous driving really comes into its own, industry advocates claim. Ultimately, most accidents happen because of human error, says a spokesperson from the Society of Motor Manufacturers and Traders, a UK trade association.
“Computers don’t get bored or distracted, or take their eyes off the road because they want to change the radio station or make a phone call,” he maintains. He also notes that Google’s self-drive vehicle has never so much as nudged another car.
Coming to a street near you
With the UK government announcing a £10m trial that will see the first autonomous trial cars hitting the streets of three selected cities in 2015, it seems automated vehicles really are going to be a part of the not too distant future. My five year old keeps asking me when he can learn to drive but I’m now starting to wonder if he’ll ever need to. The next generations will benefit from safer, cleaner roads but they may also miss out on the pleasure that driving can bring.
What do you think?
Also known as slush powder, superabsorbent polymers (SAPs) are water-absorbing polymers: through hydrogen bonding with water molecules, they can absorb and retain amounts of water or aqueous solution many times their own mass. SAPs have found application in personal disposable hygiene products such as baby diapers, sanitary napkins, and adult incontinence products.
The global super absorbent polymers market was valued at USD 9 billion in 2019. By 2024, the market is expected to reach USD 12.9 billion, at a CAGR of 7.4%. Rising awareness about hygiene, increasing population including increasing geriatric population, and growing demand for baby diapers are some of the main factors that will have a favorable impact on the global market.
The use of superabsorbent polymers is expected to increase further with the growing awareness about the advantages offered by adult incontinence products. In fact, governments in countries like Australia, the US, the UK, and Germany have taken initiatives to support the use of adult incontinence products. Such developments will help increase the demand for adult incontinence products and thereby, propel the growth of the global super absorbent polymers market.
Apart from these, initiatives taken by international organizations such as World Bank, WHO, UNICEF, and UNESCO and several other local bodies, to increase awareness about menstrual hygiene and provide affordable feminine hygiene products have boosted the use of these products across the world, especially in developing countries.
The demand for baby diapers has also been increasing continuously due to increased birth rates and growing disposable incomes. This too will increase the demand for superabsorbent polymers. However, one of the major concerns related to superabsorbent polymers is the disposal of non-biodegradable superabsorbent polymers.
As superabsorbent polymers used in personal hygiene products are generally made from petroleum-based raw materials, they are non-biodegradable in nature. So, there is an increasing focus on developing bio-based superabsorbent polymers made from products like cellulose, starch, chitosan, and natural gum.
Although bio-based polymers account for a very small share of the global super absorbent polymers market, the demand for them is likely to increase significantly in the coming years. This will give a major boost to the global market for super absorbent polymers.
However, high raw material cost, as well as the volatility in raw material prices, is likely to hinder the growth of the global superabsorbent polymers market. Nevertheless, the development and adoption of eco-friendly products are expected to provide significant growth opportunities in the market.
At present, the global market for super absorbent polymers is being dominated by North America. In North America, the US is the largest market for superabsorbent polymers. However, in the coming years, the Asia-Pacific region is projected to emerge as the largest market.
Increasing life expectancy and birth rates, and growing awareness about personal hygiene products in the countries of the Asia-Pacific region are going to increase the demand for baby diapers and adult personal hygiene products in the region. This will propel the growth of the Asia-Pacific superabsorbent polymers market.
The major players operating in the global superabsorbent polymers market are Nippon Shokubai (Japan), BASF (Germany), Sumitomo (Japan), Evonik (Germany), LG Chem (South Korea), SDP Global (Japan), Yixing Danson (China), Satellite Science & Technology Co., Ltd. (China), and Kao Corporation (Japan).
Years ago, the internet was referred to as the “information highway.” The technology is now here to make that metaphor a reality that can span entire countries to connect dense urban areas and isolated remote locations. National highways connect industries, cities, small villages and vacation destinations. By using the right of way of the highway to install a high-speed connectivity backbone, transportation departments can not only increase the safety and efficiency of the highway but also change the quality of life for each business, village and residence along the path.
Of course, wireless connectivity improves the safety and efficiency of the highway itself:
- Video surveillance to monitor traffic and road conditions
- Wireless connectivity to emergency call boxes along the route
- Connections to road maintenance depots to coordinate services
- Wi-Fi at rest areas
- Wi-Fi hot spots at road construction sites to connect workers on site
- Digital signage to advise motorists of construction or weather conditions
- Road sensors for weigh stations or to monitor bridge integrity
By leveraging the connectivity backbone, connectivity can also be provided to remote industrial facilities and small villages. Industries can connect their processing plants to increase production efficiency, implement Internet of Things (IoT) automation, and improve employee safety.
Leveraging the backbone also enables connectivity for schools, hospitals, smaller businesses and residents in smaller towns along the highway where internet connectivity is scarce. This would bridge the digital divide and provide the same high-speed connectivity that is common in urban centers, improving opportunities for small businesses and students in remote areas.
Any highway can now literally become an “information highway” with cloud-powered intelligent edge communications because wireless connectivity can be rapidly deployed to provide a wireless fabric including high-capacity backhaul along the route while using wireless distribution networks to connect across a given area and offer indoor and outdoor Wi-Fi access. Cambium Networks can provide the wireless radio technology and also the planning tools and cnMaestro™ end-to-end management system that makes it easy to install, provision and monitor network performance.
“An ounce of prevention is worth a pound of cure” The saying turns out to be a harsh reality especially when it comes to securing information over the internet. Unfortunately, we are living in an era when one tiniest crack in your defences can wreak a considerable amount of havoc.
We humans love to play the blame game, and blaming tech is no exception. I still remember how renowned companies like Equifax and Yahoo were in the spotlight for massive and preventable data breaches this past year. Apart from that, WannaCry, a high-profile piece of ransomware, was found to threaten several organisations at a time, especially in the healthcare industry. Hackers are growing smart and sophisticated. With the effective use of software, bots, viruses, Trojans and phishing techniques, they no longer seem to be those bored-looking suburban kids who enjoyed causing virtual mischief for the time being. An ethical hacker can be a good software developer, or they can even go against established business models, pushing new ideas to their maximum potential, but at the same time, one wrong step can take them to places from which there is no U-turn.
Do you know that it takes just 10 minutes to crack a six-character password? Many attacks are automated, so cybercriminals can access data even while they sleep. You could even joke that hacking is the one field free of discrimination, because a hacker can be anyone. All they need to do is send an army of bots to look for weak points across the internet. Specific types of malware are used to hack certain devices, access cameras or compromise a network. Some criminals also have a clear motive when orchestrating attacks, for example stealing valuable information to sell on the black market, or damaging a company's reputation in a way that takes a long time to repair.
So what needs to be done?
People, it's time to shore up your protections, but first you must know a few signs of trouble. What will you do if attackers have already broken in and you have no clue about it?
1. Ransomware messages: One of the most obvious signs of a network attack. Since these messages appear on the very first page of the site, they are easy to identify. They restrict access to the remaining content until the victim pays a specific amount to the hacker. You won't necessarily encounter these signs only while visiting an infected website at work; one email or spam message is enough to direct a recipient to a site containing malware or infected files. Such messages can look so legitimate that nobody would think twice about doing as the email instructs. As soon as you fall into the trap, the hacker installs ransomware on the victim's computer and takes control of it.
Solution: One of the best approaches is to refuse to pay any of the demanded money and to seek expert assistance first. Also, shut down and disconnect any infected parts of the system; this will help prevent further damage. Report the attack to law enforcement as well. Most important of all, keep backing up your data and implement a recovery solution; it will be a great help in putting the pieces back together quickly.
2. Computers acting on their own: Do you find your mouse cursor moving on its own? Do you notice signs of something external controlling your device? It isn't a ghost; it's what we call a remote desktop hack. Quite frightening, isn't it?
Solution: Companies need to react by immediately disconnecting all affected computers from the network and then determining the point of entry. In addition, network traffic must be monitored for suspicious activity at regular intervals. Of course, you should run a virus scan, sign out of all programs or services on the affected machine, and set up new passwords for everything.
3. Unwanted browser toolbars: Another common sign of exploitation is finding that your browser has multiple new toolbars you never installed. Dump these tools unless they come from a renowned source, keep reviewing all installed and active toolbars at regular intervals, and remove the ones that are entirely unknown to you. You can also avoid malicious toolbars in the first place by making sure that all your software is fully patched, and by reading the licensing agreement before installing anything; I am sure you will know what needs to be done.
4. Unexpected encrypted files: Another kind of ransomware attack involves the hacker encrypting files, barring access to them until the victim pays the requested amount of money. It is practically impossible to detect encrypted files until you click on them and find you cannot open them, which is why it is always advisable to take proactive safeguards against malware.
Solution: Running a day-to-day antivirus scan is the smartest thing to do, and users must also keep the associated software updated. As I said before, be vigilant when clicking on links or downloading attachments that seem out of the ordinary. Keep essential files in multiple places; for example, do not store all of your data on the work computer, but use a USB drive or a cloud application like G Suite as well.
5. Pop-ups everywhere: I am sure you have come across the horrifying message stating “You have been hacked!” Getting random browser pop-ups from websites that don't usually generate them is another sign that your system has been compromised. It feels like continuously battling email spam, but worse.
Solution: Random pop-ups are usually generated by the malicious mechanisms described above, such as redirected internet searches, fake antivirus messages and unwanted toolbars. So the fix is as simple as it sounds: get rid of all the unwanted toolbars and other unwanted programs, and the pop-ups should disappear with them.
Vikash Kumar, manager, Tatvasoft
Image source: Shutterstock/hywards | <urn:uuid:a8bb54b4-253a-43d5-ba67-cff96d808738> | CC-MAIN-2022-40 | https://www.itproportal.com/features/follow-these-steps-to-fight-back-a-hacked-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00796.warc.gz | en | 0.946943 | 1,246 | 2.5625 | 3 |
It has been well established that Artificial Intelligence has contributed to increased productivity, efficiency, and development of society. Artificial Intelligence involves the use of machines; therefore, companies in different sectors are making efforts to produce and develop several mechanisms that can take the place of humans.
The spread of computers from the 1970s onward helped bring Artificial Intelligence into practical use, and it has supported the growth of many software companies since. Artificial Intelligence is very relevant to different sectors of the economy, with industries ranging from manufacturing, energy, healthcare, and industrial services to construction and defense.
The Scope of Artificial Intelligence
The effect of AI (Artificial Intelligence) on the evolution of technology cannot be emphasized enough. Instead of relying on manual reasoning and manual tasks, machines can provide high-quality performance with far less effort. In the finance and economic sector, the introduction of AI has improved organizational development to a great extent; fraudulent activities can be predicted and prevented before they do damage. The use of unauthorized debit cards, for example, is easy to detect with machines powered by AI.
In the automotive sector, AI has contributed to the evolution of technology through the creation of self-driving cars, drones, and self-driving trucks that can perform enhanced services, an advance that has encouraged the establishment of many automobile companies producing different brands of technology.
Artificial Intelligence has introduced amazing capabilities and generated new trends in technology. Companies like Microsoft, Google, and Apple keep releasing improved features and products that positively affect the evolution of technology: Apple introduced Siri, and Google's AI technology can help predict medical outcomes. As search engines have grown smarter, Search Engine Optimization has also become an essential way to draw traffic to your business website.
The evolution of technology does not only involve the creation of new gadgets; it also aims to simplify the algorithms that turn technology into practical skills. Online apps for advertising, social media, and networking can replace manual interactions between individuals all over the world. Apps like Instagram, LinkedIn, Path, Slack, and Keynote handle both business and corporate tasks. Staying connected with others has never been easier than it is now.
The impact of AI on the healthcare sector will be enormous; AI has been geared to provide X-ray readings and apps that can remind you to take your pills every day. Also, rather than using the human mind to reason and represent, AI has been put in place to do all types of logical reasoning.
Technology does not necessarily have to be physical; it can also be abstract, and this abstraction plays out clearly in the financial sector. Artificial Intelligence is employed to detect fraud, improve customer representation and recognition, and handle the needs of customers. Book-keeping is no longer done purely by human labor; machines now calculate the daily transactions in nearly every banking or financial system.
The entertainment sector has improved widely due to the availability and evolution of exciting technology. Video game consoles such as the Xbox and PlayStation have replaced older video games and further reduced people's physical activity. Is this a good thing or a bad thing? It's best for you to decide. People can sit in the comfort of their home to play soccer on a screen and still get some of the satisfaction of physical activity. Is this psychological pleasure worth the trade-off? Well, yet another point to ponder in the long run.
The Effects of Artificial Intelligence
The effect of AI on the evolution of technology will improve the experience of both customers and manufacturers, as well as analytical marketing. Instead of individuals performing specific dangerous jobs, intelligent machines can be used to replicate those manual efforts. With Artificial Intelligence in place, work can be done with very high accuracy and efficiency and far fewer errors.
Artificial Intelligence has affected the evolution of technology in many different sectors, and the world has already experienced its enormous impact. Forecasts indicate that in the years to come AI will expand significantly, and its influence will emerge within every corner of our society.
Why didn't Skype predict or anticipate the problems that emerged during the recent two-day outage for its peer-to-peer IP telephony service, which is used by 220 million users? For some, the outage raised questions about the scalability of peer-to-peer technology. But as Todd Hoff notes at High Scalability, the growth of huge networks can introduce variables that can be difficult to predict and assess. "How could Skype possibly test booting 220 million servers over a random configuration of resources?" Todd asks. "Answer: they can't. Yes, it's Skype's responsibility, but they are in a bit of a pickle on this one." He continues:
The boot scenario is one of the most basic and one of the most difficult scalability scenarios to plan for and test. You can't simulate the viciousness of real-life conditions in a lab because only real-life has the variety of configurations and the massive resources needed to simulate itself. It's like simulating the universe. How do you simulate the universe if the computational matrix you need is the universe itself? You can't. You end up building smaller models and those models sometimes fail.
Todd shares his own experiences with the "big boot scenario," as well as the way these scenarios play out in centralized and peer-to-peer networks. | <urn:uuid:d4a7b8ed-68f0-4071-902d-29d2349c152a> | CC-MAIN-2022-40 | https://www.datacenterknowledge.com/archives/2007/08/30/skype-scalability-and-boot-risk | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00196.warc.gz | en | 0.961638 | 269 | 2.640625 | 3 |
At Central Alarm we offer a variety of home protection devices, including burglary alarm and fire detection options. Understanding the basics of Alarm systems will help the user better operate their own system, and consequently, increase their overall safety.
A typical alarm system involves a premise control unit (PCU), also known as the alarm control panel. This is essentially the executor of the alarm system. It reads sensor inputs, arms and disarms the alarm, and detects intrusions signalled by the sensors. In modern systems, the premise control unit consists of a circuit board and a visual display screen. The PCU is also accompanied by its own power supply so that the alarm system cannot be defeated simply by cutting power.
The reconnaissance unit of an alarm system is no doubt the sensors. They are usually placed around the perimeter of a house to alert the PCU if an intruder is present. From here the PCU can do two things: trigger a light on the property to expose and frighten the intruder, or sound the alarm. Sensors can be placed almost anywhere, but the most crucial positions are the front door, back door, and vulnerable windows.
Many alarm systems involve bells, whistles, sirens and flashing lights to indicate the detection of an intruder. The nerves of a criminal perpetrating a burglary are no doubt on edge. Triggering an alarm will spike those nerves and cause the criminal to scurry away. Although catching the burglar safely is ideal, most criminals will stay far away from your property after they realize it's protected by an alarm.
In addition to these features most alarm systems are accompanied by a monitoring service. In the event of an intruder, the PCU will contact a central operating system which will take the appropriate steps to secure the safety of your home. | <urn:uuid:48fced76-9612-4fcf-b360-da53ffb7a610> | CC-MAIN-2022-40 | https://central-alarm.com/2015/04/14/alarm-system-101/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00196.warc.gz | en | 0.925924 | 371 | 2.734375 | 3 |
Answer the following questions, and refer to Appendix G, "Answers to Review Questions," for the answers.
What is common to OSPF and Integrated IS-IS?
How is the router identified in an IS-IS environment?
What is the difference between NSAP and NET?
What does a unique system ID define?
Which network representations are supported by IS-IS?
What is a pseudonode?
How do two Level 1 areas communicate?
How do systems find each other in IS-IS?
List the types of adjacencies between IS-IS systems.
How is IS-IS routing enabled on Cisco routers? | <urn:uuid:25005f07-ff4f-4953-9d96-a5dc54ba6dcd> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=31572&seqNum=9 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00196.warc.gz | en | 0.931444 | 138 | 2.828125 | 3 |
The number of smart digital innovations being introduced in cities has increased over the last ten years. Many of the innovations have been introduced as one-off projects as technology develops, awareness grows, and city needs and budgets permit. These IoT innovations are varied, including intelligent traffic control, smart parking, smart lighting, remote infrastructure monitoring, waste management, smart sensors, and even shark monitoring. The opportunities grow by the day.
Each of these initiatives is a huge step in the right direction and can help improve city efficiencies and sustainability, but in isolation, the benefits are often diminished. Many IoT initiatives involve proprietary systems, which rely on different frameworks, and use different data architectures or sensor hardware that speak different “languages.” Managing one of these projects in isolation is fine, but as the number grows, the management of these smart systems can quickly become an IT nightmare.
There are significant benefits and efficiencies to be gained by integrating disparate IoT systems and data. Let’s take a look at some of the possible improvements.
Many cities run small pilot projects to test an innovation. While often successful, many still struggle to scale the innovation and roll it out. This can often be attributed to issues with system compatibility, which may require customization, retrofitting, and sometimes rebuilding, all of which add time and can blow out a project’s costs.
When planning a smart city innovation there are three things to focus on to ensure you avoid IoT silos:
The linking of legacy IT systems, IoT sensors, and data architectures needs to be at the forefront when considering future solutions. When looking for a solution, cities should prioritize innovations that use open standards and are committed to interoperability.
Many IoT projects are still considered technology projects rather than operational transformation projects. An operational transformation project can benefit a wide range of stakeholders, even if the project wasn’t designed for them. As innovation is used and embraced by a greater number of stakeholders, the inherent value will be more easily recognized, which will help embed and scale the solution.
The beauty of digital innovation and IoT is the vast amounts of data that can be produced. Ensure you have a plan for how you are going to store and access the data. Even if you are not using the data immediately for advanced analytics or artificial intelligence, you very soon may choose to do so. The more data available the better, so the integration of all available data, even from different systems, will prove to be invaluable. An ideal period for artificial intelligence is two years of historical data, so we recommend you start storing your data from day one.
Finally, teams and personnel come and go, but an interconnected, interoperable smart system is built to last. Future city and council employees will one day be grateful for the vast amounts of data stored, and city services and resources can continue to be optimized and improved to increase efficiency and effectiveness. This is the beauty of integrated IoT for smart cities.
Backups are needed for everything from restoring a lost or damaged file to completing an entire disaster recovery. Data backups require a lot of storage space, which takes up more resources and drives costs up. To make the most of this storage, businesses need to optimize their data backups.
Fortunately, backup deduplication can help reduce the overall load on data storage and provide other benefits. Learn more about backup deduplication and how it can help your organization’s storage utilization in this article.
What is backup deduplication?
Backup deduplication is a tool used to maximize storage utilization by removing any redundant data, thereby reducing the amount of data storage space used for backups. Backup deduplication is also known as data deduplication.
How does backup deduplication work?
Backup deduplication works by comparing new data with data that has already been backed up. It saves single copies of blocks of data and then compares those data blocks with new data that comes in. If any identical hashes or redundancies are detected among data sets, it will remove the duplicates.
When restoring data that has been deduplicated, the software uses pointers to reference the original copies of data where needed. These pointers take up a lot less space than the duplicates of data, which helps to reduce the amount of space needed for backups of all your data.
Data deduplication can be performed using two different strategies: file-level or block-level.
File-level deduplication compares new files with existing files. If the new file does not match any previous files, it will store and index it. If the same file has previously been backed up, the backup deduplication software will use a pointer to reference the original and avoid creating a duplicate file.
Block-level deduplication is even more granular and compares data blocks within a single file. However, it works in the same way that file-level deduplication does. New data blocks are saved, and identical data blocks are replaced with a pointer instead of creating a duplicate.
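To make the block-level idea concrete, here is a minimal Python sketch (our own illustration, not any particular vendor's implementation). It splits a file into fixed-size blocks, hashes each block, stores only blocks it has not seen before, and records a list of hash "pointers" so the file can be rebuilt later. Real products typically add variable-size chunking, compression, and stronger collision handling, but the principle is the same.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this sketch

def deduplicate(path, store, index):
    """Back up one file, keeping only a single copy of each unique block.

    store: dict mapping block hash -> block bytes (the one stored copy)
    index: dict mapping file path -> ordered list of block hashes ("pointers")
    """
    pointers = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:      # unseen block: store one copy
                store[digest] = block
            pointers.append(digest)      # duplicate blocks only add a small pointer
    index[path] = pointers

def restore(path, store, index, out_path):
    """Rebuild a backed-up file by following its pointers to the stored blocks."""
    with open(out_path, "wb") as out:
        for digest in index[path]:
            out.write(store[digest])
```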
Why is backup deduplication important?
There are four major reasons why data deduplication is important for organizations, which are:
1) Reduced storage utilization
This is the main reason backup deduplication is performed. Since backup deduplication removes any data redundancies, you are left with single copies of all your data. This decreases the amount of overall data, which reduces the storage utilization of your organization.
2) Lower costs
Reduced storage utilization leads to lower costs. When your backups require less storage space, your company can spend less on data storage or data maintenance and save money for the organization.
3) Decreased bandwidth required
The amount of bandwidth used depends largely on the amount of data you backup or transfer back to your endpoints. Backup deduplication removes extra copies of data so that all data is unique. This decreases the total amount of data being transferred over the network, which optimizes your network efficiency.
4) Faster recovery
Whether a single file is lost or the entire data set needs to be restored, data deduplication enables a faster recovery overall. Because each version or point of data is saved only once, it allows you to quickly access and restore the data. This also helps to support business continuity.
If an entire disaster recovery needs to be performed, it will also be completed much faster due to the smaller amount of storage. Duplicates of data won’t need to be a concern because backup deduplication will have already been executed.
Backup deduplication, when used with available types of backup software, can be an essential component in your organization's IT environment. It effectively reduces the amount of data being stored without losing essential pieces or compromising on quality. Backup deduplication technology helps your company to be prepared for any small problem or big disaster that comes.
Back up your organizational data with Ninja Data Protection. It contains a variety of storage and restore options, along with block-level backup and compression to reduce overall storage. Sign up for a free trial today. | <urn:uuid:6eaf539a-8e4b-4aad-a804-3848c7737713> | CC-MAIN-2022-40 | https://www.ninjaone.com/blog/backup-deduplication-overview-msp-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00196.warc.gz | en | 0.91282 | 877 | 2.875 | 3 |
We hear the term “machine learning” a lot these days (usually in the context of predictive analysis and artificial intelligence), but machine learning has actually been a field of its own for several decades. Only recently have we been able to really take advantage of machine learning on a broad scale thanks to modern advancements in computing power. But how does machine learning actually work? The answer is simple: algorithms.
Machine learning is a type of artificial intelligence (AI) where computers can essentially learn concepts on their own without being programmed. These are computer programmes that alter their “thinking” (or output) once exposed to new data. In order for machine learning to take place, algorithms are needed. Algorithms are put into the computer and give it rules to follow when dissecting data.
Machine learning algorithms are often used in predictive analysis. In business, predictive analysis can be used to tell the business what is most likely to happen in the future. For example, with predictive algorithms, an online T-shirt retailer can use present-day data to predict how many T-shirts they will sell next month.
Regression or Classification
While machine learning algorithms can be used for other purposes, we are going to focus on prediction in this guide. Prediction is a process where output variables can be estimated based on input variables. For example, if we input characteristics of a certain house, we can predict the sale price.
Prediction problems are divided into two main categories:
- Regression Problems: The variable we are trying to predict is numerical (e.g., the price of a house)
- Classification Problems: The variable we are trying to predict is a “Yes/No” answer (e.g., whether a certain piece of equipment will experience a mechanical failure)
Now that we’ve covered what machine learning can do in terms of predictions, we can discuss the machine learning algorithms, which come in three groups: linear models, tree-based models, and neural networks.
What are Linear Model Algorithms
A linear model uses a simple formula to find a “best fit” line through a set of data points. You find the variable you want to predict (for example, how long it will take to bake a cake) through an equation of variables you know (for example, the ingredients). In order to find the prediction, we input the variables we know to get our answer. In other words, to find how long it will take for the cake to bake, we simply input the ingredients.
For example, to bake our cake, the analysis gives us this equation: t = 0.5x + 0.25y, where t = the time it takes the bake the cake, x = the weight of the cake batter, and y = 1 for chocolate cake and 0 for non-chocolate cake. So let’s say we have 1kg of cake batter and we want a chocolate cake, we input our numbers to form this equation: t = 0.5(1) + (0.25)(1) = 0.75 or 45 minutes.
There are different forms of linear model algorithms, and we’re going to discuss linear regression and logistic regression.
Linear regression, also known as “least squares regression,” is the most standard form of linear model. For regression problems (the variable we are trying to predict is numerical), linear regression is the simplest linear model.
Logistic regression is simply the adaptation of linear regression to classification problems (the variable we are trying to predict is a “Yes/No” answer). Logistic regression is very good for classification problems because of its shape.
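As a rough illustration of both models, here is a short Python sketch using scikit-learn; the numbers are invented purely for the example and are not from any real data set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression problem: predict a numeric value (say, a house price in $1000s)
# from variables we know (floor area in m^2 and number of rooms).
X = np.array([[50, 2], [80, 3], [120, 4], [200, 5]])
prices = np.array([150, 220, 310, 480])

reg = LinearRegression().fit(X, prices)
print(reg.predict([[100, 3]]))            # estimated price for an unseen house

# Classification problem: predict a Yes/No answer (say, will a machine fail?)
labels = np.array([0, 0, 1, 1])           # 0 = "No", 1 = "Yes"

clf = LogisticRegression().fit(X, labels)
print(clf.predict([[100, 3]]))            # predicted class
print(clf.predict_proba([[100, 3]]))      # probability of each class
```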
Drawbacks of Linear Regression and Logistic Regression
Both linear regression and logistic regression have the same drawbacks. Both have the tendency to “overfit,” which means the model adapts too exactly to the data at the expense of the ability to generalise to previously unseen data. Because of that, both models are often “regularised,” which means they have certain penalties to prevent overfit. Another drawback of linear models is that, since they’re so simple, they tend to have trouble predicting more complex behaviours.
What Are Tree-Based Models
Tree-based models help explore a data set and visualise decision rules for prediction. When you hear about tree-based models, visualise decision trees or a sequence of branching operations. Tree-based models are highly accurate, stable, and are easier to interpret. As opposed to linear models, they can map non-linear relationships to problem solve.
A decision tree is a graph that uses the branching method to show each possible outcome of a decision. For example, if you want to order a salad that includes lettuce, toppings, and dressing, a decision tree can map all the possible outcomes (or varieties of salads you could end up with).
To create or train a decision tree, we take the data that we used to train the model and find which attributes best split the train set with regards to the target.
For example, a decision tree can be used in credit card fraud detection. We would find the attribute that best predicts the risk of fraud is the purchase amount (for example that someone with the credit card has made a very large purchase). This could be the first split (or branching off) – those cards that have unusually high purchases and those that do not. Then we use the second best attribute (for example, that the credit card is often used) to create the next split. We can then continue on until we have enough attributes to satisfy our needs.
A random forest is the average of many decision trees, each of which is trained with a random sample of the data. Each single tree in the forest is weaker than a full decision tree, but by putting them all together, we get better overall performance thanks to diversity.
Random forest is a very popular algorithm in machine learning today. It is very easy to train (or create), and it tends to perform well. Its downside is that it can be slow to output predictions relative to other algorithms, so you might not use it when you need lightning-fast predictions.
Gradient boosting, like random forest, is also made from “weak” decision trees. The big difference is that in gradient boosting, the trees are trained one after another. Each subsequent tree is trained primarily with data that had been incorrectly identified by previous trees. This allows gradient boost to focus less on the easy-to-predict cases and more on difficult cases.
Gradient boosting is also pretty fast to train and performs very well. However, small changes in the training data set can create radical changes in the model, so it may not produce the most explainable results.
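The tree-based models described above can be tried through the same scikit-learn interface. The sketch below fits a single decision tree, a random forest, and a gradient-boosted ensemble to the same toy fraud data so you can compare their predictions; again, the numbers are made up for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Toy fraud-detection data: [purchase_amount, purchases_per_day]
X = np.array([[20, 1], [35, 2], [50, 1], [900, 5],
              [1200, 7], [40, 2], [1500, 9], [30, 1]])
y = np.array([0, 0, 0, 1, 1, 0, 1, 0])    # 1 = flagged as fraud

models = {
    "decision tree":     DecisionTreeClassifier(max_depth=3),
    "random forest":     RandomForestClassifier(n_estimators=100),
    "gradient boosting": GradientBoostingClassifier(n_estimators=100),
}

suspicious_purchase = [[1000, 6]]
for name, model in models.items():
    model.fit(X, y)
    print(name, "->", model.predict(suspicious_purchase))
```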
What Are Neural Networks
Neural networks in biology are interconnected neurons that exchange messages with each other. This idea has now been adapted to the world of machine learning and is called artificial neural networks (ANN). The concept of deep learning, which is a word that pops up often, is just several layers of artificial neural networks put one after the other.
ANNs are a family of models that are taught to adopt cognitive skills to function like the human brain. No other algorithms can handle extremely complex tasks, such as image recognition, as well as neural networks can. However, just like the human brain, it takes a very long time to train the model, and it requires a lot of power (just think about how much we eat to keep our brains working).
What are IPv6 Summary Routes?
If you're trying to understand IPv6 summary routes, think of them like looking for something in a grocery store. Maybe you want to make something uncommon like miso soup and you don't know where to find the miso. You ask a clerk where you can find miso, but the clerk doesn't even know what miso is. The clerk might not know what it is, but they tell you where to find international ingredients: Aisle 4. Once there, you navigate aisle 4's shelves and sections, you narrow down to Asian ingredients, then Japanese ingredients and, finally, miso.
IPv6 summary routes do something similar for packets navigating networks. Networks with millions of devices have way too many paths for every router to know exactly how to find every device. So instead, routers know which general direction a packet should head in. Obviously, the analogy to the grocery store isn't perfect, because in a network there would be routers along the way that have increasingly specific knowledge of how to find specific devices. That said, in this blog post, we'll go into a bit more detail on how IPv6 summary routes work, and how to configure one.
What are Summary Routes?
Quick Definition: A summary route is the combination of multiple IP addresses for the purposes of reducing memory load. By default, individual IP addresses get advertised by routers and added to routing tables, but that takes up space and memory. Route summarization is a process of bundling addresses and advertising them together with a shorter, less-specific subnet mask. The nature of summarizing routes means it can only be applied to contiguous addresses.
An Overview of IPv6 Summary Routes [VIDEO]
In this video, Keith Barker covers how to implement a summary route in an OSPF Network with IPv6. Learn what a summarization is and reasons why you should consider implementing one. Keith follows this up with a walkthrough of how to create a summary route, then verify that it's been successful.
Why are IPv6 Summary Routes Necessary?
If you think back to our grocery store example, you might think that the clerk's answer wasn't helpful. If you're in a hurry for your miso, you might think that being told,"try over there" is just slowing you down. But imagine the alternative: the clerk knows the exact location of every product in the store. That could lead to long lines of every customer asking the clerk where to find every ingredient they came looking for. You'd never get your question answered in time.
Instead, the clerk has a general sense of where to start your search. And maybe once you get to the right aisle, there's someone stocking the shelves who has specific knowledge of that aisle. Routers have similar jobs, and they're not just dealing with one store — they could be interfacing with millions of networks, each with millions of their own devices. That's far too much information for all devices everywhere to store. Summary routes get each packet moving in the right direction, and they can get increasingly specific instructions as they go along.
How to Configure IPv6 Summary Routes
In order to describe how an IPv6 summary route gets configured, we have to first visualize a hypothetical network. We recommend using network virtualization for practice. We're about to explain how to configure IPv6 summary routes, but first we should have a strong idea of the network destinations we'll be summarizing.
Obviously, we're not going to simulate an entire IPv6 network. On an average IPv6 network, we could have millions of networks. But in a real-world environment, to avoid having millions and millions of routes, we can have summary routes that get us in the right direction. And we can simulate some of the top-level decisions those routers have to make with our hypothetical network.
The addressing schema for our overarching network is 2001:db8:6783:X::/64. We have a backbone connection between multiple areas which we're calling Area 0. Area 0's router is called R1. One of the areas that connects to Area 0 is Area 45. And that's where we'll be focusing. Area 45's connection to Area 0 is by way of an Area Border Router (ABR) we'll call R2.
Inside of Area 45 we have two routers, R4 and R5. Under them are two networks: Subnet 45A and Subnet 45B, respectively. Their actual addresses are 2001:db8:6783:45A::/64 and 2001:db8:6783:45B::/64.
Now that you're imagining that network, go further and imagine that each area represents thousands or even millions of devices. Just try and imagine what Area 0's router, R1, would have to do if it was responsible for getting every packet to its specific destination.
In just a moment, we'll take a look at R1's actual routing table. We'll see that as it stands currently, it has the full /64 routes to the subnets behind R4 and R5. That means that R1 is taking up valuable routing table space and processing power keeping detailed instructions on how to reach two subnets that are a number of hops away.
How to View a Router's Routing Table
To see the routing table before we configure IPv6 summary routes, we'll need to navigate to R1's console. Again, we encourage you to set up a virtualized network of your own and actually go to your router's configuration console. Once there, type:
show ipv6 route ospf | inc 45A|45B
What we've typed there is a small filter. The section before the first pipe: "show ipv6 route ospf" will display all the routes in that router's table. The section after that first pipe that reads "inc 45A|45B" is an include which instructs the output to filter for only those lines which include "45A" and "45B".
When we type that in, it works: you should be seeing a table of routes down to 45A and 45B. What we see is two inter-area OSPF routes that have been learned. They're inter-area OSPF routes because the routes were sourced from Area 45 and R1 is in the backbone.
But the key thing to look at is that they're both /64s. The routing table is holding the full prefix for each subnet, even though they're numerically adjacent and follow nearly the exact same path. That's what we want to fix by configuring IPv6 summary routes.
What Happens During IPv6 Route Summarization
The IPv6 route summarization happens when we tell R2 — remember that R2 is an Area Border Router (ABR) — to advertise a summary to those destinations rather than their individual detailed routes. The work happens on R2. The ABR holds back information regarding those two /64s and advertises only a summary.
It's worth spending a little bit of space going into the bits of the addresses themselves. Let's remind ourselves of the two subnets' IP addresses: 2001:db8:6783:45A::/64 and 2001:db8:6783:45B::/64.
They're identical addresses except for the "45A" and "45B". In case you've forgotten your hexadecimal, remember that in those addresses, "A" and "B" don't stand for one bit of data. They stand for 4 bits. A = 1010 and B = 1011.
What that means is that everything that comes before the final "0" in A's address and "1" in B's can be summarized. That's the only bit of data that's different between the two.
So, the summary we're going to have our Area Border Router send to our backbone router will be 2001:0DB8:6783:045A::/63. 63 indicates that the high order 63 bits are in common. The most important part for this discussion is that this is the appropriate summary that R2 should use to summarize the 45A and 45B subnetworks.
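If you'd like to double-check a summary like this before touching a router, Python's standard ipaddress module can do the prefix math for you. This is just a verification aid on a workstation, not part of any router configuration:

```python
import ipaddress

subnet_a = ipaddress.ip_network("2001:db8:6783:45a::/64")
subnet_b = ipaddress.ip_network("2001:db8:6783:45b::/64")

# collapse_addresses merges contiguous networks into the shortest covering prefixes
summary = list(ipaddress.collapse_addresses([subnet_a, subnet_b]))
print(summary)  # [IPv6Network('2001:db8:6783:45a::/63')]

# Both /64s sit inside the /63, so one summary route covers them both
print(subnet_a.subnet_of(summary[0]), subnet_b.subnet_of(summary[0]))  # True True
```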
How to Configure IPv6 Summary Routes
Configuring IPv6 summary routes is surprisingly easy. It's as simple as instructing your ABR to advertise a shorter route than the full address. In our case, we'll go to R2's console and inform it what summary address to use for Area 45, press enter, and it'll be done. After that, R2 will advertise the summary and suppress more detailed routes. R2 has to do the heavy lifting here because all the configuration for the summary is on R2 for those two networks.
In our hypothetical network, R1 will have one less route to keep track of. Before writing the summary, R1 was listing addresses to both networks and after it will have one summary route that's referring to both the detailed networks. That might seem like a small change, but played out on a much, much larger scale, this could be saving hundreds and thousands of routing table lines.
Go to R2's console to configure your IPv6 summary routes. Start by going to configuration mode:

configure terminal
It's never a bad idea to double-check that the router you're about to manipulate is, in fact, the right router in your network:
do show ipv6 ospf int brief
The table that outputs will help you to confirm which interfaces are connected to what, and whether it's the device you want to configure. This is a good "sanity check", since it's hard to do a summary for an Area you're not connected to.
From configuration mode on R2's console, head into IPv6 configuration mode for OSPF by typing:
ipv6 router ospf 1
Once there, you'll specify a few things. First, the area you want to do the summarization on. Second, what the summary will be. Do that by typing:
area 45 range 2001:db8:6783:45a::/63
You may be surprised to learn that's all it takes to configure IPv6 summary routes. Checking your work and verifying your configuration is just as easy.
Verifying That Your IPv6 Summary Routes Work
Taking the size of your network into account, OSPF should already be converted. That means you can go back to R1's console and give it the same command as before:
show ipv6 route ospf | inc 45A|45B
If everything has gone well, this should show you that you now have one inter-area route. Not only that, it's no longer 64 bits, it's 63. The best part is this: that /63 correctly represents both subnets. And having route summarizations doesn't mean you've lost connectivity to devices down in that network.
To check whether or not that's the case, you can do a ping to a router on that network. We happen to know that R5 has an IP address on that subnet. It's :5 in our setup. Ping that on R1's console:

ping 2001:db8:6783:45b::5
Not only did that work for us, but it could also be verified with a trace:

traceroute 2001:db8:6783:45b::5
The output of that trace can help verify just what's happening from R1 to R2 to R5 all the way down to that address. If your trace looks like ours, you should see that R1 is sourcing, it's going to R2 and R5, and that's indicative all the way down of the IP addressing schema that we're using.
In just a few lines, we created and verified a summary route that represented two more detailed routes. Now, just saving one routing entry on a local topology isn't a big deal. But in a larger environment, like the internet, with IPv6, we could have a single route that represents 10s of thousands of more detailed networks. And the concept of summarization applies to IPv4 and IPv6.
If you're interested in other routing topics like this one, there's a lot more to learn. CBT Nuggets' TCP/IP IPv6 course covers routing essentials and objectives you'll find on CompTIA's A+ certification. There, you'll find over 24 hours of training with 54 videos that explain routing and networking fundamentals in detail.
Text analysis extracts machine-readable data from unstructured or semi-structured text in order to mine insight about trends and user sentiment. To accomplish this, it uses artificial intelligence, machine learning and advanced data analytics techniques.
The world is experiencing a rapid, exponential increase in information, especially semi-structured and unstructured data: think social media posts, customer emails, transaction records, survey questions, news articles and research reports, to name just a few. All these sources contain text that can be a rich source of insights for businesses, but this overabundance of information is both positive, creating endless opportunities in a data-driven economy, and negative, requiring significant resources and time to collect, study and make sense of it all.
Text Analysis: An Overview
Text analysis helps enterprises address this challenge.
Text analysis aims to overcome the obscurity of human language and achieve transparency for a specific domain. Using various techniques, text analysis solutions analyze unstructured data in all kinds of texts in order to identify and draw out high-quality information that will prove helpful in various scenarios, from data points to key ideas or concepts.
A form of qualitative analysis, text analysis can be used to perform a multitude of tasks such as sentiment analysis, named entity recognition, relation extraction and text classification, allowing users to identify and extract important information from intricate patterns in unstructured text, then transform it into structured data.
Using text analysis in business marketing can help companies summarize opinions about products and services. When used to analyze medical records, it can connect symptoms with the most appropriate treatment.
Text Analysis vs. Text Mining vs. Text Analytics
Many people mistakenly believe text mining and text analysis are different processes. In fact, both terms refer to an identical process and often are used interchangeably to explain the method.
On the other hand, while text analysis delivers qualitative results, text analytics delivers only quantitative results. When a machine performs text analysis, it presents important information based on the text. However, when it conducts text analytics, it looks for patterns across thousands of texts, usually yielding results in the form of measurable data presented through graphs and tables.
For example, imagine you want to know the outcomes of each support ticket handled by your customer service team. By analyzing the text from the ticket, you can see the entirety of the results in order to determine if they were positive or negative. For this, you must perform text analysis. But if you want to know how many tickets were solved and how fast, you would need text analytics.
What is Natural Language Processing (NLP)?
Natural Language Processing (NLP) is among the first technologies to give computers the capacity to extract meaning from human language. A form of artificial intelligence, NLP aims to teach computers to understand the meaning of a sentence or text in the same way humans do; in effect, NLP helps machines “read” text by mimicking the human ability to learn a language.
Over the past decade, this discipline has improved significantly, and is found today in many widely used applications. Perhaps the most widespread would be digital voice assistants such as Siri, Alexa, and Google. With the help of NLP, these digital assistants can understand and respond to user requests.
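To give a small taste of what an NLP library can do, the sketch below uses the open-source spaCy library for named entity recognition. It assumes spaCy is installed and that the small English model has been downloaded with `python -m spacy download en_core_web_sm`; the sentence is invented for the example.

```python
import spacy

# Load a small pretrained English pipeline (tokenizer, tagger, parser, NER)
nlp = spacy.load("en_core_web_sm")

doc = nlp("Google opened a new office in Paris in March 2021.")

# Named entity recognition: which real-world things does the text mention?
for ent in doc.ents:
    print(ent.text, "->", ent.label_)  # e.g. "Google -> ORG", "Paris -> GPE", "March 2021 -> DATE"
```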
How to use Text Analysis for your Business
There are many ways companies can take advantage of unstructured data through the use of text analysis and NLP. Much can be inferred when texts are in easy-to-automate blocks, providing insight into various aspects of a business including marketing, product development and business intelligence.
Additionally, analyzing texts to capture data can help support various tasks including:
- content management
- semantic search
- content recommendation
- regulatory compliance
Text analysis can also be used by businesses to discover patterns, find keywords, and derive other valuable information, such as:
- Market research through finding what consumers value the most
- Summarizing ideas from unstructured data such as web pages, blogs, PDF files and plain text
- Removing anomalies from data through cleaning and pre-processing
- Converting information from unstructured to structured
- Evaluating data patterns leading to enhanced decision-making
Text Analysis Techniques
A technique that measures the most frequently occurring words or concepts in a given text using the numerical statistic TF-IDF (short for Term Frequency-Inverse Document Frequency). This is often used to analyze the words or expressions customers use in conversations. For example, if the word “slow” appears most often in negative tickets, this might suggest there are issues to address with the response times of your client service team.
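Here is a minimal sketch of this technique using scikit-learn's TfidfVectorizer; the ticket texts are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

tickets = [
    "The app is slow and keeps crashing",
    "Checkout is slow, payment takes forever",
    "Great support, issue resolved quickly",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(tickets)            # rows = tickets, columns = terms

weights = tfidf.mean(axis=0).A1                      # average TF-IDF weight per term
vocab = vectorizer.get_feature_names_out()           # requires scikit-learn >= 1.0
for term, weight in sorted(zip(vocab, weights), key=lambda x: -x[1])[:5]:
    print(f"{term}: {weight:.3f}")                   # "slow" should rank near the top
```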
Word Sense Disambiguation
The process of differentiating words that have more than one meaning – a major challenge in NLP as many words can be interpreted several ways depending on context. For example, if the word “set” is found in a text, is it referring to the noun or the verb?
A technique used to create a compressed version of a specific text. This is done by reading multiple text sources at once and condensing information into a concise format.
Information is extracted from huge chunks of data. Entities and attributes from the data are identified. Text is analyzed and the relevant information is structured and stored for future use.
Extracting relevant patterns based on sets of phrases or words. This technique is used to observe and record user behavior for example.
Texts are evaluated to identify topics, and assigned to business-relevant categories based on their content.
A text-mining technique that can expand categorization by identifying intrinsic structures within texts and sorting multiple texts into relevant clusters for evaluation.
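Building on the TF-IDF sketch above, clustering can be layered on top of the document vectors with a few more lines (again, the texts are purely illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

texts = [
    "Refund request for a damaged product",
    "Product arrived broken, want my money back",
    "How do I reset my account password?",
    "Cannot log in, password reset email never arrives",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Group the texts into two clusters based on the words they share
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tfidf)
for text, cluster in zip(texts, kmeans.labels_):
    print(cluster, "-", text)   # refund-related and login-related texts should separate
```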
Text Analysis: Today and Tomorrow
Text Analysis is one of the most far-reaching enterprise technologies of the digital age: it helps companies detect business and product problems and address them before they grow into larger issues that damage sales, and it gives them insight into their market, customers and competitors.
Today we're seeing rapidly improving text-mining software that can be used to create large records of structured and actionable information. These datasets can be extracted from internal or external sources and analyzed for networking, lead generation or intelligence-gathering purposes, like hiring a computer to act as your intelligence analyst or researcher, only with greater speed and accuracy.
Like all technologies related to data science, text analysis is on a trajectory of exponential growth and innovation, enabling more businesses in almost any industry to make data-driven decisions and exploit the data-driven economy. Research suggests the text mining market is growing at a rate of over 18 percent per year, and could become a $16.85bn industry by 2027. | <urn:uuid:788ac6ac-603b-44e4-ab8a-8a610e6809dc> | CC-MAIN-2022-40 | https://www.datamation.com/artificial-intelligence/what-is-text-analysis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00397.warc.gz | en | 0.916671 | 1,378 | 3.125 | 3 |
The changing nature of society and business, with the rise of remote access and the explosion of data, has resulted in the need to pay increasing attention to security.
An Ever-Increasing Risk of Cyber Crime
Internet security solution provider, McAfee has estimated that the likely annual cost to the global economy from cybercrime is more than $400 billion. Companies and governments globally are facing an ever-increasing risk of cybercrime. It is not surprising, therefore, that the cybersecurity market is a fast expanding market and is expected to grow from $71 billion in 2014 to $155+ billion by 2019, according to the latest forecast from Gartner.
Cybercrime is a hot topic, as demonstrated by recent high profile cases. In fact, the largest bank robbery of all time was reported in 2015, with $300 million stolen from banks in over 30 countries as the result of a hack.
In February 2015, the US health insurer Anthem suffered a data breach of nearly 80 million records, including personal information such as names, social security numbers, dates of birth, and other sensitive details. Unfortunately, these are not isolated incidents. Organizations need to get serious about protecting their data or they risk fines and loss of trust, which can lead to bankruptcy.
In our modern working environment, we move information and data carrying devices around, continually exposing them to the risk of physical theft and digital breach. Moreover, we continually mingle our private and business information facilities. One of the key findings from IBM’s 2014 Cyber Security Intelligence Index is that 95 percent of all security incidents involve human error. Many of these incidents are successful security attacks from external attackers who prey on human weakness in order to lure insiders within organizations to unwittingly provide them with access to sensitive information. It is essential that individuals at all levels of the organization are aware of security risks and how to protect an organization’s valuable information.
Moreover, we can look at security from a software development and testing perspective, since building security measures into the programming and testing phases helps improve resilience to cyber-attack.
Minimizing Corporate Risk and Heightening Resilience to Cyber Attack
Increasing the awareness and competences of professionals in the area of Security will help prepare organizations to take optimal advantage of the opportunities offered by new and innovative ways of doing business, whilst minimizing corporate risk and heightening resilience to cyber-attack. This is why EXIN is rapidly expanding its portfolio of certifications in the Security domain, and related fields like data protection and business continuity.
EXIN’s Cyber Crime certification covers what cybercrime actually is, how it can be prevented and also how to limit the damage in case of an attack. Because people can also be the strongest link in the organization’s resilience, EXIN’s Information Security Management program includes certification at all levels, aimed not only at those managing information but at all those who process information, so that security awareness spreads to individuals at all levels of the organization.
EXIN’s Secure Programming certification provides evidence that the ICT professional knows how to build security measures into the software during the development phase before the software ever goes into the live environment. Paying attention in this way to the prevention of cyber attacks will ensure that the organization is not a sitting duck for cybercriminals.
Similarly, EXIN’s Ethical Hacking certification is proof that the ICT professional knows how to test software and web applications for vulnerabilities using the same methods applied by hackers – which is the only way to truly test for resilience to cyber-attack.
The certifications within the EXIN (Cyber) Security and Governance Portfolio are based on the e-Competence Framework (e-CF) – a quality ensuring and objective framework of world-recognized standards for measuring professional competences, of which EXIN is the co-initiator.
For further information about EXIN, visit www.exin.com.
About the author
In her role as Program Manager, Rita is responsible for managing all aspects relating to the technical content and quality of EXIN programs. Rita has a Master’s degree in Educational Psychology, and is a certified ITIL Practitioner, ISO 20000 Consultant, and Information Security Practitioner, with extensive experience in both the IT and the educational sector. She has been heavily involved in the development of EXIN’s Agile Scrum and DevOps programs. | <urn:uuid:5e9947ba-3f2c-4aad-9982-d58d859235ba> | CC-MAIN-2022-40 | https://www.itpreneurs.com/blog/is-your-cyber-security-training-portfolio-keeping-up-with-the-pace-of-change/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00397.warc.gz | en | 0.941171 | 903 | 2.578125 | 3 |
Some time ago I came across this while reading a book – SQL Server 2008 Step by Step (Mike Hotek, Microsoft Press) – and to date I think it’s the best description of normalization that I’ve ever seen… it just took me this long to get around to obtaining permission to post the excerpt!
I don’t have much further to say on it, other than many thanks to the O’Reilly permissions people, so without further ado…
Entire books have been written and multi-week courses taught about database design. In all of this material, you will find discussions of first, second, and third normal forms along with building logical and physical data models. You could spend significant amounts of time learning about metadata and data modeling tools. Lost in all of this material is the simple fact that tables have to be created to support an application and the people creating the tables have more important things to worry about than which normal form a database is in or whether they remembered to build a logical model and render a physical model from the logical model.
A database in the real world is not going to meet any theoretical designs, no matter how you try to force a square peg into a round hole.
Database design is actually a very simple process, once you stop over-thinking what you are doing. The process of designing a database can be summed up in one simple sentence: “Put stuff where it belongs.”
Boiling down these tens of thousands of pages of database design material into a single sentence will certainly have some people turning purple, so let’s investigate this simple assertion a little more closely.
If you were to design a database that will store customers, customer orders, products and the products that a customer ordered, the process of outlining a set of tables is very straightforward. Our customers can have a first name, last name and an address. We now have a table named Customer with three columns of data. However, if you want to utilize the address to ship an order to, you will need to break the address into its component parts of a location, city, state or province, and a postal code. If you only allowed one address for a customer, the address information would go into the customer table. However, if you wanted to be able to store multiple addresses for a customer, you now need a second table that might be called CustomerAddress. If a customer is only allowed to place a single order, then the order information would go into the customer table. However, if a customer is allowed to place more than one order, you would want to split the orders into a separate table that might be called Order. If an order can be composed of more than one item, you would want to add a table that might be called OrderDetails to hold the multiple items for a given order. We could follow this logic through all of the pieces of data that you would want to store and in the end, you will have designed a database by applying one simple principle: “Put stuff where it belongs.”
Microsoft® SQL Server® 2008 Step by Step by Mike Hotek published by Microsoft Press, A Division of Microsoft Corporation Copyright © 2009 Mike Hotek. All rights reserved. Used with permission. | <urn:uuid:f9ecfe07-61ef-42f7-b38c-6c64428971cc> | CC-MAIN-2022-40 | https://dymeng.com/tag/normalization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00397.warc.gz | en | 0.949827 | 662 | 2.859375 | 3 |
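To make the excerpt concrete, here is a minimal sketch of the four tables Hotek describes, expressed as SQLite DDL driven from Python. The column names and types are my own assumptions for illustration; they are not taken from the book.

import sqlite3

# In-memory database purely for illustration; swap in a real connection as needed.
conn = sqlite3.connect(":memory:")

# "Put stuff where it belongs": one table per kind of thing, with a child table
# for anything a parent is allowed to have more than one of.
conn.executescript('''
CREATE TABLE Customer (
    CustomerID  INTEGER PRIMARY KEY,
    FirstName   TEXT NOT NULL,
    LastName    TEXT NOT NULL
);
CREATE TABLE CustomerAddress (           -- a customer may have many addresses
    AddressID   INTEGER PRIMARY KEY,
    CustomerID  INTEGER NOT NULL REFERENCES Customer(CustomerID),
    Location    TEXT, City TEXT, StateProvince TEXT, PostalCode TEXT
);
CREATE TABLE "Order" (                   -- a customer may place many orders
    OrderID     INTEGER PRIMARY KEY,
    CustomerID  INTEGER NOT NULL REFERENCES Customer(CustomerID),
    OrderDate   TEXT
);
CREATE TABLE OrderDetails (              -- an order may contain many items
    OrderDetailID INTEGER PRIMARY KEY,
    OrderID       INTEGER NOT NULL REFERENCES "Order"(OrderID),
    Product       TEXT NOT NULL,
    Quantity      INTEGER NOT NULL
);
''')
conn.close()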
The importance of data centers to the average citizen should not be underestimated. They are vital for even the most common daily function – from a simple internet search to a bank transaction. Their importance can even extend to, for example, the monitoring of the electricity delivered to one’s home.
But data centers are accused of being environmental villains due to their exorbitant consumption of energy, so reducing their environmental impact is vital. In this context, photovoltaic generation is an interesting alternative to free cooling, and especially suitable for tropical regions such as Brazil.
Using the wrong metric
Ecological footprint (ecofootprint), according to WWF Global, measures the impact of human activities in terms of the amount of biologically productive land and water required to produce the goods we consume and to assimilate the waste we produce.
Data center efficiency is usually rated by PUE, a parameter conceptualized by the US, the EU and Japan to establish a single metric to assess the energy efficiency of data centers. The concept is not new, because the relationship between useful energy and invested energy is used in many other processes.
The calculation of this indicator is based on the relationship between the energy consumption by the installation as a whole (total energy) and the energy consumption exclusively by IT equipment (IT energy). Like any indicator, PUE may be called into question, but it remains a useful metric nonetheless.
It did not take long for the cooling system to be considered the greatest enemy of PUE; as a consequence, cooling efficiency has become closely tied to reducing it. There is nothing more tempting than getting something for free; for instance, a data center that could potentially be cooled by nothing more than the forces of nature. Since this is not entirely possible, the solution is to cut a good percentage of the energy consumed by the central chilled-water plant by taking advantage of free cooling, thus decreasing PUE.
The largest data centers in the world are located in the northern hemisphere, which is not surprising since this is where the largest manufacturers of infrastructure equipment are located. So it is natural that the research and development centers of these manufacturers develop solutions that meet the climatic conditions of the hemisphere in which they operate. Any solution adopting free cooling as a way to optimize data center cooling will be much more effective the farther away from the equator, where the average annual temperatures are lower.
Any solution adopting free cooling to optimize a data center will be more effective farther away from the equator
However, the crucial aim of business is financial success, and this has led manufacturers and their sales departments to market these solutions as ideal ways to decrease PUE, regardless of their geographical position in the world. As a result, some engineers have done a real juggling act to justify the cost of acquisition and implementation of free cooling systems in countries with tropical climates – including Brazil.
Why not go solar?
Photovoltaic power generation is the conversion of sunlight energy into electricity using modules made of photovoltaic cells, which are mostly manufactured using silicon. The photovoltaic modules can be installed on top of a data center, with the advantage of helping reduce the thermal load on the building, since the shade caused by the modules reduces heat absorption, which would have been transferred to the internal environment.
Another important component of the photovoltaic generation system is the inverter. This equipment is responsible for transforming the direct current created by the photovoltaic modules into alternating current. The output of these devices will be connected to the energy provider, and for this reason the inverters are also responsible for synchronizing the alternating-current waveform they generate with the waveform of the external grid.
Simultaneously, the inverters have anti-islanding protection, which disconnects the generating system from the energy provider when the inverter identifies changes beyond predetermined voltage and frequency limits in the grid. Another extremely important function of the inverters is tracking the maximum power point (MPP) of the photovoltaic generator, which shifts with solar irradiance and operating temperature. The inverter must follow this point as it moves in order to make the best use of the photovoltaic modules.
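As a rough illustration of how an inverter chases the maximum power point, here is a sketch of the classic perturb-and-observe method in Python. The panel curve, step size and starting voltage are invented for the example; real MPPT firmware is considerably more careful.

def perturb_and_observe(power_at, v_start=30.0, step=0.2, iterations=200):
    # Nudge the operating voltage and keep the nudge direction while power improves.
    v = v_start
    direction = 1.0
    last_p = power_at(v)
    for _ in range(iterations):
        v += direction * step
        p = power_at(v)
        if p < last_p:            # power dropped, so reverse the perturbation
            direction = -direction
        last_p = p
    return v, last_p

# Toy module curve (assumption): power peaks around 36 V on this fictional panel.
toy_panel = lambda v: max(0.0, -0.5 * (v - 36.0) ** 2 + 250.0)

v_mpp, p_mpp = perturb_and_observe(toy_panel)
print(f"Tracked MPP near {v_mpp:.1f} V, {p_mpp:.1f} W")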
Free energy, not free cooling
The concept of free energy emerges as an alternative to free cooling, on the grounds that it is more suitable to the Brazilian reality and to other countries with similar weather. It refers to power generation using any renewable energy source obtained directly from nature through an environmentally sustainable process. This solution, like free cooling, aims to improve data center effectiveness and reduce the ecofootprint of data centers in general. Distributed generation, and the ability of such mini-generation systems to interact with the energy provider, is what makes free energy a feasible concept.
Taking photovoltaic power as an example of free energy, when data centers are transformed into generation plants they can feed this energy into the grid and offset their own consumption – not only from an energy standpoint but also an economic one. Once the concept of free energy is settled, it feeds into another new term – EcoPUE – a more environmentally friendly and sustainable way of calculating PUE, in which the energy generated by the photovoltaic system is subtracted from the data center's consumption. This renewable generated energy is called 'free energy.'
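In code form, the idea reads roughly as follows. The classic PUE ratio is standard; the EcoPUE variant is my interpretation of the description above (subtracting on-site photovoltaic generation from the facility total), so treat that exact formula, and all the figures, as assumptions.

def pue(total_facility_kwh, it_kwh):
    # Classic metric: total facility energy divided by IT equipment energy.
    return total_facility_kwh / it_kwh

def eco_pue(total_facility_kwh, it_kwh, pv_generated_kwh):
    # Interpretation of the article: credit the "free energy" produced on site.
    return (total_facility_kwh - pv_generated_kwh) / it_kwh

# Invented example: 1,800 MWh/year facility draw, 1,000 MWh of it for IT,
# and rooftop PV contributing 300 MWh/year.
print(pue(1800, 1000))            # 1.8
print(eco_pue(1800, 1000, 300))   # 1.5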
Use what works
The increased demand for processing and storage of data, together with the environmental problems caused by high energy consumption, is forcing data centers in Brazil, and elsewhere, to seek more technological solutions and become increasingly green, using energy more efficiently and sustainably while providing a quality service to customers. A combination of existing technology and techniques, along with new government legislation in Brazil, is now in place, and significant improvements have already been achieved. As an alternative to free cooling, the use of photovoltaics is increasingly being seen as a viable option in countries with a high solar radiation index – like Brazil – where renewable energy can be obtained for free from the natural resources available on the planet.
The concept of EcoPUE demonstrates that greater efficiency from a data center can be enforced with the use of photovoltaic generation, thus reducing their environmental footprint.
Paulo Cesar de Resende Pereira is director of Fox Engenharia e Consultoria in Brazil
If you would like to learn more about solar power register now for our “Powering Big Data with Big Solar” webinar. | <urn:uuid:c364cdf9-c8d0-441c-93d5-1529c791b64d> | CC-MAIN-2022-40 | https://www.datacenterdynamics.com/en/analysis/tropical-sites-need-solar-power-not-free-cooling/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00397.warc.gz | en | 0.939372 | 1,325 | 3.3125 | 3 |
Data collection is a systematic approach to collecting and measuring information from a variety of sources to get a complete and accurate picture of an area of interest. Data collection allows an organization to answer relevant questions, evaluate results and make predictions about future trends and probabilities.
Correct and systematic collection of data is essential to maintain the integrity of information for data-driven decision-making. There is much talk about Data Driven and other similar concepts, but the right approach is to talk about "a culture of data." We may collect data from thousands of mobile applications, website visits, loyalty programs, and online surveys to know our clients better, but for all of this we need a system that allows us to manage the data securely, applying accurate data governance and complying with laws and regulations.
We know that Big Data describes the voluminous amounts of structured, semi-structured and unstructured data collected by organizations. Because it takes a lot of time and money to load such data into a traditional relational database for analysis, new approaches to collecting and analyzing it are emerging: raw data is captured together with extended metadata and aggregated into a Data Lake, and from there machine learning and artificial intelligence programs use sophisticated algorithms to search for repeatable patterns.
The problem arises when there is no control or hierarchy in managing these data lakes, which sometimes grow steadily without really providing value.
Data Collection vs. Entropy
Logical Data Warehouse intermediate layer
The LDW technology inside Lyftrondata allows a new approach and new flexibility when it comes to managing a Data Lake; it is even possible to dispense with one altogether, because an LDW can query the source directly, in real time and without any intermediaries. Why build a data lake if we can develop and model directly against the data sources?
Lyftrondata allows a two-way connection: data flows directly from different sources to different targets. What Lyftrondata does is expose every data source through SQL, generating views of all data sources and allowing them to be combined, joined, mixed and compared. And that's not all; these views can be materialized within a Data Warehouse on-premises or in the cloud, or even within Apache Spark (which may be the most powerful and economical Data Warehouse in the world, even if it wasn't born for this use).
The integration of new data sources is a lengthy and costly process requiring data modeling, ETL custom development work and complete regression testing.
Traditional data models are often biased toward rigid, predefined questions and are unable to accommodate dynamic and ad-hoc data analysis processes. Unstructured and semi-structured data cannot be easily integrated. For this reason, the technology that drives Lyftrondata was born: the Logical Data Warehouse.
Tags: good stuff for Data Collection
Lyftrondata brings you the possibility of classifying data and building a data catalog with tags that enable simple data discovery and avoid repeated collection of the same data. Here are some other features that make Lyftrondata the perfect tool for data classification:
Logical data warehouse vs Traditional Data warehouse
With Lyftrondata, access to the collected data is instantaneous and in real time. LDW technology eliminates the need to copy and move data, although copying remains possible as an option if we need to correct the data or link it to other sources. In this way, any customer data can be linked to a customer profile in a CRM.
Lyftrondata's characteristics allow data to be unified in a single format that can be used by any tool. Lyftrondata transforms all data into SQL, so you work with a single data format at any time, without an ETL process.
For governance, Lyftrondata provides a single and unified security model with access rights, dynamic row-level security, and data masking and, most importantly, GDPR compliance.
Data Collection & Self Service BI
There has been a lot of controversy among industry experts over the ever-increasing trend of a new approach to BI where business information flows without being overly dependent on IT. Recently, Gartner and Barc researchers emphasised this new approach. Lyftrondata is the best tool for a connection between data collection and analysis without "technical" intermediation. There is no extra work for the CTO, because the source data is never touched. A Logical Data Warehouse proposes views and does not copy any data, avoiding tedious ETL processes.
Collecting data with Lyftrondata
Data Lake or Logical data warehouse?
Lyftrondata is a tool that allows you to create a Data Lake in minutes, and the data does not need to be updated because Lyftrondata consults it in real time from the source. If you cannot give up a Data Lake because of its use in machine learning environments, Lyftrondata can serve as an anti-corruption layer for the lake, providing controlled ingestion and data governance criteria that are less common in traditional Data Lakes.
We see here the characteristics of Lyftrondata LDW:
Lyftrondata provides logical data warehouses, data visualization and data analysis for agile business intelligence. Our commitment to innovation resulted in the development of Lyftrondata™ software platform that makes BI analytics, Cloud & Big Data work easier and faster.
If you have problems with multiple data sources, long development cycles for BI solutions, inflexibility to change, a lack of real-time data or underperforming reports, Lyftrondata offers a remedy:
Lyftrondata integrates perfectly into the Microsoft Azure cloud architecture, making it possible to orchestrate data easily and quickly without causing excessive transfer costs. Thanks to its pseudo-anonymization of sensitive data, it also helps contain private-cloud investment by making it safe to move data into the public cloud without risk.
While you may still think about AI as if it were a futuristic talking robot, we have news for you! As a matter of fact, Artificial Intelligence is showing up more and more in our day-to-day lives in many ways that we simply take for granted.
Artificial Intelligence technology has, in fact, been around for decades—granted, in designs that are a tad mundane compared to self-driving cars or facial-recognition tech. In truth, smart technology contributes so much behind the scenes, leaving you in awe of its actual effects on your daily lives. Keep in mind that AI software development is an extensive market, plus the technology is on a constant path of improvement.
This is probably the most obvious way through which AI touches our daily life. The algorithms managed by companies such as Amazon, Netflix, and Facebook collect data on your browsing manners and habits to market to you in a more efficient way. These algorithms are, in fact, the works of artificial intelligence since they derive your choices and produce predictions based on what they “learn” about you.
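Under the hood, many of these recommenders boil down to similarity arithmetic over behaviour histories. A deliberately tiny sketch, with made-up viewing data and none of the scale or sophistication of the real systems:

from math import sqrt

# Invented "watched" vectors: 1 means the user finished the title, 0 means not.
history = {
    "you":    [1, 1, 0, 0, 1],
    "user_a": [1, 1, 1, 0, 1],
    "user_b": [0, 0, 1, 1, 0],
}
titles = ["Show1", "Show2", "Show3", "Show4", "Show5"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Recommend whatever the most similar user watched that "you" have not.
best = max((u for u in history if u != "you"),
           key=lambda u: cosine(history["you"], history[u]))
suggestions = [t for t, mine, theirs in zip(titles, history["you"], history[best])
               if theirs and not mine]
print(best, suggestions)   # user_a ['Show3']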
More so, if you just so happen to use Google—just like everyone else—as your go-to search engine, you have probably noticed that Google offers suggestions for the questions you pose. These convenient predictive search suggestions surface based on data Google gathers about you as you surf the web, along with other resources such as your age, location, and other personal details.
From tailored product recommendations to query assistance on the web, your time navigating the web has just been cut short.
AI is becoming an integral part of the way we handle our health concerns. For instance, in early 2018, Google announced an eye-scanning technology that could use AI to recognize signs of heart problems. Moreover, in 2017 TIME magazine reported that machine learning technology has largely shaped the health world, ranging from research on genetics to apps that help identify depressive symptoms.
For a more “unnoticed” approach, smartwatches are currently playing a crucial role in determining various health-related problems. From constantly monitoring your heartbeat, reminding you to perform breathing exercises, to pushing you to complete the necessary steps throughout the day, smartwatches are looking out for you in more than one way.
Bots capable of chatting, reacting, and learning from what people tell them have proven reliable over the years. For example, Microsoft introduced its ever-evolving chatbot technology in 2016, letting users chat or interact with bots on any issue. Chatbots now run as the front line for customer service offices and other online businesses. So, the next time you log onto a website and a small chat box pops up in the corner, chances are that is a bot trying to reach out to you. Chatbots have the ability to handle simple commands and resolve quick issues. Bots use AI to assist customers in finding information, and the bot provider also profits as it can log customer usage data and market to consumers based on their distinct requirements.
Look at it this way: the next time you talk to Siri, Cortana, Alexa, or any other digital assistant, you are technically feeding data into an artificial intelligence system that will later use it to organize new choices and options for you. Another well-known feature used on a cellphone is Google Assistant, allowing users to issue voice-activated commands and searches. From receiving directions to your favorite lunch spot to asking about the weather for a weekend getaway, digital voice assistants are quickly becoming the assistant we did not know we needed.
This is one of the big ways AI influences our lives: the personal products created to serve us on our smartphones, bringing up that Korean restaurant we liked once and suggesting good local kickboxing places, are machine learning in action.
Also, can we talk about smart selfie improvements? iPhone users are much too familiar with these features when they switch to portrait mode while taking a photo—little do they know this is performed by using AI for perspective identification.
The creation of ‘fake news’ has rather become a significant ordeal over the past few years, and AI may be playing a role in it. Recent studies now show that algorithms are completely competent in generating fact-driven articles and news stories in limited periods, which in turn plays a significant role in decreasing the need for professional writers. However, this creates the unforeseen possibility of electronic “news shops” that may spam readers with false information. Artificial intelligence is growing year after year, making its ability to generate shockingly human-like content all the more likely. Language models such as GPT-3 can now produce entire articles on their own. Today, any individual, even with inadequate expertise, can effortlessly create fake images or videos using a computer or a mobile phone. Open-source software such as FaceSwap and DeepFaceLab have made the process easier for them. The future of AI in journalism will be interesting, especially when it comes to incorporating human creativity and accuracy with machine learning.
Artificial intelligence will keep developing and altering many of the ways in which we live our lives; however, it is essential not to take it as a new trend. AI has been part of everyday reality for many years now, but we are only recently starting to scratch the surface of its potentialities and the ways it could go awry. Now that you have seen what AI can genuinely do and may do in the future, can you even begin to imagine living with these benefits? | <urn:uuid:be167cf9-c6cc-40b1-94aa-2bb35515a3ee> | CC-MAIN-2022-40 | https://plat.ai/blog/5-ways-we-depend-on-ai-every-day-without-realizing-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00397.warc.gz | en | 0.958017 | 1,170 | 3.015625 | 3 |
We are social creatures, and now that our social circles are almost entirely digital, the mass migration of millions of voices, cultures, and values is starting to show up in both wonderful and not-so-wonderful ways. Our kids are watching it all unfold; absorbing what works and discarding what seems trifle or antiquated. The evidence of the good, the bad, and the dangerous is all around us, which serves as a reminder to teach our kids to bring some old school values into the ever-changing digital realm.
Five old school rules that still apply
1. Learn how to keep a secret. Your password is top secret. Your best friend’s secret crush is also private. That means that your child doesn’t share either with her friends, her boyfriend, or even a sibling. In an oversharing, chatty culture teaching our kids the importance of keeping secrets will keep both their digital privacy and friendships on track and secure.
2. Be a gracious guest. We’ve taught our children how to respect other people’s homes but have we taught them how to respect someone’s digital space? Let’s face it, open digital doors have made us all a little too comfortable and far less restrained in adding our two cents (positive or negative) to a real-time conversation. Recent examples of respect gone awry are in reports of permanent rifts caused by disagreements on Facebook over the last election. Remind kids (and adults) to be respectful of other people’s religious, political, and cultural differences. Learning to listen to others, consider contrary points of view, and exchange ideas respectfully without inciting is part of being a gracious guest on other people’s personal pages. Yes, adults could use extra coaching in this area as well — it’s never too late to learn the art of being a proper guest.
3. The power of please, thank you, and eye contact. Even in our one-click world, please, thank you, and face-to-face friendships matter now more than ever. Teach kids who love to abbreviate and use slang that a good old-fashioned please and thank you helps build relationships that will outlast the digital space. Another lost art is the handwritten thank-you, birthday, or sympathy note sent via the beloved U.S. Postal System. It's great to celebrate or express genuine messages online, but for the big milestones, especially a loss, there's nothing more valuable to people than a physical, sincere note written outside of the very public digital space. Also, maintaining strong connections with a handful of real, loyal, face-to-face friends still outweighs hundreds of accumulated online friends. While your child may connect with hundreds of "friends" playing video games, snapchatting, or texting, nothing beats face-to-face conversation as a way to build and keep long-term relationships.
The face-in-phone culture has become the norm but don’t let that affect relationships. Remember, checking your phone while traveling or having coffee alone? Absolutely acceptable. However, checking your phone while your friend or family member is telling you a story? Just plain rude.
4. Modesty matters. Everyone (especially the youth culture) seems to be taking off more and more clothes online, and some reports claim that sexting among teens is becoming the feared norm. But don’t let cultural trends move the value lines in your home. Rather than teach right and wrong, explain the “why” behind the rules. Teach your kids why modesty matters and why, in a world of sameness, it’s more important than ever to stand out as modest, confident, and worthy of respect. This old school rule applies to both females and males, who also claim to feel increasing pressure to show off their six pack abs or post a million selfies. Be mindful to focus more on your child’s abilities, talents, and character than his or her appearance.
5. Walk on the messy side of the digital street. Just as we've (hopefully) taught our sons to walk on the outside of a female along a sidewalk, in the online world, it's important to teach our kids to look out for others and be willing to take a few steps into the danger zone to protect someone else. Words hurt deeply, and in the digital realm that hurt gets multiplied, which can cause incredible damage to kids. Teaching kids to stand up for, have empathy for, and show consideration to someone who is vulnerable online is an old school rule that will help stomp out bullying and echo the fact that we are all equal.
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:62e07da2-bdf7-472b-a004-e44fe8c3f322> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/family-safety/5-old-school-rules-to-carry-into-the-online-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00397.warc.gz | en | 0.940614 | 993 | 2.71875 | 3 |
Ransomware and other types of malware pose a serious threat to your business, putting you at extreme risk of extended downtime, data breaches, and the loss of valuable resources. Your IT system can get infected by malware in several ways, one of which is when your employees visit and download content from malicious websites — whether accidentally or on purpose. For this reason, it’s crucial that you restrict access to certain websites by implementing solutions like domain name system (DNS) filtering.
What is DNS filtering?
DNS filtering is a method that prevents users from accessing certain websites, pages, and IP addresses. You can use it to prevent your employees from entering harmful, distracting, and non-work-related websites, such as porn, gambling, and gaming sites, and video streaming services.
By blacklisting the majority of malicious sites, DNS filtering can significantly reduce the risks they pose to your business.
How does DNS filtering work?
Websites are identified via their domain name (e.g., facebook.com) and a unique numeric IP address. The DNS is like a phonebook that matches domain names with their corresponding IP addresses, enabling computers to find the right website when users search using just domain names.
After you input the website’s domain name on your web browser’s address bar, the DNS server will look up the site’s IP address. Called a DNS query, this step enables your browser to locate where the site is being hosted. Once the site is found, the browser will connect with it and load the page. Depending on factors like internet speed, these steps usually take less than a second.
DNS filtering adds a few steps to the aforementioned process. During the DNS query, the DNS server first examines the website you’re attempting to visit. Access is blocked if the website does not meet the following requirements:
- The website must not be on a blacklist of previously identified malicious websites.
- If new, the website must not have been identified by previous crawls to have malicious content.
- If the website has not been crawled, it must pass a real-time content analysis conducted by the DNS filter.
Upon your access being blocked, you will be taken to a local IP address that explains why you cannot visit the site. These additional steps are low latency and will have little to no impact on your browsing speeds.
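A toy version of that decision logic, purely illustrative and with an invented blocklist, category list and block-page address, could look like this:

BLOCKLIST = {"malware-example.test", "phishing-example.test"}   # invented entries
BLOCKED_CATEGORIES = {"gambling", "adult"}                      # from the acceptable usage policy
BLOCK_PAGE_IP = "10.0.0.53"                                     # local "why was this blocked" page

def filtered_lookup(domain, category_of, real_lookup):
    # Return the real address only if the domain passes every filter check.
    if domain in BLOCKLIST:
        return BLOCK_PAGE_IP
    if category_of(domain) in BLOCKED_CATEGORIES:
        return BLOCK_PAGE_IP
    return real_lookup(domain)

# Stub functions stand in for the real categorization service and DNS resolver.
print(filtered_lookup("malware-example.test",
                      category_of=lambda d: "unknown",
                      real_lookup=lambda d: "198.51.100.7"))    # -> 10.0.0.53
print(filtered_lookup("intranet.example",
                      category_of=lambda d: "business",
                      real_lookup=lambda d: "198.51.100.7"))    # -> 198.51.100.7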
Your internal IT team can implement DNS filtering or you can have a third-party provider do it for your business. If you want to impose more controls — to limit access to a greater variety of websites, for instance — you can set up an acceptable usage policy for your third-party provider to enforce.
Does DNS filtering block access to all malicious websites?
There is currently no way to prevent access to every malicious website in existence. That is, DNS filtering cannot completely eliminate the possibility that your staff will end up in malware-laden corners of the internet. But by blacklisting the majority of these sites, this solution can significantly reduce the risks to your business.
There are several reasons why DNS filtering isn’t perfect:
New malicious websites pop up all the time
Three websites are created every second or over 250,000 every day. Of these, there’s no telling how many are safe and how many are unsafe. Therefore, DNS filters need some time to identify and blacklist new malicious sites. It’s at this brief gap between site creation and recognition that your employee might visit an unsafe website. To address this, educate your staff on good online habits through regular cybersecurity awareness training.
Your staff may be able to bypass filters
Some of your employees can use proxy sites to evade controls and access websites otherwise prohibited by DNS filters. A quick solution to this problem is to restrict access to proxy sites.
Users may be able to modify DNS filters
If you are implementing the service yourself, tech-savvy staff may find a way to access your DNS filters and change these. Resolve this by locking down your DNS filter settings so they cannot be modified easily. Also, prevent access to the service by anyone other than members of your IT team.
DNS filtering is an effective preventive measure you can implement to protect your business from multiple cyberthreats. If you need help deploying and getting the most out of this service, our specialists at Fidelis will be more than glad to assist you. Meanwhile, read about other solutions that can protect your business from malware and other types of cyberthreats by downloading this free eBook today. | <urn:uuid:cb32b89d-d3ff-4eb8-b051-8de09c0f4bff> | CC-MAIN-2022-40 | https://www.fidelisnw.com/2021/11/everything-you-need-to-know-about-dns-filtering/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00397.warc.gz | en | 0.92569 | 934 | 2.828125 | 3 |
This blog is for the lonely and those with crippling social anxieties. But, if you’re a fully-functioning person-to-person talker, it can be fun as well. Eventually, you might not even know you’re talking to a bot, and when that happens those who are lonely or have debilitating social anxieties will be laughing alongside us within The Matrix.
But hopefully that doesn’t happen for a while. And, if you’ve been paying attention to the news, chatbots haven’t really figured out that bigotry and racism aren’t a great way to ensure you don’t get your plugged pulled.
With that, let’s delve into the amazing, charming, and always hilarious timeline of AI chatbots.
Before we do, let’s explorer why AI chatbots even exist in the first place. The ultimate goal, according to MIT is to have them predict cyber attacks at a higher rate than humans can, detecting 85 percent of all attacks.
Watch the following video for a little more about this:
Now, let’s get back to the chatbots.
You can’t talk chatterbots without first mentioning Mr. Alan Turing. Turing, the Turing Test’s namesake, composed an article in 1950 titled “Computing Machinery and Intelligence” which offered criteria for a test to determine if one was speaking to a human being or a robot. These criterion eventually became part of the aforementioned Turing Test.
Brainiac contemporaries of Turing set out to defeat his test, and in 1966 the chatterbot ELIZA was created by Joseph Weizenbaum; it was seemingly able to fool users into thinking they were talking to a human being. Being that this was 1966, ELIZA failed the Turing Test almost immediately, at which point Weizenbaum was forced to say his experiment was simply a debunking exercise.
Debunking exercise or not, ELIZA used the principles of chatterbottery still used today: cue words, keywords/phrases and preprogrammed responses to them (e.g. a response of "tell me more about your family" would follow any mention of the word "mother"). ELIZA is a milestone of computer programming because it was the first time a programmer tried to create the illusion of human conversation in a human-robot interaction—something developers are still trying to perfect today.
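The keyword-and-canned-response idea is simple enough to fit in a few lines. Here is a toy homage in Python (an invented script, not Weizenbaum's actual DOCTOR rules):

import re

# Keyword pattern -> canned response, checked in priority order (tiny invented script).
RULES = [
    (r"\b(?:mother|father|family)\b", "Tell me more about your family."),
    (r"\bI need (.+)",                "Why do you need {0}?"),
    (r"\bI am (.+)",                  "How long have you been {0}?"),
]
FALLBACK = "Please, go on."

def eliza_reply(utterance):
    for pattern, response in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return response.format(*match.groups())
    return FALLBACK

print(eliza_reply("I am worried about work"))   # How long have you been worried about work?
print(eliza_reply("My mother called me"))       # Tell me more about your family.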
The next chatterbot to gain some steam was Kenneth Colby's PARRY. PARRY impersonated a person with paranoid schizophrenia. Interestingly, transcripts of conversations with PARRY and with real persons with paranoid schizophrenia were given to psychiatrists with the Turing Test in play. Only 48 percent of the psychiatrists were able to correctly distinguish PARRY from the real patients.
But PARRY and ELIZA were around in the days before personal computers, which means the sample sizes were rather small. How a chatterbot would react to the general public would be another thing entirely. Now, let's get into the first chatterbots readily available to the public, where, in this blogger's opinion, things start to get very interesting.
AI chatbots of the early Internet age were great. At the time it was amazing that a robot could answer your question with the seeming intelligence of a real person. Nowadays they seem so dated in comparison to some of the chatterbots we’ll be discussing later.
One of the earliest ones that the mass population could tinker with was A.L.I.C.E. (Artificial Linguistic Internet Computer Entity). A.L.I.C.E. is one of the most successful chatterbots in terms of prizes awarded to accomplished humanoids/talking robots. Like most chatterbots, however, A.L.I.C.E. has been unable to pass the Turing Test.
A.L.I.C.E. was turned online in 1995 and still, however outdated, has relevance today as Spike Jonze used it as the inspiration for his acclaimed film Her starring Joaquin Phoenix.
Many will remember the charm of the next AI Chatbot, Clippy. However annoying Microsoft’s office assistant Clippy may have been, it was a fairly useful tool. It may not have been as complex as other chatterbots, as it was simply a FAQ/search engine with more personality.
Clippy is no longer default with Microsoft Office products, but it still can be downloaded if you miss its quirky countenance.
The chatterbots of this time period were about as advanced as A.L.I.C.E. and Clippy, but the chatterbots you see today are pretty awesome.
Chatterbots are all the rage right now. You can’t go to a support page without one popping up and asking if they can assist you via chat. Or even when you’re making an appointment over the phone—most of the time the chatterbot is taking your information for a supposedly faster experience.
The tech is cool, the learning is machine, and the benefits unaccounted. Let’s take a look at a chatterbot who was actually doing what it was programmed to do, albeit in a way that was unwanted.
Oh, look, it’s another Microsoft product: Tay. Tay stood for Thinking About You and it certainly had some interesting thoughts about some of you. Tay was released on Twitter where it would interact with Twitter’s user-base and learn on the fly. The idea was that by interacting with Twitter-people, Tay would, as designed, mimic a 19-year-old female Twitter user.
Unfortunately, Tay became a bit racist, sent sexually charged tweets, and was brash, outlandish, and crude to other Twitter users—basically a 21st century, Twitter version of Alex from A Clockwork Orange.
Don’t get the wrong impression—Tay did exactly as it was designed to do and it did it very well. One could say that it just got into the wrong crowd (or that no failsafes for this type of behavior were put into place). Alas, Microsoft pulled the plug on Tay pretty quickly, but they intend to re-release it once all the wrinkles are ironed out.
A less moderated chatterbot widespreadly used today is Cleverbot. Nothing too special about Cleverbot, other than it is just a polished version of A.L.I.C.E. operating and learning through user interactions.
Jabberwacky is basically Tay stripped of Twitter. Jabberwacky will say anything and everything and will amuse you greatly, if given the chance.
Then there are the popular bots like Apple's Siri, Amazon's Alexa, and Google Now, which hope to become a fully automated way to operate your devices through your voice (and one day learn your habits to make your life easier). While they're not necessarily chatterbots, they still operate in a similar fashion. They are still very much in the early stages of development, and their end goals appear to be the chatterbots of the future.
The future of chatterbots is just a refined, utopian, science fiction-y product of everything that has already been mentioned.
They’ll interact with the public doing customer service jobs and automated tasks that humans have done for generations, with the ability to find the most efficient way to do so (they also show up on time, and have an unmatched work ethic and friendliness to boot!).
Imagine being able to say “Hey Siri, make a doctor’s appointment for me tomorrow.” And Siri would talk to the doctor’s office’s chatterbot to accomplish the scheduling. Seems pretty great.
Unfortunately, we’re not quite there yet, but hopefully in the near future chatterbots go from unpredictable to entirely predictable and efficient. Until the, let’s just enjoy their unique quirkiness. | <urn:uuid:0a9a1461-bc2c-4e95-b916-644d33a16408> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/past-present-future-ai-chatbots | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00597.warc.gz | en | 0.965232 | 1,660 | 2.8125 | 3 |
When COVID-19 cases erupted across the nation in spring 2020, many federal agencies that provide educational opportunities and continuous training for their workers scrambled to adopt a virtual learning model.
To keep training and education moving for workers during the pandemic, federal agencies used everything from simple videoconferencing tools and cloud-based productivity suites, such as Google Workspace and Microsoft 365, to more complex applications such as simulations and learning management systems.
“There are some things we would do differently, but some things we will do exactly the same,” says Todd Boudreau, deputy commandant of the U.S. Army Cyber School at the Cyber Center of Excellence (CCoE), Fort Gordon, Ga.
“But it absolutely raises our confidence that if we had to go through this again, we are much more prepared,” he adds.
RELATED: How does the Army use augmented reality to train soldiers?
The Army Kept Cyber Classes on Track
Soldiers assigned to support signal, cyberspace and electronic warfare operations take training courses at Fort Gordon that last from four weeks to nearly 18 months.
Because hundreds of trained soldiers graduate and hundreds more new soldiers begin training each week at the base, the CCoE could not allow pandemic-related delays to push training behind schedule.
“In the initial months of the pandemic, there was a mad dash to adopt virtual learning. Many organizations found themselves shopping for learning technology to give their people an opportunity to continue to learn and grow,” says Erik Williams, a senior consultant at MDA Leadership Consulting.
In order to keep the training on track, the center — like so many civilian schools and universities — incorporated remote learning and reduced class sizes, and required in-person students to wear masks and socially distance, says Dwayne Williams, deputy commandant of the CCoE’s Signal School, which focuses on voice and data communication services.
For soldiers who needed to be quarantined as well as those taking classes remotely, the CCoE built Wi-Fi networks in the barracks. The soldiers used personal devices such as smartphones to access videoconferencing tools to attend lectures and obtain educational content, he says.
“Instructors created basic course materials and uploaded them so the soldiers could access them,” Dwayne Williams says.
Click the banner below to get access to a customized content experience and exclusive articles.
Developing a New Virtual Learning Strategy
The Signal School and the Cyber School used similar but slightly different strategies when they implemented remote learning.
For example, the Cyber School, which focuses on cyber and electronic warfare and trains about 3,000 soldiers a year, allowed warrant officers who live off base to learn remotely through videoconferencing and other software tools. That freed up classroom space to allow the school to teach its remaining courses onsite.
The school split up each 20-person class into two classrooms with 10 students each, which enabled social distancing, Boudreau says.
CCoE initially used a variety of videoconferencing tools, but the Army has since standardized on Microsoft Teams, he says. The Cyber School also took advantage of existing technology for remote learning, including a web-based Git repository that houses a significant portion of the school’s curriculum.
“It’s a software solution that allows us to rapidly update our curriculum in an agile manner,” Boudreau says.
The Cyber School did not rely on the barracks Wi-Fi, but the Signal School did. It also subscribed to a new cloud app that allowed instructors to upload course content and students to access it on their devices.
At the height of the pandemic, the Signal School could handle lectures and practical exercises remotely, but it was hard to do hands-on demonstrations from a distance, Dwayne Williams says. Often, if soldiers were unable to demonstrate hands-on mastery of a subject remotely, the instructor teaching the class instead spent more time on lectures and exercises.
But in some cases, Signal School students used their devices to run simulations so they could demonstrate their proficiency with certain skills, he says.
EXPLORE: What are creative ways to bridge the federal IT skills gap?
The Foreign Services Institute Embraced Virtual Learning
The State Department’s Foreign Service Institute, which provides diplomatic training to government employees who work in embassies and consulates overseas, quickly pivoted to virtual learning when the pandemic began.
In mid-March 2020, FSI’s language school transitioned all language students to virtual training. The instructors taught about 1,000 students in 60 language courses using Zoom and other videoconferencing tools, says FSI Deputy Director Joan Polaschik.
“We transitioned all our language students from in-person training to virtual training over the course of a weekend,” she says. “They never missed a day.”
Besides language classes, FSI also provides courses on regional studies, diplomatic tradecraft and job-specific skills, as well as leadership and management training to State Department employees and other federal employees who work abroad. It took three to eight weeks to move FSI’s other courses online, Polaschik says.
Language training translates easily online because people can talk and share things on a videoconferencing screen, she says. But other classes require face-to-face role playing, such as training on how to navigate visiting an American in an overseas jail. When COVID hit, FSI leaders had to figure out how to move those classes online.
“We had to scramble to figure out which platforms would work best,” Polaschik says. “It wasn’t just a question of videoconferencing. We’ve always prided ourselves on having experiential learning. How do you take seven hours of classroom time into something that’s meaningful in the virtual world?”
Now, instructors mix synchronous and asynchronous learning. At the start of the day, an instructor lectures via videoconference. Then students work individually on a project offline, or together in small groups through a group chat. At the end of the day, the entire class meets up again with the instructor over videoconferencing.
FSI deploys more than 30 digital apps for remote learning, including a new learning management system where teachers upload course content, and five videoconferencing tools: Adobe Connect, Cisco Webex, Google Meet, Microsoft Teams and Zoom.
The department uses Microsoft 365 and Google Workspace as well as online interactive whiteboards and cloud tools that allow instructors to conduct live polls and quizzes, Polaschik says.
FSI will continue virtual training when the pandemic eases. “In this new world, they’re passing their exams at higher rates with higher scores,” she says.
DISCOVER: Which IT skills are most sought after in the federal government?
State Department Was Ahead of the Curve on Virtual Training
Other offices within the State Department had experience with virtual training even before the pandemic. The department’s Virtual Student Federal Service (VSFS) assigns more than 3,000 interns to more than 50 federal agencies every academic year.
The students are paired with mentors and work remotely on unclassified government projects for 10 hours a week from their homes or college dorms.
The program gives U.S.-based college students work experience while they contribute to their government, says Nora Dempsey, senior adviser for innovation in the department’s Bureau of Information Resource Management.
Students may use their own computers and email addresses. Mentors use their agency’s technology or ask students for a preference. “We’re flexible. We don’t script it,” Dempsey says.
The VSFS office hired four interns this past year and used Google Workspace, Adobe Creative Cloud and other online management tools to track progress, Dempsey says. “You name it, we’ve used it.” | <urn:uuid:a1628946-d31c-4fcd-bf62-48b4c8fd5f63> | CC-MAIN-2022-40 | https://fedtechmagazine.com/article/2021/11/virtual-classrooms-expand-learning-opportunities-federal-workers | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00597.warc.gz | en | 0.944649 | 1,662 | 2.546875 | 3 |
How AI is shaping cybersecurity
Artificial Intelligence is a fast-growing trend in cybersecurity, and for good reason. AI can be applied in most cybersecurity issues, including email filtering, cyber-threat detection, automated monitoring, anti-virus, and log-in security. It can also help with backup and recovery.
The AI in cybersecurity market is expected to grow at a rate of 23.6% from 2020 to 2027 to reach $46.3 billion by 2027.
Here are some of the main use cases and their benefits:
It can detect cyber threats
The main reason IT professionals are turning to artificial intelligence for cybersecurity is to detect malware and cyberthreats. Traditional cybersecurity solutions can’t stay on top of the ever-changing trends in malware and other threats, but artificial intelligence can.
AI can recognise malware by using algorithms to identify patterns in malicious software. The AI is trained to spot malware and isolate it before it can enter a network. AI is also used here to spot any anomalies in data and potential risks.
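To make "identifying patterns" a little more concrete: many malware classifiers are trained on simple byte-level features of a file, such as its entropy, because packed or encrypted payloads tend to look near-random. A minimal feature extractor is sketched below; the threshold is an invented heuristic, and a real system would feed features like this into a trained model rather than rely on a single cut-off.

import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    # Shannon entropy in bits per byte; 8.0 means maximally random-looking.
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_suspicious(path, threshold=7.2):
    with open(path, "rb") as f:
        return byte_entropy(f.read()) > threshold

print(byte_entropy(b"AAAA"), byte_entropy(bytes(range(256))))   # 0.0 versus 8.0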
It’s used for multi-factor authentication
Multi-factor authentication is an extra layer of security when you log into an application or software, such as a password and a pin or one-time code.
AI makes this even more secure – it can adjust access privileges based on your location and the network. It can also collect user information to determine access based on your behaviour.
Biometrics have been used in multi-factor authentication for a while now involving scanning retinas, fingerprints, palmprints, and more. AI is used here to combine biometrics with a password before allowing access to a network. Passwords alone are no longer secure enough, which is why many businesses are turning to MFA.
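The "adjust access based on location and behaviour" idea is often called adaptive or risk-based authentication. A deliberately simplified scoring sketch follows; the signals, weights and thresholds are invented for illustration only.

def risk_score(login):
    # Accumulate risk points from contextual signals about a login attempt.
    score = 0
    if login.get("country") not in login.get("usual_countries", []):
        score += 3                      # unfamiliar location
    if login.get("new_device"):
        score += 2                      # device never seen before
    if not (8 <= login.get("hour", 12) <= 19):
        score += 1                      # outside the user's normal hours
    return score

def required_factors(login):
    score = risk_score(login)
    if score >= 5:
        return ["password", "one_time_code", "biometric"]
    if score >= 2:
        return ["password", "one_time_code"]
    return ["password"]

print(required_factors({"country": "BR", "usual_countries": ["GB"],
                        "new_device": True, "hour": 3}))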
Protect against malicious bots
Bots make up more than 50% of today’s website traffic. Although some bots are important for website performance, many bots present on your website have malicious intent. Using AI, sophisticated bot management solutions detect these malicious bots as well as flag up any suspicious or previously unknown user behaviour.
Cyber criminals are frequently finding new ways to bypass security systems, but AI allows businesses to stay one step ahead by identifying any unusual behaviour.
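Behaviour-based bot flagging can likewise be pictured as a score built from request telemetry. The signals and thresholds below are made up for illustration; production systems combine far more features and typically learn the weights rather than hand-code them.

def classify_client(requests_per_minute, mouse_moves, headless_user_agent, failed_logins):
    score = 0
    if requests_per_minute > 120:
        score += 2        # far faster than a human browses
    if mouse_moves == 0:
        score += 2        # no pointer activity at all
    if headless_user_agent:
        score += 1        # e.g. a headless-browser user-agent string
    if failed_logins > 5:
        score += 2        # credential-stuffing pattern
    return "likely bot" if score >= 3 else "likely human"

print(classify_client(300, 0, True, 0))    # likely bot
print(classify_client(12, 45, False, 0))   # likely human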
Monitor and report
AI benefits cybersecurity by automatically generating reports when cyberthreats are suspected in the network. Thanks to AI, large amounts of data can be analysed with the end goal of developing new systems and software to reduce the risk of cyberattacks.
Your IT systems can be updated in real-time as AI can scan the web for news, studies, and articles involving cyberattacks, therefore your IT provider stays updated on any evolving threats and risks, so they can better protect your business.
Outsourcing your IT makes sense because most businesses do not have the resources needed to build or maintain cybersecurity AI solutions. To learn more about our managed cybersecurity solutions, contact us at firstname.lastname@example.org. | <urn:uuid:f9f81de8-51f3-4287-8fe6-86bf24e0a8a8> | CC-MAIN-2022-40 | https://www.auratechnology.com/data-security/ai-is-shaping-cybersecurity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00597.warc.gz | en | 0.934354 | 599 | 2.984375 | 3 |
Black box techniques are the only techniques available for analyzing and testing non-developmental binary executables without first decompiling or disassembling them. Black box tests are not limited in utility to COTS and other executable packages: they are equally valuable for testing compiled custom-developed and open source code, enabling the tester to observe the software's actual behaviors during execution and compare them with behaviors that could only be speculated upon based on extrapolation from indicators in the source code.
Black box testing also allows for examination of the software’s interactions with external entities (environment, users, attackers)—a type of examination that is impossible in white box analyses and tests. One exception is the detection of malicious code. On the other hand, because black box testing can only observe the software as it runs and “from the outside in,” it also provides an incomplete picture.
For this reason, both white and black box testing should be used together, the former during the coding and unit testing phase to eliminate as many problems as possible from the source code before it is compiled, and the latter later in the integration and assembly and system testing phases to detect the types of Byzantine faults and complex vulnerabilities that only emerge as a result of runtime interactions of components with external entities. Specific types of black box tests include:
Binary Security Analysis
This technique examines the binary machine code of an application for vulnerabilities. Binary security analysis tools usually occur in one of two forms. In the first form, the analysis tool monitors the binary as it executes, and may inject malicious input to simulate attack patterns intended to subvert or sabotage the binary’s execution, in order to determine from the software’s response whether the attack pattern was successful. This form of binary analysis is commonly used by web application vulnerability scanners. The second form of binary analysis tool models the binary executable (or some aspect of it) and then scans the model for potential vulnerabilities.
For example, the tool may model the data flow of an application to determine whether it validates input before processing it and returning a result. This second form of binary analysis tool is most often used in Java bytecode scanners to generate a structured format of the Java program that is often easier to analyze than the original Java source code.
Software Penetration Testing
Applies a testing technique long used in network security testing to the software components of the system or to the software-intensive system as a whole. Just as network penetration testing requires testers to have extensive network security expertise, software penetration testing requires testers who are experts in the security of software and applications. The focus is on determining whether intra-or inter-component vulnerabilities are exposed to external access and whether they can be exploited to compromise the software, its data, or its environment and resources.
Penetration testing can be more extensive in its coverage and also test for more complex problems than other, less sophisticated (and less costly) black box security tests, such as fault injection, fuzzing, and vulnerability scanning. The penetration tester acts, in essence, as an “ethical hacker.” As with network penetration testing, the effectiveness of software penetration tests is necessarily constrained by the amount of time, resources, stamina, and imagination available to the testers.
Fault Injection of Binary Executable
This technique was originally developed by the software safety community to reveal safety-threatening faults undetectable through traditional testing techniques. Safety fault injection induces stresses in the software, creates interoperability problems among components, and simulates faults in the execution environment. Security fault injection uses a similar approach to simulate the types of faults and anomalies that would result from attack patterns or execution of malicious logic, and from unintentional faults that make the software vulnerable.
Fault injection as an adjunct to penetration testing enables the tester to focus in more detail on the software’s specific behaviors in response to attack patterns. Runtime fault injection involves data perturbation. The tester modifies the data passed by the execution environment to the software, or by one software component to another. Environment faults, in particular, have proven useful to simulate because they are the most likely to reflect real-world attack scenarios. However, injected faults should not be limited to those that simulate real-world attacks. To get the most complete understanding of all of the software’s possible behaviors and states, the tester should also inject faults that simulate highly unlikely, even “impossible,” conditions. As noted earlier, because of the complexity of the fault injection testing process, it tends to be used only for software that requires very high confidence or assurance.
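At its simplest, environment fault injection means forcing the interfaces the software depends on to misbehave and observing how it copes. The sketch below uses Python's standard unittest.mock to simulate a failed and then a corrupted file read; the function under test is invented for the example.

from unittest import mock

def load_config(path="app.conf"):
    # Invented code under test: it is supposed to fail safely.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""          # fall back to safe defaults instead of crashing

# Inject an environment fault: every file open fails as if the disk vanished.
with mock.patch("builtins.open", side_effect=OSError("injected disk failure")):
    assert load_config() == ""     # the software should degrade gracefully

# A perturbation variant: the file "exists" but returns oversized, attacker-shaped data.
with mock.patch("builtins.open", mock.mock_open(read_data="A" * 100000)):
    oversized = load_config()
    assert len(oversized) == 100000    # does downstream code validate what it read?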
Fuzz Testing
Like fault injection, fuzz testing involves the input of invalid data via the software's environment or an external process. In the case of fuzz testing, however, the input data is random (to the extent that computer-generated data can be truly random): it is generated by tools called fuzzers, which usually work by copying and corrupting valid input data. Many fuzzers are written to be used on specific programs or applications and are not easily adaptable. Their specificity to a single target, however, enables them to help reveal security vulnerabilities that more generic tools cannot.
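A bare-bones mutation fuzzer really is just "copy a valid input, corrupt it, feed it in, and watch for unhandled failures." The toy parser below is a stand-in for whatever component is actually under test.

import random

def mutate(seed: bytes, flips=8):
    # Copy a known-good input and corrupt a few random bytes.
    data = bytearray(seed)
    for _ in range(flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, rounds=1000):
    crashes = []
    for _ in range(rounds):
        case = mutate(seed)
        try:
            target(case)
        except Exception as exc:       # any unhandled exception is a finding
            crashes.append((case, exc))
    return crashes

def toy_parser(blob: bytes):
    # Deliberately buggy stand-in: it trusts the length byte at offset 0.
    length = blob[0]
    return blob[1:1 + length].decode("ascii")

findings = fuzz(toy_parser, seed=b"\x07payload")
print(f"{len(findings)} crashing inputs found")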
Byte Code, Assembler Code, and Binary Code Scanning
This is comparable to source code scanning but targets the software’s uninterpreted bytecode, assembler code, or compiled binary executable before it is installed and executed. There are no security-specific bytecode or binary code scanners. However, a handful of such tools will include searches for certain security-relevant errors and defects;
see http://samate.nist.gov/index.php/Byte_Code_Scanners for a fairly comprehensive listing.
Automated Vulnerability Scanning
Automated vulnerability scanning of operating system and application level software involves the use of commercial or open source scanning tools that observe executing software systems for behaviors associated with attack patterns that target specific known vulnerabilities. Like virus scanners, vulnerability scanners rely on a repository of “signatures,” in this case indicating recognizable vulnerabilities.
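The signature-matching core of such a scanner is conceptually small: compare what a service reports about itself with a repository of versions known to be vulnerable. The entries below are invented placeholders, not real advisory data.

# Invented signature repository: product -> vulnerable versions -> advisory IDs.
SIGNATURES = {
    "ExampleHTTPd": {
        "2.4.1": ["EXAMPLE-2021-0001"],
        "2.4.2": ["EXAMPLE-2021-0002", "EXAMPLE-2022-0003"],
    },
}

def check_banner(banner: str):
    # Match a "Product/Version" service banner against the repository.
    product, _, version = banner.partition("/")
    return SIGNATURES.get(product, {}).get(version, [])

print(check_banner("ExampleHTTPd/2.4.2"))   # two known issues flagged
print(check_banner("ExampleHTTPd/2.4.9"))   # no match, which is not proof of safety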
Like automated code review tools, although many vulnerability scanners attempt to provide some mechanism for aggregating vulnerabilities, they are still unable to detect complex vulnerabilities or vulnerabilities exposed only as a result of unpredictable (combinations of) attack patterns. In addition to signature-based scanning, most vulnerability scanners attempt to simulate the reconnaissance attack patterns used by attackers to “probe” software for exposed, exploitable vulnerabilities.
Vulnerability scanners can be either network-based or host-based. Network-based scanners target the software from a remote platform across the network, while host-based scanners must be installed on the same host as the target. Host-based scanners generally perform more sophisticated analyses, such as verification of secure configurations, while network-based scanners more accurately simulate attacks that originate outside of the targeted system (i.e., the majority of attacks in most environments). Vulnerability scanning is fully automated, and the tools typically have the high false positive rates that typify most pattern-matching tools, as well as the high false-negative rates that plague other signature-based tools.
It is up to the tester to both configure and calibrate the scanner to minimize both false positives and false negatives to the greatest possible extent and to meaningfully interpret the results to identify real vulnerabilities and weaknesses. As with virus scanners and intrusion detection systems, the signature repositories of vulnerability scanners need to be updated frequently. For testers who wish to write their own exploits, the open source Metasploit Project http://www.metasploit.com publishes black hat information and tools for use by penetration testers, intrusion detection system signature developers, and researchers. The disclaimer on the Metasploit website is careful to state:
“This site was created to fill the gaps in the information publicly available on various exploitation techniques and to create a useful resource for exploit developers. The tools and information on this site are provided for legal security research and testing purposes only.”
Help to do more!
The content you read is available for free. If you’ve liked any of the articles at this site, please take a second to help us write more and more articles based on real experiences and maintain them for you and others. Your support will make it possible for us. | <urn:uuid:eef63202-ab2d-413a-8458-3243cca07893> | CC-MAIN-2022-40 | https://melsatar.blog/2012/12/27/black-box-security-analysis-and-test-techniques/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00597.warc.gz | en | 0.919545 | 1,697 | 3.171875 | 3 |
The Internet demand is always growing, especially the hugely popular video streaming services are increasing greatly. This provides a threat for the service provider. As the hottest topic in the telecommunication industry, DWDM offers unprecedented bandwidth which promises an effective solution to the challenges posted by the Internet growth. But what is DWDM, do you really know?
What Is DWDM?
DWDM wiki has defined it as an optical multiplexing technology. As one wavelength pattern of WDM system (the other pattern is CWDM), it stands for Dense Wavelength Division Multiplexing, which is used to increase bandwidth over existing fibre networks. This powerful technology can create multiple virtual optical fibres, so as to increase bandwidth on existing fibre optic backbones. It means that the fibre in DWDM system can transmit multiple signals of different wavelengths simultaneously. More specifically, the incoming signals are assigned to specific wavelengths within a designated frequency band, then the signals are multiplexed to one fibre. In addition, the most commonly used grid is the 100GHz grid, which consists of a spacing of 0.8nm per channel.
After knowing what is DWDM, we need to learn DWDM architecture. A typical DWDM architecture includes transmitter, receiver, optical amplifier, transponder, DWDM multiplexer and demultiplexer. Transmitter and receiver are the place where the source signal comes in and then multiplexed. Optical amplifier can amplify the signals in the wavelength range, which is very important for DWDM application. Transponder is the converter of wavelength. It’s responsible for converting the client optical signal back to an electrical signal. Multiplexer first combine multiple wavelengths of different fibre to one fibre, and at the receiving end, the demultiplexer separates all wavelengths of the composite signal onto individual fibres. Commonly, channels of DWDM Mux/Demux are available in 8, 16, 40 and 96 channels. All the DWDM basics work together to enable high capacity data flow in ultra-long distance transmission. The following figure is DWDM working principle.
Why Use DWDM Technology?
The most obvious advantage of DWDM technology is providing the infinite transmission capacity, which would meet the increasing Internet demand. And more capacity can be added just by upgrading several equipment or increasing the number of lambdas on the optical fibre. Thus, the investment of DWDM technology has been reduced. Besides, DWDM technology also enjoys several other advantages, like the transparency and scalability.
Transparency. Due to DWDM is a physical layer architecture, it can support Time Division Multiplex and data formats like Gigabit Ethernet, Fibre Channel with open interfaces over a physical layer.
Scalability. It’s easy to be expanded. A single fibre can be divided into many channels, thus there is no need to add extra fibre but the wavelength will be increased. All these advantages make DWDM popular in the network.
Application of DWDM
As a new technology more applications of DWDM are yet to be tapped and explored. It was first deployed on long-haul routes. And now, DWDM technology is ready for long distance telecommunication operators. Using point to point or ring topology, the capacity will be dramatically improved without deploying an extra fibre. In the future, DWDM will continue to provide a higher bandwidth for the mass of data. With the development of technology, the system capacity will grow.
As for the question what is DWDM, I believe you have a good understanding of it. This powerful technology is related closely with current industry advancements trend. Now, service providers are faced with the sharp growth in demand for network capacity, DWDM is the best solution. With DWDM technology, the transmission work is no longer limited by the speed of available equipment, because it provides the high bandwidth without limit. We believe, DWDM will shine in the network world. | <urn:uuid:8b13457e-192f-4dd3-abea-c8e83122400e> | CC-MAIN-2022-40 | https://www.fiber-optic-equipment.com/tag/wdm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00597.warc.gz | en | 0.933118 | 827 | 3.734375 | 4 |
It seems like almost weekly we are seeing headline news stories of a new company falling victim to a data breach. There are many different reasons why a company is breached: Denial of Service Attacks, Malware Attacks, Password Attacks, and so many more! According to Verizon’s 2018 Data Breach Investigation Report, “81% of hacking-related breaches are due to compromised passwords.” With that being said, I think it’s safe to say that majority of data breaches are due to bad passwords. Needless to say, creating a more secure password policy is a topic that needs to be discussed internally within every company.
When a hacker attempts to crack a user’s passwords, they are not just trying a few educational guesses in hopes that they either find the right one or they move on to someone else’s account to try a few more educational guesses. Instead, hackers have advanced technology and software that does all of the work for them.
On the internet, there are many different password cracking tools available for public use. One of the most well-known password cracking tools is called Cain and Abel, which is only available for Windows based systems. According to the Infosec Institute, Cain and Able “can work as sniffer in the network, cracking encrypted passwords using the dictionary attack, recording VoIP conversations, brute force attacks, cryptanalysis attacks, revealing password boxes, uncovering cached passwords, decoding scrambled passwords, and analyzing routing protocols.” Another popular tool is John The Ripper. This is a free software that is available for Mac OS X and Windows based systems and it can detect weak passwords. They do have a paid option that has many more beneficial features.
Besides Cain and Able and John The Ripper, OphCrack is a popular rainbow table tool and L0phtCrack cracks Windows passwords from hashes. For more information about rainbow tables, click here.
Microsoft’s LM hashing algorithm is insecure with their 7 character password segmentation and it is recommended that security professionals disable the LM hashing algorithm and use the NT hashing algorithm only.
As you can see, there are so many tools available to crack passwords. Besides the tools that are available, let’s talk about a few of the methods a hacker can use to crack passwords.
1. Brute Force
A Brute Force password attack can be a very successful, but a slow process for cracking passwords. The program will attempt to guess passwords repeatedly until the password has been cracked or the list of predetermined passwords has been exhausted. Success for this attack is determined by the set of predetermined passwords. If the file is larger, then there is a larger probability of success. The attacks can take anywhere from a few minutes to a few years dependent upon the software used and the length of the password trying to be cracked. Longer passwords with multiple character sets take longer to crack.
2. Rainbow Tables
Rainbow Tables are a very successful method of cracking passwords that are 14 characters or less. Rainbow tables are enormous compilations of pre-computed hashed values of possible password combination. Basically, it allows hackers to reverse the hashing function to determine what the plain text password might be. Once the appropriate hash has been found, the password is cracked. For Windows passwords up to 14 characters, these tables can have up to a 99.9% accuracy rating.
3. Hybrid Attacks
A Hybrid Attack is a password cracking technique that uses a combination of a Dictionary Attack and a Brute Force Attack. This type of password hacking combines dictionary words with numbers and special characters to try and gain access to a company’s network. It is typically used to target passwords made of a common dictionary word followed by a special character and/or number. Hybrid Attacks are extremely successful due to the fact that studies have shown how the typical user creates a password with a common dictionary word and then either one single letter and/or special character to meet the password policy requirements.
4. Dictionary Attacks
Dictionary Attacks are quite simple, yet they are very dangerous to companies. As stated previously, studies have shown that users like to create passwords with common “dictionary” words like password, summer, football, etc. In a Dictionary Attack, the password cracker tool will try common dictionary words as passwords until the hacker gains access to the company’s network.
With so many different password cracking methods, the thought of “How can I keep my company safe being the IT Administrator?” is probably on your mind. Truth be told, Microsoft Password Complexity is not secure nor strong enough to keep hackers away. Microsoft does not let you prohibit common dictionary words to prevent dictionary attacks, nor do they allow you to set the minimum character limit to 15 characters to prevent an attack via rainbow table. The only solution for this is a Windows based Password Filter. The nFront Password Filter allows you to create a customized dictionary file to prevent dictionary attacks and you are able to strengthen your password policy settings beyond what Microsoft currently allows. For more information on why the Windows Password Policy isn’t enough, click here.
Even if you are not the IT Administrator for your company and you’re an employee, take it upon yourself to create a stronger password. At least you can rest easy knowing that your password won’t be the reason your company is hacked. According to CNN Money, US companies lose $15.4 million per year due to hackings.
As I stated earlier, everyone has access to password cracking software tools. Do yourself and your company a favor and run one of these tools internally! There are many professionals, called penetration testers, who can conduct a formal penetration test for you with password cracking tools and show your company what the vulnerabilities are. | <urn:uuid:fcfa9679-3834-497a-bcf3-952dc3cce640> | CC-MAIN-2022-40 | https://nfrontsecurity.com/blog/2020/09/23/how-do-passwords-get-hacked/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00797.warc.gz | en | 0.939494 | 1,176 | 2.609375 | 3 |
A few years ago, autonomous vehicles were only something out of a sci-fi movie. However, self-driving features are already available in many newer cars and the idea of fully automated vehicles doesn’t seem that far fetched. 5G networks will have a large influence on the development of autonomous cars, resulting in faster, smarter, and safer commutes.
Before fully self-driving cars are a reality, large-scale adoption of 5G is a necessity. While 4G is fast enough to support HD streaming and limited self-driving features, it can’t support autonomous cars. The next-generation technology of 5G brings safety-relevant requirements that no other network can provide.
Benefits of 5G for Autonomous Vehicles
5G has what is known as network slicing, meaning that the wireless network is divided into virtual levels. As an example, one level could be used only for autonomous vehicles. Having a dedicated level ensures that important safety-relevant information to self-driving cars will be given priority, keeping passengers safe.
Fifth-generation wireless technology will connect almost everything within a fully-responsive network. In the future, autonomous cars will generate nearly 2 million gigabits. Already, leading companies such as Intel and Qualcomm are creating chips to turn self-driving vehicles into mobile data centers. This amount of data allows these cars to make and execute real-time, incredibly complex decisions.
When 5G is widely available, the potentials for vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) becomes a reality. The car will be able to “see” the bicyclist that is making a turn onto the next street or the person that’s crossing the road when the light is turning yellow. The low latency of the network will ensure these vehicles are reliable and – above all – safe.
There are endless possibilities for autonomous vehicles and 5G, but will only continue to be a dream until widespread 5G is adopted. Technology and the evolution of our wireless network bring amazing improvements to society and the full automation of vehicles is a slow process. Half of all vehicles already come equipped with some form of intelligent assistance, whether it’s parking assistance, speed regulation, or lane assistance. However, these are all systems that return control to the driver. Fully automated driving is only the final stage of a long-lasting process and one that will have the greatest impact on society as we know it. | <urn:uuid:7ee619bc-020b-4a6e-bcc2-08aeb65c56f5> | CC-MAIN-2022-40 | https://www.165halsey.com/5gs-crucial-role-in-autonomous-vehicles/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00797.warc.gz | en | 0.954191 | 507 | 3.03125 | 3 |
In order to understand why this is a big deal, we need a decent working definition of ML, so here goes:
“Machine Learning is the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions.” This isn’t our definition, but it’s a pretty good one.
Why it’s a big deal
In short, ML is the science of getting computers to get smarter over time, on their own, like humans do. It is not exactly the same as Artificial Intelligence, but for our purposes, it’s close. There’s an old axiom about computers; they don’t do what you want them to; they do what you tell them to.
So it’s not hard to see why you might want a firewall that can think, spot patterns, and act without being specifically programmed. In other words, it does what you want, without having to be told. ML is simply a device harnessing data, observations, and interactions in order to correctly generalize to new settings.
Like Palo Alto says, “Don’t just react. Think ahead.” Reactive security can’t keep up with current threats — or prepare you for tomorrow’s. So rather than relying on signatures to identify threats, the PA-220 analyzes behaviors, then responds appropriately, and quickly.
The Palo Alto PA-220
The PA-220 is at Palo Alto’s entry point of Next-Gen firewalls. The PA-400 series are the next level up in performance. But the same software that runs all Palo Alto firewalls powers the PA-220. It’s called PAN-OS, and it natively classifies all traffic, inclusive of applications, threats, and content. It then ties that traffic to the user regardless of location or device type The application, content, and user (the elements that run your business) then serve as the basis of your security policies. This results in improved security posture and reduced response times.
The PA-220 is loaded with capabilities, as you would expect from any Palo Alto product. It identifies and categorizes all applications, on all ports, all the time, with full Layer 7 inspection. And it does so irrespective of of port, protocol, evasive techniques, or encryption.
It also prevents malicious activity concealed in encrypted traffic. It does this by inspecting and policing TLS/SSL-encrypted traffic, inbound and outbound. And, you will have deep visibility into TLS traffic. For example, you can see the amount of encrypted traffic, TLS/SSL versions, cipher suites and more, without decrypting.
And interestingly, the Palo Alto PA-220 performs networking, policy lookup, application / de-coding, and signature matching in a single pass. And that’s for all threats and content. This greatly reduces the amount of processing overhead required to perform multiple functions in one firewall. It avoids introducing latency by scanning traffic for all signatures in a single pass. It does this using stream-based, uniform signature matching.
And of course, the PA-220 benefits from centralized management, configuration, and visibility for multiple firewalls. Of course, this is irrespective of location or scale. There’s a lot more to be said about this firewall’s features, but you’ll just need to call us at 877-449-0458; we’d love to tell you more.
Total Firewall Throughput is 560 Mbps, and Threat Prevention Throughput is upwards of 300 Mbps. IPsec VPN Throughput is 570 Mbps. The PA-220 delivers 4,200 new sessions per second.
Palo Alto PA-220 At-a-Glance
|Total Firewall Throughput: 560 Mbps|
|IPSec VPN Throughput: 570 Mbps|
|Threat Prevention Throughput: 300 Mbps|
|Single-pass traffic scanning reduces latency| | <urn:uuid:77ab1407-b512-4a4c-a818-8ae4d9b339c0> | CC-MAIN-2022-40 | https://www.corporatearmor.com/uncategorized/palo-alto-pa-220-next-gen-firewall-write-up/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00797.warc.gz | en | 0.903303 | 848 | 2.515625 | 3 |
Augmented Reality is without a doubt one of the most prominent enabling technologies of the digital era. This is evident from the market reports as well as from the trends followed by major IT Service Providers who have placed AR in the foreground of their technology initiatives. For example, AR has been one of the main themes of the 2017 Facebook Developers conference, while Apple and Google have recently launched AR toolkits as a core part of their platforms.
Surprisingly AR is not a recently developed technology. Rather it has been around for over five decades. Back in the 60s the first AR like headsets appeared and were used to superimpose technical information over drawings as part of wireframes. Several decades later, the first web-based and mobile AR applications emerged, leading to killer apps such as the very popular Pokémon Go. The recent surge of interest in AR is largely due to the technological developments, which have improved the accuracy of AR technology, has provided access to more data for AR visualizations and reduced the cost of AR devices such as smart glasses and tablets.
In principle, AR involves the integration of physical world objects and digital items in cyber representations, which are the visualized in an AR-enabled device such as a Head Mounted Display (HMD). Hence, AR applications place virtual (digital) objects in the physical world aiming at creating exceptional experiences for end users. In particular, based on the superimposition of virtual images over real-world objects, AR applications create illusions that engage users in the virtual world. To this end, AR systems execute an overlay of virtual objects in conjunction with input received from a camera or other input devices (e.g., smart glasses).
Key to the operation of AR systems is the tracking of real-world objects, in order to make the superimposition of smart objects as accurate and realistic as possible. In particular, visual object tracking algorithms are employed in order to localize objects with respect to their surroundings and to map the structure of the environment accordingly. This requires the development and deployment of advanced algorithms that perform localization and mapping at the same time.
In several cases, the localization process of AR application is based on technologies and devices like GPS, digital compasses, and accelerometers. The devices provide continuous data about the location of the real-world objects and about the status of the AR visualization. Note that the above-listed devices are integral elements of state-of-the-art smartphones, which facilitates the implementation of popular location-aware AR applications, such as map-based guidance and other location-based services.
The close affiliation of AR with the objects’ tracking technologies draws a distinguishing line between real AR applications and simpler applications that just superimpose information and virtual objects without accurate tracking. Due to the hype surrounding AR, simple apps that provide virtual instructions over cyber-representations of real-world objects are characterized as augmented reality applications even though there are not.
AR systems include client applications that display the cyber-representations of real and virtual objects. The most typical example of such an application is an AR browser, which augments a conventional camera display with contextual data about the location and state of the various objects. Likewise, there are also 3D viewers, which display 3D models of the users’ surrounding environment.
Client applications are placed in different types of devices, the most popular ones being smart glasses and mainstream tablets or smartphones. The selection of a proper device is very important for the successful deployment of an AR application. Tablets and smartphones offer many advantages as they are quite cheap and ubiquitous i.e. every user is nowadays carrying a phone or tablet. Therefore, the use of AR applications in a tablet or smartphone entails a faster learning curve, when compared to other options where end-users have to become acquainted with the operation of an entirely new device type. Nevertheless, the use of smart glasses is in several cases advantageous over smartphones or tablets. As a prominent example, smart glasses are preferred in all applications where end-users have to perform tasks using both of their hands i.e. in cases where the hands-free operation of the AR application is essential. For instance, in several maintenance and repair applications, end-users take advantage of AR in order to view field instructions on a cyber-representation while using their hands to repair a machine or device. In such applications, the use of smart glasses is much more effective than the use of tablets or smartphones. On the downside, smart glasses remain much more expensive than tablets, despite their falling prices. Moreover, end-users have to invest in learning how to use and fully leverage smart glasses or HMD interfaces.
Read More: Industry 4.0: The Rise of Autonomous Industrial Plants
There are already many AR applications, while more are being developed and deployed every day. The following list presents some of the main sectors where AR is used, along with the relevant value proposition of AR technology:
Read More: Augmented Reality in Future Manufacturing
The breadth of AR applications and the active investments of all major Internet platforms on them are clear indicators that AR is here to stay. It is no longer a futuristic concept, but rather an available technology that can be used as a productive user engagement tool.
How Metaverse could change the business landscape
How education technology enables a bright future for learning
The Vision and the Technology Enablers of the Metaverse
Technology Innovations in Retail
The Present and Future of Multi-Experience Platforms
Significance of Customer Involvement in Agile Methodology
Quantum Computing for Business – Hype or Opportunity?
The emerging role of Autonomic Systems for Advanced IT Service Management
Why is Data Fabric gaining traction in Enterprise Data Management?
We're here to help!
No obligation quotes in 48 hours. Teams setup within 2 weeks.
If you are a Service Provider looking to register, please fill out this Information Request and someone will get in touch.
Outsource with Confidence to high quality Service Providers.
If you are a Service Provider looking to register, please fill out
this Information Request and someone will get in
Enter your email id and we'll send a link to reset your password to the address
we have for your account.
The IT Exchange service provider network is exclusive and by-invite. There is
no cost to get on-board;
if you are competent in your areas of focus, then you are welcome. As a part of this exclusive | <urn:uuid:9bd19be0-9f30-4961-bac8-aef3d432c984> | CC-MAIN-2022-40 | https://www.itexchangeweb.com/blog/trends-in-augmented-reality-separating-the-signal-from-the-noise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00797.warc.gz | en | 0.935043 | 1,319 | 3.234375 | 3 |
With billions of Bluetooth enabled devices being shipped every year, today’s enterprises need to protect sensitive data and mitigate ongoing security threats. While Bluetooth Low Energy brings a lot of potential for the IoT, as many other wireless technologies, it’s not immune from security threats such as device tracking, eavesdropping, and man in the middle attacks. BLE devices are designed to broadcast MAC, UUID and service information at a predefined interval. Due to continuous advertisement, hackers can easily track the device and decode the broadcasting information.
The main security issues with the pairing process and BLE, in general, are passive eavesdropping, man in the middle (MITM) attacks and identity tracking. Let’s explore these in a little more detail.
Passive eavesdropping is the process by which a third device “listens in” to the data being exchanged between the two paired devices. The way that BLE overcomes this threat is by encrypting the data being transferred using AES-CCM cryptography. Bluetooth Low Energy was designed with AES-128 encryption for security and is considered one of the most robust encryption types.
A security issue worth mentioning is man in the middle attack (MITM). In general terms, this refers to a hacker positioning themselves in a conversation between a user and an application to eavesdrop or to mimic one of the parties making it appear as if the exchange of information is normal. Think of it as a mailman opening up your bank statement or other personal mail containing sensitive information, writing down this information, resealing the envelope and placing it into your mailbox. To avoid such an attack, developers should use a combination of encryption and verification methods to secure their IoT applications.
A third security issue is identity tracking which occurs when an attacker can associate the address of a BLE device with a specific user and then physically track that user based upon the presence of the BLE device. One way BLE overcomes this issue is by periodically changing the device address.
What is a pairing process?
The pairing process can be defined as how two BLE devices exchange device information so that a secure link can be established. The process does vary between the BLE 4.2 devices and older 4.0 and 4.1 BLE devices. Below is a quick overview:
The pairing process for 4.0 and 4.1 devices, also known as LE Legacy Pairing, uses a custom key exchange protocol unique to the BLE standard. BLE 4.2 devices are fully backward compatible with BLE 4.0 and 4.1 devices which means that 4.2 devices are capable of performing the same pairing process as 4.0 and 4.1 devices. However, BLE 4.2 devices are also capable of creating what are known as LE Secure Connections. The security of this process depends greatly on the pairing method used to exchange a Temporary Key (TK) which we’ll describe below. The pairing process involves three phases (see Figure 1 below):
Phase one begins when the initiating device sends a “pairing request” to the other device. Essentially, this phase involves how two devices determine how they will set up a secure connection. All of the data exchanged during this phase is encrypted.
Phase two involves the devices exchanging and/or generating the TK using one of the pairing methods below.
Phase three is optional and typically used only when bonding requirements are exchanged in phase 1.
In LE Secure Connections, both phase one and phase three of the pairing process are the same as they are in LE Legacy connections, but there are slight differences in phase two of the pairing process. In phase two, both devices generate an Elliptic Curve Diffie-Hellman (ECDH) public-private key pair. The two devices will then exchange their public keys and then start computing the Diffie-Hellman key. One of the pairing methods is then used to authenticate the connection. Once the connection is authenticated, the long-term key (LTK) is generated and the connection is encrypted.
Now that we’ve covered the basics of the pairing process, let’s discuss the four common pairing methods for BLE 4.0, 4.1 and 4.2 devices.
This method is the default pairing method for most BLE networks. In BLE Legacy connections, the TK value that devices exchange during the second phase of pairing is set to 0, and devices generate the Short Term Key value based on that. However, this method is known to be highly insecure and offers little to no protection against attacks nor does it offer MITM protection. Ideally, Just Works can be used in cases where high levels of security are not required.
Out of Band (OOB) Pairing:
This method allows for some data packages to be transferred through a different wireless protocol. OOB pairing can be implemented during phase 2 of pairing so that any keys exchanged between devices are not transferred through a less secure BLE protocol. The main advantage of this method is that a very large TK can be used, up to 128 bits, greatly enhancing the security of the connection. As long as the OOB channel is immune to eavesdropping during the pairing process, then the BLE connection will also be immune from passive eavesdropping as well as from MITM attacks.
When one or more of the devices has an output and an input device, they can use Bluetooth Passkey Entry to pair. The TK in this method is a 6 digit number that is then passed between the devices by the user. The user must then enter the same number into the responding device. The ‘master’ and ‘slave’ device each create a 128-bit random number. Once the values have been exchanged and confirmed, then an encrypted channel is created between the two devices. Once successfully authenticated, the passkey method is generally considered to be secure from MITM attacks and can also offer good protection from passive eavesdropping.
This method is not available for Bluetooth LE Legacy pairing, only LE Secure Connections (Bluetooth 4.2 and above). Numeric Comparison requires that a display is available on both devices, for example pairing between a mobile phone and a laptop. A 6 digit key will appear on both devices which the user must manually check and verify. Once the key is confirmed and verified, this method protects from MITM attacks.
Over the years, Bluetooth technology has made considerable advancements and introduced new security methods to protect users. As developers look to implement BLE into their designs, they must understand the specific security threats facing BLE and how BLE’s security methods can mitigate these threats.
For today’s enterprises looking for a secure wireless connectivity solution, Cassia’s X1000, E1000 and S2000 provide enterprise-level security features such as Bluetooth 4.1 security standards as well as advanced 128-bit AES encryption to ensure data is safeguarded at all times. For a full comparison of Cassia’s gateways, check out the guide here.
Interested in a free demo? Book a call with our team here. | <urn:uuid:b102dba0-8c4d-4b5e-b5b7-e2e5cb7d0047> | CC-MAIN-2022-40 | https://www.cassianetworks.com/blog/bluetooth-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00797.warc.gz | en | 0.936694 | 1,469 | 2.921875 | 3 |
God save the Queen–or at least teach her how to code…
We’re all about teaching our young ones the ways of tech. Heck, we sponsored a lovely Webslam a month ago here in LA (WEBSLAAAAAAAAM!). That’s why we were super excited to find out that the United Kingdom has added computer sciences to their national curriculum. Fly that Union Jack high today, UK—you done good.
The future is coding, programming, and computer sciences. Let’s face it—as technology grows, the need for more coders and designers is more apparent than it has ever been, so that’s why the United Kingdom has added computer sciences to their national curriculum.
The push to bring computer sciences into the classroom started in 2012, with non-profit group Computing at School and other tech minds, pressuring the UK government to introduce computer sciences into the schools, right around the time that they were redoing their national curriculum. The UK government asked Computing at School to give them a rough outline of the skills that students would need to learn, while still allowing the teachers to teach in their own ways.
The problem is that many of the teachers in schools don’t know how to code or program. That’s why many UK schools are tapping coding startup Codecademy to develop the curriculum for the students. Codecademy—an NYC-based startup—offers free online courses in coding, and they just recently opened a new London office to better reach the UK schools. They partnered with Computing at School to offer supplemental online courses to the teachers who would be presenting the material, as well as bringing their own courses into the classroom. Currently, over 1,000 schools are looking to bring Codecademy into their classrooms this fall, complete with sample lesson plans and tools to track the progress in the students. In the meantime, the Department of Education got together with Computing at School to develop local support groups to teach some 20,000 teachers in the way of computer sciences.
Hopefully, this move in the UK to teach computer sciences has a positive influence on the rest of the world. Currently, many schools in the US only offer after-school programs in coding, so seeing how the UK handles the new curriculum could spur US schools to do the same. The one roadblock to having computer sciences being taught in US classrooms is that the government doesn’t control education like they do in the UK—the courses and curriculum offered by US schools is developed by the states and the school districts themselves.
But for now, good for the UK for realizing that technology is going to have a huge impact on our future, and for bringing a valuable skill to young students as they progress through their lives and education. | <urn:uuid:f37c02de-4f8e-4dbc-9bc4-2599aa14046d> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/uk-computing-sciences-codecademy | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00797.warc.gz | en | 0.972046 | 562 | 3 | 3 |
According to Microsoft, Multi-factor Authentication (MFA) can block over 99.9% of cyber attacks that aim to compromise your account password security. These accounts include email, banking, shopping, and others. Hacks can be devastating – fraudulent emails sent out, credit card and bank accounts compromised, identity theft, reputation damage, and more.
What is Authentication?
Authentication is the process of verifying the identity of a user. Factors are the methods of authentication. Specifically, there are three primary authentication factors:
- Knowledge or something you know, most often your user name and password or a PIN
- A physical object, or something you have, such as a smartcard or a token
- A physical characteristic or something you are, can incorporate biometrics such as a fingerprint, voice pattern recognition, or a face scan
In addition, there are some other authentication factors related to locations and mobile devices. However, these are the primary authentication factors people think of and use for access to common systems like work networks and email.
What is Multi-factor Authentication (MFA)?
MFA requires a user to authenticate using two or more factors, such as a smartcard and a PIN, or a password and a token. Using a password and then a PIN is not MFA because you are using only one factor: something you know. This does not provide password security.
How does MFA work?
If you have MFA set up on an account, you typically provide your user name and password and then are prompted to provide an additional factor. Sometimes this is “remembered” for 7-14 days on the browser or the application on your phone you are using. During that period of time you can log in without the second factor from the same computer, or you can remain logged in.
When should I use MFA?
Whenever you can! MFA is one of the best ways to protect your privacy and your data, and provide password security. For this reason, it is critical for any type of electronic financial transaction.
Should I use third-party apps?
MFA less of a hassle when you use third-party apps, such as Google Authenticator, Microsoft Authenticator or Duo. Third-party apps generates a random string of numbers on a rotating basis (every 30 seconds or so). Known as a token, this string of number changes every 30 seconds or so. Note that Authenticator apps can be set up on a smart phone or computer.
Links to common third-party authenticator apps:
We recommend using an authenticator app that is backed up and can transfer to new phones or computers. You can transfer LastPass Authenticator to a new phone or device. If you buy a new phone or computer, you’ll have to re-connect Google Authenticator to all of your accounts.
If MFA is so effective, why are people hesitant to use it?
First of all, people are often concerned that they won’t be able to access their accounts if they use MFA – that it will be too complicated. Secondly, they fear it may be burdensome, and the risk is worth only having to authenticate with one factor. Thirdly, they often haven’t thought through the risks involved with having someone nefariously send emails from their account or having a bank account compromised. Furthermore, most financial institutions require MFA. Even social media sites, such as Facebook or Twitter, can have a devastating impact if compromised. | <urn:uuid:9624aad6-e979-4d19-91f5-6e2fc82ac4ca> | CC-MAIN-2022-40 | https://beinetworks.com/multi-factor-authentication-mfa-a-must-do/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00797.warc.gz | en | 0.936086 | 718 | 3.5625 | 4 |
What is Multifactor Authentication?
As you know password are one of the most used security mechanisms to secure a device. We use different types of password to secure our systems. But passwords are vulnerable to attacks in todays network world. There must be additional security mechanisms. Multifactor Authentication is used for this purpose. So, what is Multifactor Authentication? This mechanism is basically a security system that uses more than one authentication method to identify the user. This mechanism works as a second or more barrier towards the attackers.
There are different mechanisms used for Multifactor Authentication. And Multifactor Authentication combines two or more of these mechanisms. These mechanims are given below:
- Information known by the user like passwords
- Information owned by the user like user like security tokens
- Information specific to user body like fingerprints
- Information about user location
- Information about user access time
With Multifactor authentication methods, different platforms involve to this network security job. For example, mobile phones or emails can be used for this authentication. A text message or an email can be sent to the user whenever he tries to login to the device. With this second step, emails and message can contain the access code. This mechanism is widely used in many areas today.
Beside using additional device, our body can also be used to identify and authenticate us. As you know, human body has identical parts like voice, fingerprints, retinas etc. By checking such biometric information, users can authenticate.
Why We Need Multifactor Authentication?
Multifactor Authentication is an additional barrier towards any attacker. With the traditional password security mechanism are not enough to secure our networks in today’s world. There are different vulnerabilities of this password protection.
As you know user information is stored in authentication databases. In other words, there are username and password lists in authentication servers. Storing all this information is not safe because if someone can reach these credentials, they can easily use these user credentials and access the systems. This is one of the vulnerabilities of this traditional password protection.
Another vulnerability of password protection is about password strings. Before, we were using weak passwords to access the network devices. Then, we have started to use strong passwords. These strong passwords can be enough to avoid any brute force attacks. But CPU capacities are increasing rapidly and this gives more capacity to brute force attacks. So, attackers can try millions of passwords per second.
Multifactor Authentication helps us to overcome these weaknesses. We can add additional security barriers beside password protection and by doing this, we can defend our network devices better.
Multifactor Authentication Factor
Multifactor Authentication Factor is the category of the credentials that can be checked during authentication. So, what are the Multifactor Authentication Factors?
These Factors are given below:
- Knowledge Factors
- Possession Factors
- Inherence Factors
- Location Factors
- Time Factors
Knowledge Factors are the factors related with user knowledge. This can be user password, security questions, security shape etc.
Possession Factors are the factors related with additional platforms. This can be a smart phone, your email, SMS etc. A second code is sent to one of these platforms.
Inherence Factors are the factors related with the body. This can be fingerprints, retina, iris, voice etc. With these identical parts, user can authenticate.
Location Factors are the factors related with your location. Location is determined by GPS. This allows authentication in specific locations.
Time Factors are the factors related with the access time. This allows authentication in specific times. | <urn:uuid:7769d8dd-a68a-4679-9502-1ff66bcfe9e1> | CC-MAIN-2022-40 | https://ipcisco.com/multifactor-authentication-mfa/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00797.warc.gz | en | 0.91238 | 737 | 3.75 | 4 |
The Jetson TX2 is the latest all-in-one computing module for processing artificial intelligence in the field instead of the cloud. Companies are using it to improve video conferencing quality and even help thin fields of lettuce.
Artificial intelligence and machine learning are transforming everything from self-driving cars to robots that can identify how fresh lettuce is, but their immense processing power requirements have mostly relegated them to data centers with massive banks of CPUs and GPUs.
So a few companies, including NVIDIA, are trying to bring AI processing to the drones, robots, and other devices that can most benefit from them. The graphics company on Tuesday unveiled its latest AI compute module, the Jetson TX2, which is designed to eliminate the need to send massive amounts of AI data to and from the cloud. | <urn:uuid:dd9debe5-47a2-4576-aaec-ba55439eeb28> | CC-MAIN-2022-40 | https://letsdovideo.com/nvidias-latest-ai-module-can-fix-video-conferencing-lettuce/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00197.warc.gz | en | 0.944575 | 161 | 2.515625 | 3 |
Energy efficiency is often the least-cost means of meeting new demand for energy services. Not only does it reduce overall energy consumption and thereby reduce dependency on energy imports, but it encourages development and creates jobs. Governments that promote investment in energy efficiency and implement supporting policies save citizens money, reduce the potential for crisis and conflict, and decrease pollution. In 2016 the world would have used 12% more energy had it not been for energy efficiency improvements since 2000. These energy efficiency gains allowed households across the world to spend 10 to 30% less than they otherwise would have on their annual energy bills in 2016.
The fourth edition of ACEEE’s International Energy Efficiency Scorecard examines the efficiency policies and performance of 25 of the world’s top energy-consuming countries. Together these nations represent 78% of all the energy consumed on the planet and more than 80% of the world’s gross domestic product (GDP) in 2014. ACEEE evaluated and scored each country’s efficiency efforts using 36 policy and performance metrics spread over four categories: buildings, industry, transportation, and overall national energy efficiency progress. ACEEE allocated 25 points to each of these four categories and awarded the maximum number of points for each metric to at least one country.
In a troubling development, the United States slid from 8th place in 2016 to 10th in 2018 by scoring six fewer points.
On a positive note, the most improved country this year is Mexico, which moved up from 19th place in 2016 to 12th this year by scoring 17 more points. Mexico’s recent adoption of an overarching energy efficiency program — the National Program for the Sustainable Use of Energy — has spurred significant investment in efficiency programs and standards. Additionally, Mexico sits just below the United States and Canada in the rankings this year, suggesting that the North American Free Trade Agreement (NAFTA) may be playing a role in harmonizing efforts among the three member countries.
ACEEE added two new countries to his analysis this year: Ukraine and the United Arab Emirates (UAE). Iran is also among the world’s largest energy consumers but is not included in this year’s report due to data limitations. ACEEE hopes to be able to include Iran in the 2020 edition of the report. | <urn:uuid:1badeba2-1a79-4f13-96bc-7183ec904f07> | CC-MAIN-2022-40 | https://www.ciocoverage.com/the-2018-international-energy-efficiency-scorecard/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00197.warc.gz | en | 0.949808 | 460 | 2.78125 | 3 |
New features in Learning Tools, currently available in OneNote along with Word desktop, are now rolling out to Word Online with modify views and include techniques that help people read more effectively, such as:
- Read Aloud reads text out loud with simultaneous highlighting which improves decoding, fluency, and comprehension while sustaining focus and attention.
- Spacing optimizes font spacing in a narrow column view to improve reading fluency for users who suffer from “visual crowding” issues.
- Syllables shows the breaks between syllables to improve word recognition and decoding.
How does this affect me?
Learning Tools is now available for Word Online in Office 365, providing an immersive reader that helps everyone on any device improve their reading skills; including those with dyslexia, dysgraphia, ADHD, and English Language Learners (ELL).
What do I need to do to prepare for this change?
There is nothing you need to do to prepare for this change. Please click Additional Information to learn more about these new features. | <urn:uuid:477d1737-a1d0-421f-b965-40c94e7cce8a> | CC-MAIN-2022-40 | https://www.jasperoosterveld.com/2017/02/office-365-update-learning-tools-for-word-online/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00197.warc.gz | en | 0.916515 | 211 | 2.875 | 3 |
What Are the Benefits of Using Multi-Factor Authentication (MFA)?
Password security has been an ongoing challenge for businesses ever since the first login credentials for business technology were created.
The age-old battle is between the need to have unique, strong passwords for every account and the fact that one person can’t reasonably remember all those strong passwords.
Depending upon company size, the average employee is expected to remember anywhere between 25 to 85 unique account username/password combinations.
The beneficiaries of this problem have been cybercriminals who are going after login credentials with a vengeance. This is especially true since so many businesses have switched to cloud workflows in accounts that are protected only by the strength of credential security.
One of the biggest threats to credential security is phishing. According to Proofpoint’s State of the Phish report, nearly 90% of organizations experienced phishing attacks in 2019 and phishing email volume has increased 67% year over year.
When faced with an assault on IT security, one of the tools in a company’s cybersecurity arsenal is multi-factor authentication (MFA).
How Does MFA Work?
MFA adds a secondary factor of authentication beyond the username and password combination.
There are typically three factors of authentication:
- What you know: Username & password
- What you have: A physical device that can receive a login code
- What you are: Biometrics, like fingerprint scanning
When you’re only using the first factor of authentication, it’s easy for a hacker to get their hands on that information. Business email addresses, which are typically used as usernames, are widely available. Passwords can be compromised by being weak and easily guessed, through hacking software, or due to a large-scale data breach of a database full of them.
What MFA does is add a requirement for another factor of authentication, which is the “what you have.” The most popular method is to register a device when setting up MFA that can receive a login code.
Once MFA is enabled, the login process will go something like this:
- Enter Username/Password
- Click to receive MFA code
- Receive time-sensitive code on your device
- Enter code to complete login and gain access
On platforms like Microsoft 365, MFA can be enabled for all users at once. The next time they login, they receive a prompt to setup their device for multi-factor authentication.
Benefits of Enabling MFA
Prevents Common Phishing Email Risk
Credential theft became the #1 type of phishing attack in 2019. Users can often get fooled by a phishing link that takes them to a page that looks identical to an account login they’re used to. But as soon as they enter their login, their account is breached.
Phishing email is more of a problem than many businesses may realize. Here at Cleartech Group, we see tickets for spam/phishing emails or email breaches due to phishing at least 2-3 times a week.
It takes a layered approach to prevent common phishing emails from causing account breaches. This includes using tools like Proofpoint to help protect email from malware threats. MFA is also an excellent protection.
If you’re using MFA, even if an employee is fooled by a phishing login page, the hacker can’t use those stolen credentials to access the account because they’ll be blocked by the MFA requirement for the secondary login code.
Solves the “Bad Password Habits” Problem
Employee IT security training can help reduce the number of bad password habits that your team has, but it’s still a prevalent issue even with training.
45% of employees admit to reusing passwords across multiple accounts, both work and personal. When passwords are reused over several accounts, it makes them easier to breach.
Using MFA offers an important protection against lax password practices by employees and protects business accounts even if the login password has been compromised.
Secures Cloud Platforms (Microsoft 365, G Suite, etc.)
Both Microsoft and Google recommend using MFA as one of the best ways to secure Microsoft 365 or G Suite accounts. Today, Central Massachusetts businesses have a large part of their data stored in cloud platforms like these, along with critical services like business email.
One account breach of an all-in-one cloud platform can impact several operational areas of a business. Multi-factor authentication adds an important layer of security to those accounts to significantly reduce IT security risk.
MFA is Proven to Be Extremely Effective at Preventing Breaches
Both Microsoft and Google have released study results showing the effectiveness of MFA.
According to Microsoft, MFA is 99.9% effective at preventing fraudulent sign-in attempts on cloud accounts.
Google’s data shows that use of multi-factor authentication can stop as many as 100% of automated bot attacks and 99% of bulk phishing attacks.
This makes MFA one of the cybersecurity solutions that has the highest level of effectiveness when it comes to protecting your account security.
Get Help Enabling MFA for Your Cloud Platforms
Cleartech Group can help your business enable MFA on platforms like Microsoft 365, G Suite, and others to keep your accounts secure.
Contact us today to discuss our cybersecurity options! Call us to chat at 978-466-1938 or reach out online. | <urn:uuid:3a61dd70-8a9f-4040-bafd-b664692809c9> | CC-MAIN-2022-40 | https://www.cleartechgroup.com/what-are-the-benefits-of-using-multi-factor-authentication-mfa/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00197.warc.gz | en | 0.936973 | 1,128 | 2.515625 | 3 |
The outbreak of the COVID-19 pandemic pushed the world into a reality few were prepared to face. With business operations shifting completely online and office workers logging on from their homes, few anticipated the accompanying surge in cyberattacks and cybercrime. On a secured office network, the IT department is responsible for keeping the network safe and for patching any vulnerabilities that appear in the organization's software ecosystem.
The most vulnerable chink in an organization's cybersecurity is the humans working there. People are susceptible to malicious emails and all too often click links that redirect them to phishing websites or download files that carry a hidden virus or trojan onto their computers. If even a single computer on a network is compromised, the hackers can then infiltrate the rest of the network and access sensitive company data, or simply lock people out using ransomware.
The Pandemic Boom
According to Interpol, since the COVID-19 outbreak cybercriminals have shifted their focus from small businesses and individuals to major corporations, large businesses, government departments, and even critical infrastructure such as healthcare. With nearly every organization forced into remote working, security risks have increased significantly. Employees are no longer using secure office computers but their own personal machines, which are more vulnerable and have not been vetted by cybersecurity teams. On top of that, they connect through personal internet connections and home wireless routers that provide only basic data security and traffic encryption, so it is quite easy for anyone willing to look around to find a path to sensitive data.
As opposed to the LAN in an office which is secured and regularly monitored by the cybersecurity team, home networks and systems are not secured and susceptible to phishing campaigns and downloading malicious from unknown links into their computer.
Cybercriminals have also been attacking the healthcare sector with ransomware. Healthcare institutions have become the target of choice for ransomware as they can lock out critical systems which can put the lives of hundreds and thousands of people at risk and thus the institutions are coerced to pay the extortion.
Organizations are now facing more than 20,000 vulnerabilities because of the widened network perimeter and must identify all flaws within the devices connected to their network and create a model to understand possible attack points so that they can be prepared. There has been a 50% increase in mobile vulnerabilities and a 72% increase in ransomware attacks. In the US alone, 800 to 1500 businesses were affected by ransomware attacks that were related to a single IT firm with the hackers then demanding $ 70 million for restoring the data and access. This further highlights the vulnerabilities and repercussions on a business ecosystem if even a single link in the chain is compromised.
The major driving reason behind the pandemic boom in cyberattacks is the theme of these cyberattacks. Hackers have realized that people are more susceptible to opening malicious links or downloading suspicious files if it seems to be related to COVID-19. The lack of information drives people to click on malicious links to learn more about the COVID-19 pandemic while their personal systems become host to viruses and remote access trojans which then spread from their computers while collecting sensitive data. Hackers use trustworthy names or official disguises and messages with subject lines related to COVID-19 to fool people into thinking that they are getting information from a legitimate website or file and thus are more prone to provide personal details or just download a malicious file onto their PC.
Upgrading Your Cybersecurity
Cybersecurity is neither cheap nor easy. We live in a world of constant technological innovation, but cybersecurity takes time to upscale as compared to GAAP technology (Government as a Platform). The major challenge is that of dealing with the increased phishing and malware threats. Organizations must actively discuss and create a policy to ensure data protection and cybersecurity measures for employees working remotely on their personal computers and mobile devices.
There is also the need for upgrading digital systems within an organization to limit the amount of access and data a remote working employee can access. There needs to a be suitable network and system architecture that can support remote working with enough scalability capabilities.
Switching to virtual space and remote working conditions exposed a lot of issues with the digital capabilities of many organizations and negatively impacted their ability to function and offer services. A complete overhaul or a significant upgrade to ensure operational capability during the pandemic, as well as future-proofing against any emergency circumstances, should be the focus for the cybersecurity industry and experts.
What Should the Governments do?
The COVID-19 pandemic has led to a sudden increase in digital transformation across sectors. Digital public services have become the need of the hour with more and more organizations finding and creating ways to offer their services digitally. Digital transformation of an economy is a great thing but with the parallel increase in cyberattacks, it is up to the governments to ensure the cybersecurity of public services and government data platforms as well GAAP-based services. More and more people are relying on the government to get their news, data, public services from the safety of their homes and it is the responsibility of the government to ensure data security and continuous services to the public without being held hostage by ransomware or losing sensitive data to RATs and viruses.
To combat such cybersecurity threats, governments must take these actions:
- Governments must develop national cybersecurity strategies and create a regulatory framework for cybersecurity initiatives. They must together with the private sector to create sound and robust cybersecurity measures and be agile with their implementation.
- Governments have been sharing data about the pandemic to help each other prepare for the possible challenges. This international cooperation needs to be extended to the sphere of cybersecurity so that private entities and governments around the world can function together to neutralize any future global threat in cyberspace.
- Governments must raise awareness of the possibility of cyberattacks. Citizens must be trained and taught about cybersecurity threats and the best ways to deal with them. Raising awareness and training people in digital technology can immediately fix the human vulnerability in the system.
- Governments must also look for data backup options as well as a secure avenue for hosting their services and storing their data. Cloud-based services offer security as they are not stored on databases of the organization and as such cannot be accessed by anyone and offer scalability options so that when your employees need to have remote access, the system has protocols in place to deal with the demand. | <urn:uuid:0905a373-c9d9-4cf4-9eb7-b4e493ca9b1f> | CC-MAIN-2022-40 | https://www.egovreview.com/article/cybersecurity/17/cybersecurity-post-covid-19-what-governments-must-do | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00197.warc.gz | en | 0.96389 | 1,297 | 2.6875 | 3 |
Our phones store a lot of personal data, including contacts, social media account details, and bank account logins. We use our smartphones for everything under the sun, from work-related communication to online shopping.
However, like computer viruses, our phones can be vulnerable to malware. Viruses are a type of malware that replicate themselves and spread throughout the entire system. They can affect your phone’s performance or, worse, compromise your sensitive information so that hackers can benefit monetarily.
In this article, we give you a rundown of viruses that can infect your phone and how you can identify and eliminate them. We also provide some tips for protecting your phone from viruses in the first place.
Can iPhone and Android devices get viruses?
iPhones and Android devices run on different operating systems. So, there are differences in the viruses that affect each type of mobile device and how resistant each operating system is to viruses.
Viruses have a harder time penetrating iOS because of its design (although iOS hacks can still happen). By restricting interactions between apps, Apple’s operating system limits the movement of an iPhone virus across the device. However, if you jailbreak your iPhone or iPad to unlock tweaks or install third-party apps, then the security restrictions set by Apple’s OS won’t work. This exposes iPhone users to vulnerabilities that cybercriminals can exploit.
While Android phones are also designed with cybersecurity in mind, their reliance on open-source code makes them an easier target for hackers. Android devices allow users to access third-party apps not available in the Google Play Store.
Main types of phone viruses
Cybercriminals today are sophisticated and can launch a variety of cyberattacks on your smartphone. Some viruses that can infect your phone include:
- Malware: Malware encompasses programs that steal your information or take control of your device without your permission.
- Adware: These are ads that can access information on your device if you click on them.
- Ransomware: These prevent you from accessing your phone again unless you pay a ransom to the hacker. The hacker may use personal data like your pictures as blackmail.
- Spyware: This tracks your browsing activity, then steals your data or affects your phone’s performance.
- Trojan: Aptly named, this type of virus hides inside an app to take control of or affect your phone and data.
How do phones get viruses?
Smartphones and computers get viruses in a similar way. The most common include:
- Clicking on links or attachments from unverified sources. These are most commonly distributed as emails and SMS.
- Clicking on seemingly innocent ads that take you to an unsecured webpage or download mobile malware to your device.
- Visiting suspicious websites, often by ignoring security warnings.
- Downloading malicious apps from an unverified source, usually outside the Apple App Store or Google Play Store.
- Connecting your phone to an unsecured internet connection like public Wi-Fi (McAfee offers a secure VPN that makes it safe to use unsecured Wi-Fi networks by encrypting your data.)
7 signs your phone has a virus
Now that you know how your phone could be the target of a virus, look out for these seven signs to determine if your device has been infected with malicious software.
You see random pop-up ads or new apps
Most pop-up ads don’t carry viruses but are only used as marketing tools. However, if you find yourself shutting pop-up ads more often than usual, it might indicate a virus on your phone.
Don’t open any apps in your library that you don’t remember installing. Instead, uninstall them immediately. These apps tend to carry malware that’s activated when the app is opened or used.
Your device feels physically hot
Your phone isn’t built to support malware. When you accidentally download apps that contain malware, the device has to work harder to continue functioning. In this case, your phone might be overheating.
Random messages are sent to your contacts
If your contacts receive unsolicited scam emails or messages on social media from your account, especially those containing suspicious links, a virus may have accessed your contact list. It’s best to let all the recipients know that your phone has been hacked so that they don’t download any malware themselves or forward those links to anybody else.
The device responds slowly
An unusually slow-performing device is a hint of suspicious activity on your phone. The device may slow down because it needs to work harder to support the downloaded virus. Alternatively, unfamiliar apps might be taking up storage space and running background tasks, causing your phone to run slowly.
You find fraudulent charges on your accounts
Be sure to follow up on charges on your credit card or transactions in your banking statements that you don’t recognize. It could be an unfamiliar app or malware making purchases through your account without your knowledge.
The phone uses excess data
A sudden rise in your data usage or phone bill can be suspicious. A virus might be running background processes or using your internet connection to transfer data out of your device for malicious purposes.
Your battery drains quickly
An unusually quick battery drain may also cause concern. Your phone will be trying to meet the energy requirements of the virus, so this problem is likely to persist for as long as the virus is on the device.
How can I check if my phone has a virus?
You may have an inkling that a virus is housed inside your phone, but the only way to be sure is to check.
An easy way to do this is by downloading a trustworthy antivirus app. The McAfee Mobile Security app scans for threats regularly and blocks them in real time. It prevents suspicious apps from attaching themselves to your phone and secures any public connections you might be using.
How to remove a virus from Android and iPhone
If you detect a virus on your iPhone or Android device, there are several things you can do.
- Download antivirus software like McAfee’s award-winning antivirus software or a mobile security app to help you locate existing viruses and malware. By identifying the exact problem, you know what to get rid of and how to protect your device in the future.
- Do a thorough sweep of your app library to make sure that whatever apps are on your phone were downloaded by you. Delete any apps that aren’t familiar.
- To protect your information, delete any sensitive text messages and clear history regularly from your mobile browsers. Empty the cache in your browsers and apps.
- In some instances, you may need to reboot your smartphone to its original factory settings. This can lead to data loss, so be sure to back up important documents to the cloud.
- Create strong passwords for all your accounts after cleaning up your phone. You can then protect your passwords using a password management system like McAfee True Key, which uses the most robust encryption algorithms available so only you have access to your information.
7 tips to protect your phone from viruses
It’s never too late to start caring for your phone. Follow these tips to stay safe online and help reduce the risk of your phone getting a virus.
- Only download an app from a trusted source, i.e., the app store or other verified stores. You should read app reviews and understand how the app intends to use your data.
- Set up strong, unique passwords for your accounts instead of using the same or similar passwords. This prevents a domino effect in case one of the accounts is compromised.
- Think twice before you click on a link. If you believe it looks suspicious, your gut is probably right! Avoid clicking on it until you have more information about its trustworthiness. These links can be found across messaging services and are often part of phishing scams.
- Clear your cache periodically. Scan your browsing history to get rid of any links that seem suspicious.
- Avoid saving login information on your browsers and log out when you’re not using a particular browser. Although this is a convenience trade-off, it’s harder for malware to access accounts you’re not logged into during the attack.
- Update your operating system and apps frequently. Regular updates build upon previous security features. Sometimes, these updates contain security patches created in response to specific threats in prior versions.
- Don’t give an app all the permissions it asks for. Instead, you can choose to give it access to certain data only when required. Minimizing an application’s access to your information keeps you safer.
Discover how McAfee Mobile Security keeps your phone safe
McAfee Mobile Security is committed to keeping your mobile phone secure, whether it’s an iPhone or Android device. In addition to regularly scanning your phone to track suspicious activity, our technology responds to threats in real time. Our comprehensive tools also secure your internet connections and let you browse peacefully. Using our app makes sure that your phone and data are protected at all times.
So, what are you waiting for? Download McAfee Mobile Security today!
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats. | <urn:uuid:f35ec18f-e1cf-4491-be8e-691972f132ac> | CC-MAIN-2022-40 | https://www.mcafee.com/blogs/mobile-security/7-signs-your-phone-has-a-virus-and-what-you-can-do/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00197.warc.gz | en | 0.913831 | 1,906 | 3.09375 | 3 |
Exasol runs on a cluster of powerful servers. Each server, or node, is connected to the cluster-internally through at least one private network, and is connected to at least one public network.
The building blocks of an Exasol Cluster are commodity Intel servers without any particular expensive components. SAS hard drives and Ethernet Cards are sufficient. Especially there is no need for an additional storage layer like a SAN.
For additional information on hardware and the minimum requirements for server hardware, refer to System Requirements section.
As a best practice the hard drives of Exasol Cluster nodes are being configured as RAID 1 pairs. Each cluster node holds four different areas on disk:
OS with 50 GB size containing CentOS Linux, EXAClusterOS and the Exasol database executables
Swap with 16 GB size
Data with 50 GB size containing Logfiles, Coredumps and BucketFS
Storage consuming the remaining capacity for the hard drives for the Data Volumes and Archive Volumes
The first three areas can be stored on dedicated disks in which case these disks are also configured in RAID 1 pairs, usually with a smaller size than those that contain the volumes. More common than having dedicated disks is having servers with only one type of disk. These are configured as hardware RAID 1 pairs. On top of that software RAID partitions are being striped across all disks to contain OS, Swap and Data partition.
Sample disk layout:
One cluster node has 8 physical disks in this example. These 8 disks are configured as hardware RAID 1 pairs, resulting in 4 usable “disks” (sda,sdb,sdc,sdd) that can tolerate the failure of any one physical hard drive.
On top of that, the 3 areas OS, Swap and Data are striped as software RAID 0 across all 4 “disks”. The storage part is not using software RAID.
Exasol 4+1 Cluster: Software Layers
This popular multi-node cluster will serve as example to illustrate the concepts explained. It is called 4+1 cluster because it has 4 Active nodes and 1 Reserve node. These 5 nodes have the same layers of software available. Upon cluster installation, the License Server copies these layers as tar-balls across the private network to the other nodes. The License Server is the only node in the cluster that boots from disk. Upon cluster start-up, it provides the required SW layers to the other cluster nodes. | <urn:uuid:b97abb18-f2aa-4f89-9df8-e99119cbe535> | CC-MAIN-2022-40 | https://docs.exasol.com/db/6.2/administration/on-premise/architecture/cluster_nodes.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00397.warc.gz | en | 0.911399 | 505 | 2.671875 | 3 |
This article is for women. All women and especially those who care about women. Every year, on March 8th the world dedicates a day to celebrate women. What is the real significance of this day? The very fact that we must have it reminds us of all that we are still not living in an equal gender world.
We still live in a patriarchal world surrounded by all kinds of prejudice and unacceptable violence towards women in almost every culture in the world. However, as has happened since the beginning of time, the younger generations are starting to question old practices and biases women have been following for years. And more importantly, they are changing behaviors of what is considered “normal.”
Women in technology
As a woman working in technology for the last 30 years, this field has always been considered a career for men. Women like Pearl I. Young, the first woman hired as a technical employee, at the National Advisory Committee for Aeronautics (NASA), or Ada Lovelace who is widely recognized as the first woman computer programmer, had to fight a lot, for us to have our place in the tech industry today and they will always be an inspiration for all of us.
Can you imagine graduating as one of only 4 or 5 women in a class full of men? It happened to me in my Technical High School, college, even my MBA and the Latu Sensu degree. This never stopped my drive to continue to learn and grow in the industry, never. When you have passion for what you do, there are very few things that can make you change your mind.
Of course, there have been times when our intelligence and knowledge has been questioned. We’ve all heard jokes or comments when we do our jobs, just for being a woman. Multiple times, our ideas have been underestimated or dismissed, but when the same solution is presented by a man the answer is always right. Unfortunately, almost every woman knows what this feels like.
What does Sorority mean in feminism?
This refers to female sisterhood. It is when a woman supports another woman or group of women, without judgments, only helping, complicity or alliance between each other.
This year’s campaign for International Women’s Day is about breaking biases and we have joined the campaign, to create safe and equal spaces for all women in the industry, their homes, schools, and workplaces.
Imagine a gender-equal world.
- A world free of bias, stereotypes, and discrimination.
- A world that is diverse, equitable, and inclusive.
- A world where difference is valued and celebrated.
- Together we can forge women’s equality.
- Collectively we can all #BreakTheBias.
Over time we have learned to deal with this. We have adapted our ways of speaking and standing up for ourselves when there is an injustice or in a violent situation. We were taught to be quiet and accept whatever the world had done to us because that was what we were here for, cook, have kids, and stay at home. Now, we have learned to be confident, without losing our feminine essence.
Yes, we can use red lipsticks, have red nails, long hair, and high heels if we like and feel comfortable. The way we dress or use our makeup to feel beautiful does not prove that we are less capable or qualified for our jobs. We do not need to look like a man to be respected. And never, never lose our kind and soft way of treating people.
No matter what profession you choose, it must be something that makes you happy and strive for it every day. The world is changing, and all women can do anything they want. There is no one profession for a man or a woman. There is only the passion to do what we love. | <urn:uuid:cc5d6e19-8a92-4acb-8f4c-7a5372eed759> | CC-MAIN-2022-40 | https://edgeuno.com/articles/international-womens-day/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00397.warc.gz | en | 0.96726 | 781 | 3.1875 | 3 |
A new study found 1 in 12 deaths can be prevented with 30 minutes of physical activity 5 days a week.
The study analyzed information from more than 130,000 people ages 35 to 70 in 17 countries. The researchers examined whether participants met physical activity guidelines, which recommend that people get 150 minutes of moderate activity per week, or 30 minutes a day. The researchers considered not only leisure-time physical activity, but also non-leisure-time activity, including activity done at work.
The scientists found that people who met physical activity guidelines about 30% less likely to die during the study period, compared to those who did not meet physical activity guidelines. Those who met the guidelines 20% less likely to develop heart disease than those who didn’t meet the guidelines.
With these results, the researchers estimated 8% of deaths and 5% of heart disease cases worldwide prevent over seven years.
Meeting physical activity guidelines by walking for as little as 30 minutes most days of the week has a substantial benefit, said, Scott Lear, a researcher at St. Paul’s Hospital in Vancouver, Canada. Physical activity represents a low-cost approach to preventing cardiovascular disease, and provides robust evidence to support public health actions to increase all forms of physical activity.
Previous studies looked mainly at people living in high-income countries, and focused on leisure-time activity. But it wasn’t known whether findings in high-income countries would apply to middle and low-income countries, where leisure-time activity is much less common.
The new study included people living in a mix of high, middle, and low-income countries. None of the participants had heart disease at the start of the study. They followed for about seven years to see whether they developed heart disease or died.
Overall, about 18% of participants did not meet physical activity guidelines. Among this group, 6.4% died during the study period, compared with 4.2% met the guidelines. In addition, 5.1% of those who did not meet the guidelines developed heart disease during the study, compared with 3.8% of those who met the guidelines.
The study said, the more physical activity people engaged in the lower their risks of death and heart disease. Many people who engaged in high levels of physical activity did so primarily though non-leisure-time activities.
These findings realize the full benefits of physical activity, it needs to incorporate into daily life.
More information: [THE LANCET] | <urn:uuid:ea237538-22bc-4fde-89e4-92bf0f8accd1> | CC-MAIN-2022-40 | https://areflect.com/2017/09/22/30-minutes-of-daily-physical-activity-could-prevent-early-deaths/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00397.warc.gz | en | 0.949025 | 515 | 3.296875 | 3 |
The spinach on the grocery store shelf is bright green and looks delicious, but how do you know it is actually safe for you to eat? What if your retailer could see and validate with certainty where that spinach was grown, handled, stored and inspected, as well as every stop it made on its journey to the store? This is information your retailer can have access to with a shared, distributed ledger technology called blockchain. Blockchain-based solutions have the potential to transform the food industry by directly connecting growers, processors, distributors, suppliers, retailers and regulators with a shared, immutable view of their transaction history.
Challenges of the food supply chain
While the globalization of food production and trade has opened up access to food and increased consumer options, it’s also made the supply chain longer and more complex. The journey from farm to fork has become less transparent. Even when growers, suppliers and retailers are fully committed to implementing strong food safety measures, this increasing lack of transparency can make it difficult to quickly diagnose and address issues, including initiating recalls if they are needed.
It can be difficult to locate the cause of a problem because participants in the food supply chain usually keep their own records (some in more detail than others) and are only required at most to share “one up, one down.” These records are frequently paper-based and vulnerable to inaccurate updating. When an issue occurs, inadequate track and trace capabilities may cause investigation delays that can be costly. If an origination point can’t be identified, a particular type of food could be completely banned from store shelves when only a single batch was affected and needed to be pulled. Regulators, growers, processors, distributors, suppliers, retailers and consumers would all benefit if delay and waste could be reduced.
Transforming the food supply chain with blockchain
If farm origination details, batch numbers, processing data, expiration dates and shipping details can be digitally recorded on blockchain, it may become possible to verify the history, location and status of a food product. This end-to-end traceability would improve transparency and efficiency throughout the food supply chain. For example, a retailer becomes aware of an issue with some of the spinach on its shelves. If the condition of that spinach was digitally recorded on the blockchain during its journey from farm to fork, network participants could view the entire history of that spinach to quickly triangulate on the root of the problem. If necessary, a selective recall of spinach from that specific batch could then be executed in an efficient manner.
Solving the challenges in the food industry will require participants in the food supply chain to work together. Recently announced, IBM’s collaboration with food suppliers and retailers such as Dole, Driscoll’s, Golden State Foods, Kroger, McCormick and Company, McLane Company, Nestlé, Tyson Foods, Unilever and Walmart brings together players across the food supply chain to tackle challenges that can result from a lack of transparency. The food safety collaboration will enable network participants to exchange data across a immutable network built on blockchain technology for enhanced visibility and accountability throughout the food supply chain.
Accelerating the future of food safety
Food is a huge, multi-trillion-dollar industry, so implementing a blockchain-based food safety system across the industry won’t happen in a single day. However, the food industry could change faster than you might expect. By focusing on shared value and leveraging insight from collaborators to address different types of needs among various constituents, we can drive adoption to reach critical mass.
We are also able to greatly accelerate the activation of this solution because of the new IBM Blockchain Platform, which is a fully enterprise-ready blockchain platform designed to accelerate the development, governance and operation of business networks. The platform offers a development sandbox and interactive playground to convert business designs into code, democratic governance and management tools to speed the formation and growth of multi-organization networks, and helps enable you to deploy and scale those networks with the necessary security and performance.
Blockchain is already beginning to improve the amount of collaboration, trust and transparency across the food industry, and over time will increase the traceability and safety of food products. Connect with IBM Blockchain and our ecosystem to explore how you can disrupt your industry.
Responsible practices to preserve our planet require innovation, agility, and collaboration. Consumers, investors, producers, and governments around the world are choosing to do business with those that demonstrate a commitment to sustainability. In the mining sector, British Columbia is committed to increased transparency and trust related to where products come from and how they are […]
What excites me the most about being part of the team at IBM is the work we do for our clients that truly makes a difference in individual lives and provides for smarter and safer interactions with each other and our planet. The urgency to reopen all areas of the economy safely as we navigate […]
IBM has a strong heritage in social responsibility. Our technical and industry professionals across business units and research divisions develop new ways of helping to solve difficult environmental problems based upon data and today’s exponential information technologies — including AI, automation, IoT and blockchain, which also have the power to change business models, reinvent processes, […] | <urn:uuid:8177b17e-bc5f-49b6-8fc9-bcacdcef4e17> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/blockchain/2017/09/improving-confidence-in-food-safety-with-ibm-blockchain/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00397.warc.gz | en | 0.949772 | 1,058 | 2.78125 | 3 |
Increasingly powerful and inexpensive computers, advanced machine-learning algorithms, and the explosive growth of big data have enabled us to extract insights from all that data and turn them into valuable predictions.
Copyright by blogs.wsj.com
But the prominence of algorithmic methods has led to concerns regarding their overall fairness in the treatment of those whose behavior they’re predicting, such as whether the algorithms systematically discriminate against individuals with a common ethnicity or religion.
These concerns have been present whenever we make important decisions. What’s new is the much, much larger scale at which we now rely on algorithms to help us decide. Human errors that may have once been idiosyncratic may now become systematic.
“Artificial intelligence is the pursuit of machines that are able to act purposefully to make decisions towards the pursuit of goals,” wrote Harvard University Professor David Parkes in ” A Responsibility to Judge Carefully in the Era of Decision Machines ,” an essay recently published as part of Harvard’s Digital Initiative .
“Machines need to be able to predict to decide, but decision making requires much more,” he wrote. “Decision making requires bringing together and reconciling multiple points of view. Decision making requires leadership in advocating and explaining a path forward. Decision making requires dialogue.”
Given the widespread role of predictions in business, government and everyday life, AI is already having a major impact on many human activities. As was previously the case with arithmetic, communications and access to information, we will be able to use predictions in all kinds of new applications. Over time, we’ll discover that lots of tasks can be reframed as prediction problems.
But, “[it’s] decisions, not predictions, that have consequences,” Mr. Parkes wrote. “If the narrative of the present is one of managers who are valued for showing judgment in decision making…then the narrative of the future will be one in which we are valued for our ability to judge and shape the decision-making capabilities of machines. “
The academic community is starting to pay attention to these very important and difficult questions underlying the shift from predictions to decisions. Last year Mr. Parkes was co-organizer of a workshop on Algorithmic and Economic Perspectives on Fairness. The workshop brought together researchers with backgrounds in algorithmic decision making, machine learning and data science with policy makers, legal experts, economists, and business leaders.
Workshop participants were asked to identify and frame what they felt were the most pressing issues to ensure fairness in an increasingly data- and algorithmic-driven world. Let me summarize some of the key issues they came up with as well as questions to be further investigated.
Decision Making and Algorithms. It’s not enough to focus on the fairness of algorithms because their output is just one of the inputs to a human decision maker. This raises a number of important questions: How do human decision makers interpret and integrate the output of algorithms? When they deviate from the algorithmic recommendation, is it in a systematic way? And which aspects of a decision process should be handled by an algorithm and which by a human to achieve fair outcomes?
Assessing Outcomes. It’s very difficult to measure the impact of an algorithm on a decision because of indirect effects and feedback loops. Therefore, it’s very important to monitor and evaluate actual outcomes. Can we properly understand the reasons behind an algorithmic recommendation? How can we design automated systems that will do appropriate exploration in order to provide robust performance in changing environments?
Regulation and Monitoring. Poorly designed regulations may be harmful to the individuals they’re intended to protect as well as being costly to implement for firms. That means it’s important to specify the precise way in which compliance will be monitored. How should recommendation systems be designed to provide users with more control? Could the regulation of algorithms lead to firms abandoning algorithms in favor of less inspectable forms of decision-making?
Educational and Workforce Implications. The study of fairness considerations as they relate to algorithmic systems is a fairly new area. It’s thus important to understand the effect of different kinds of training on how well people will interact with AI-based decisions, as well as the management and governance structure for AI-based decisions. Are managers (or judges) who have some technical training more likely to use machine-learning-based recommendations? What should software engineers learn about the ethical implications of their technologies? What’s the relationship between domain and technical expertise in thinking about these issues?
Algorithm Research. Algorithm design is a well-established area of research within computer science. At the same time, fairness questions are inherently complex and multifaceted and incredibly important to get right. How can we promote cross-field collaborations between researchers with domain expertise (moral philosophy, economics, sociology, legal scholarship) and those with technical expertise? | <urn:uuid:7fb9ff6e-c5e5-41a3-a99a-6001e41322de> | CC-MAIN-2022-40 | https://swisscognitive.ch/2020/03/29/the-coming-era-of-decision-machines/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00397.warc.gz | en | 0.954175 | 1,016 | 3.03125 | 3 |
NASA is slated to launch two small-satellite missions valued at $140 million combined as part of the agency’s Heliophysics Solar Terrestrial Probes initiative in 2025.
The rideshare missions will launch along with NASA’s Interstellar Mapping and Acceleration Probe and are meant to support research into the Earth’s exosphere as well as propulsion technologies driven by solar radiation, the agency said Friday.
The first mission, known as Global Lyman-alpha Imagers of the Dynamic Exosphere (GLIDE), is valued at $75 million and will focus on tracking hydrogen-emitted ultraviolet light in the region between the Earth’s atmosphere and outer space.
Solar Cruiser, the second probe, is a technology demonstration effort worth $65 million that is aimed at evaluating the capacity of solar photons to support spacecraft built to forecast solar storms.
NASA also allocated funding for the /Spectral Imaging of Heliospheric Lyman Alpha (SIHLA) mission of opportunity which will involve mapping the sky to study the boundary between the heliosphere and heliopause.
SIHLA will receive a final decision on STP rideshare participation at a later date. | <urn:uuid:9be1283d-b997-4cad-9d4d-8b63f744fba7> | CC-MAIN-2022-40 | https://executivegov.com/2020/12/nasa-to-launch-smallsat-missions-totaling-140m-for-solar-probe-initiative/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00397.warc.gz | en | 0.938878 | 244 | 2.6875 | 3 |
Google AI Quantum Engineer Uses New Approach to Multiplication to Open Door to Quantum Computing
(QuantaMagazine) Quantum computers theoretically can do anything that a classical computer can. In practice, however, the quantumness in a quantum computer makes it nearly impossible to efficiently run some of the most important classical algorithms.
In April, a software engineer at Google AI Quantum in Santa Barbara, California, described a quantum version of a classical algorithm for quickly multiplying very large numbers. Classical computers have been running this algorithm for a long time. Before Gidney’s paper it was unclear whether it would be possible to retrofit it for quantum machines.
The multiplication algorithm is part of a class of nearly ubiquitous algorithms in computer science. Gidney expects that his new technique will allow quantum computers to implement this class of algorithms, which until now appeared to be too cumbersome to be used in a quantum machine.
Gridley used the Karatsuba method developed in 1960 by a Russian mathematician Anatoly Karatsuba. His method involves splitting long numbers into shorter numbers. To multiply two eight-digit numbers, for example, you would first split each into two four-digit numbers, then split each of these into two-digit numbers. You then do some operations on all the two-digit numbers and reconstitute the results into a final product. For multiplication involving large numbers, the Karatsuba method takes far fewer steps than the grade-school method.
Gidney described a quantum way of implementing Karatsuba multiplication that doesn’t impose huge memory costs. Gidney expects that his method will work for adapting many classical recursive algorithms to quantum computers. | <urn:uuid:28f2ef1f-f712-4b29-a6d2-de26752be8c6> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/google-ai-quantum-engineer-uses-new-approach-multiplication-open-door-quantum-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00397.warc.gz | en | 0.930044 | 336 | 3.140625 | 3 |
9. Multimedia Communications – DTI Multimedia Communications on the Move
1. Origins of GSM – Technology
In 1984 France and Germany agreed to run joint R&D trials of emerging digital cellular radio technologies. It was essentially a competition between leading French and German technology companies to have their digital mobile technology chosen by the state monopoly mobile operators in France and Germany for a future digital cellular radio service.
In 1985 the agreement was extended to include Italy.
Below is a copy of the official tripartite co-operation agreement between France, Germany and Italy:
In 1986 the UK joined this agreement and it became the Quadrapartite agreement on Digital Cellular radio. A fourth annex was added to the agreement listing the UK R&D activities.
The intentions of the agreement was to coordinate of R&D activities and operational plans. The reality was it was like an onion with three layers. The inner layer was an 18m ECU Franco-German R&D collaboration. The second layer embraced Italy…a junior partner in technology terms but very enthusiastic to see a common European standard. The third layer was the UK …willing to see a common technical standard… providing it was the one the UK mobile operators felt able to support and in a far trickier situation to commit the UK to rolling out the new digital mobile networks…since this was no longer a decision for the UK Government.
The technology differences between France and Germany on the one hand (more development than research) and Italy on the other (more research centric) becomes clear from two of the annexes attached to the agreement. A copy of the two annexes is attached below:
2. Origins of GSM – Intellectual Property Rights
All the grief that was to hit ETSI (and most European suppliers) on Intellectual Property Rights flowed from a single page in the third annex to the France/German/Italian co-operation agreement. It required any Essential Patents to be available to all European suppliers “free of charge”. An Essential Patent was one that would be infringed unavoidably if a mobile radio was to be compliant with the technical standard.
A copy of this third annex is attached below:
This was later to lead to the mother of all IPR battles in the European Telecommunications Standards Institute when ETSI came to decide its rules on IPR.
Ranged on one side were all the European Telecommunications Monopoly operators and their traditional suppliers. Ranges on the other side were IBM, DEC and Motorola. It was a clash of history and culture.
All the big monopoly telephone companies had national R&D laboratories producing dazzling innovation. But they made their money from telephone services protected by state monopolies. So the tradition was to licence out any of their IPR free of charge to their traditional suppliers. This was not just a European thing. ATT invented the transistor and gave away their IPR on this mother of all electronic inventions to mankind – free of charge.
On the other hand IBM, DEC (in computers) and Motorola (in private mobile radio) made their money selling kit and they secured their monopoly position (where they could) through patent protection laws. The fact that some of their patents were essential to a public telephone service was simply a bit of good luck they were entitled to…and giving it away free of charge was not going to happen…they were not even willing to be forced to licence key IPR on fair and reasonable terms.
It was a clash of public policy and private interests and between the interests of manufacturers and service companies. Today these IPR wars still rumble on – the result of these yet unresolved conflicts.
3. Origins of GSM – An Agreed Standard
GSM met in Madeira in February 1987. The results from all the trials had been gathered in and processed. A decision had to be made on which technology to select to become the basis of the GSM technical standard. But GSM failed to find a common agreement. On the one side were France and Germany backing a wide-band TDMA solution. On the other hand were most other countries preferring a narrow-band TDMA approach.
It was also evident that there were sharp divisions of view on which version of narrow-band TDMA should be selected. On the face of it the GSM Madeira meeting looked set to fail and this is what most of the world concluded at the end of the meeting.
But the GSM Madeira meeting achieved a hidden success. It managed to agree a set of working assumptions for a narrow-band TDMA solution that was not only agreed by the narrow-band TDMA camp but was also supported by France and Germany (without prejudice to their preference for wide-band TDMA).
This breakthrough resulted from intensive back-room discussions and set down on some scruffy pieces of paper. This comprised a summary document describing the nature of the agreement, the specification for base stations and the specification for mobiles.
Below is a copy of those scruffy pieces of paper for specification for the GSM mobile (the set of the initial working assumptions)… the draft of the world’s first recorded GSM mobile specification. The document is now held by the UK Science Museum.
4. Origins of GSM – Political
GSM was a European project that sat across the European Union (or Community as it was then called) and the European Free Trade Area. Both were represented in CEPT. The party absent from CEPT (formally speaking) was the European Commission. So GSM was not the usual type of EU political project coming out of Brussels. The political decision was the result of an inter-governmental partnership between France, Germany, Italy and the UK. It flowed from the digital cellular cooperation agreement mentioned above. The critical meeting was hosted by the German Minister in Bonn in May 1987. Below is a copy of the declaration agreed at this game-changing meeting between the big-4 EU (or CEPT) governments represented. The document is now held by the UK Science Museum.
It sent a very strong political message that there would be a single standard supported across Europe, it was specific on the technology to be used and committed each of the four governments to take all necessary measures to ensure GSM services opened by 1991 in their respective countries.
The most critical of these measures was securing the investments from the mobile network operators. Towards this end the declaration called on Officials from the four countries to draw up a Memorandum of Understanding to be ready by September 1987 for all mobile operators in CEPT to sign. It was European inter-governmental leadership at its best.
Original of the UK Government’s copy of the Bonn Declaration on display in the Information Gallery of the Science Museum in London
That is not to say the European Commission had no role in the success of GSM. They tabled a Directive that ensured the spectrum was set aside for GSM (that might otherwise be usurped by market pressures in some countries), played a strong role in the GSM type approval and were instrumental in propelling the duopoly GSM network competition model across the entire EU.
5. Origins of GSM – Networks and Services
The GSM Memorandum of Understanding was perhaps the most important document in mobile phone history. It was the decisive means to secure not only the investment in GSM networks in every European country but to do it on a scale, scope and time-scale to shock an entire industrial eco-system into life…to create in fact an entirely new industry almost overnight. There is only one original copy of the GSM Memorandum of Understanding. This is held at the HQ of the GSMA. The UK DTI Official who drew up the GSM MOU took a copy in Copenhagen after it had been signed and before it being handed over to the German Administration who provided the first Chairman of the signatories of the GSM MoU (which later became the GSMA). Below is the copy of the original took away from Copenhagen by the author. The actual original is held by the GSMA Association at its headquarters.
5. GSM MoU
The GSM Memorandum of Understanding had an annex setting out Network Implementation Phases and Related Milestones. A copy of this Annex is set out below:
Every mobile operator signing the Memorandum of Understanding did so on a unique page that was added to the MoU. This was in the spirit of a very open agreement…a spirit where Europe was willing to share the benefits of this common effort with every mobile operator in every part of the world.
Thirteen mobile network operators from twelve countries signed the MoU on the 7th September 1987. Copies of these original signatory pages are given below:
In the meeting on the 7th September when the MOU was formally discussed there were indications from 11 countries that their operators would sign the MOU. Over the lunch break Portugal got authorisation to sign…making 12 countries by the end of the day. Around a week later Spain then signed. A purely voluntary GSM Memorandum of Understanding had catalysed agreement right across Europe… that was eventually to transform mobile radio globally.
6. Origins of GSM – Personal Communications
In the 1970′s and 80′s national mobile networks were rolled out on the basis of the fewest possible base stations to provide national coverage – which meant towers on top of hills or high buildings. GSM was conceived in this era where mobile phones were either car phones or huge heavy portable devices. In these early years of cellular radio the portable mobile phone was inconsequential…they only worked when they were very close to a base station. In 1986 around 85% of UK mobile customers only had car phones. That percentage was much higher in France, Germany and Italy.
There were plenty of visionaries around that dreamed that one day every wire-line telephone in every home would be replaced by a small mobile radio telephone.
The most practical way of achieving this vision (from the perspective of the early 80′s) looked to be the cordless phone. Japan led the way with its PHS service and by 1988 there was wide interest in a European version of this. In the UK it was called Telepoint based on a 900MHz cordless phone called CT2.
The use of national cellular radio networks for wire-line substitution for a mass consumer market did not look a practical or economic proposition.
This conventional wisdom was to be dramatically changed in 1989. The UK Department of Trade and Industry published a seminal consultation document called “Phones on the Move”. A copy of this document is below:
The document broke new ground. It was the first time a government gave a very strong steer that personal communications should be based upon GSM technology and not cordless phone technology. Underpinning this was the fact that it was also the first time any government proposed to open up 1800 MHz for cellular radio services. At 1800 MHz transmission distances are much shorter…so that more dense networks had to be rolled out. The beneficial side effect of this was to significantly widen the area over which small mobiles would work successfully…a factor enhanced by licencing a third and fourth competitive mobile operator.
The decision to open up the bands at 1800 MHz in Europe had another very positive consequence. It allowed GSM to be adopted in the USA (at 1900 MHz) and was to lead to seamless roaming across the Atlantic…allowing the G in GSM to truly mean Global.
7. Origins of GSM – Radio Spectrum
Spectrum is the vital raw material for all radio services. New spectrum has always been critical in getting new networks into service. It often creates the opportunity. Since the very early days of Marconi most of that new spectrum has come from re-farming spectrum from an obsolete service. In the case of GSM the dawn of the spectrum opportunity was some NATO military radio services that were being phased out that used spectrum at 900 MHz.
Below is a copy of a UK Home Office internal minutes between top Civil Servants on what to advise Ministers to do with this new spectrum.
It is a classic document in three respects. First, it acknowledges the case for this new spectrum being given over to mobile services and notes the possibility that this view might be shared by other countries in Europe. Second it confronts the BT monopoly and mentions (for the very first time) emerging political forces wanting competition to be introduced into public mobile radio. The Home Office sees all sorts of problems with introducing mobile network competition – and this offers an important historical insight into how how alien mobile network competition was viewed by officials right across Europe.
The third thing the document capture is how radio spectrum was managed (behind closed doors) prior to the arrival in Europe of spectrum auctions and independent regulators…it also shows the enormous efforts made by the radio spectrum managers of the late 1970′s to be ahead of the curve. Here is a document dated 1981 looking ahead to the likely national mobiles needs through to 1995…and not being that far adrift.
(Back to the Top)
8. First GSM Mobiles to be type approved
In 1991 the first GSM networks were ready but there were no GSM mobiles that had been type approved and ready to supply to customers. For the established mobile operators with analogue networks they could afford to wait patiently for the manufacturers and type-approval authorities to sort things out. But new entrants like Mannesmann in Germany a GSM network with no GSM mobiles to sell was a disaster. The manufacturers finally came through with products in early 1992 but the next hold-up was that the type-approval test equipment from Rohde & Schwartz was late. It was much talked about at the second GSM Congress in Cannes. George Schmitt toured the exhibition and conference giving out GSM badges but his exasperated interpretation of GSM was God Send Mobiles…Finally the European administrations got together and agreed an Interim type-approval arrangement. National authorities faxed the GSM MoU office with GSM mobiles approved under these arrangements and the MoU staff acted as an information clearing house. The list below is not the original. It was compiled by the UK approval authority BABT from a list the GSMA consolidated in 1999.
9. Origins of the DTI 3G Multimedia Vision
The DTI issued a consultation document in July 1997 that set out the Government’s plans for the award of 3G licences in the hope that those who were successful would be well placed to take a world lead in the development of 3G standards and multimedia services and to participate in the 3G licensing opportunities expected to arise around the world.
9 Multimedia Communications on the Move
THE 3G OUTCOME DID NOT QUITE MATCH THE DTI’s EXPECTATIONS – THE MOBILE INDUSTRY WERE HIT BY A CALAMITY THAT MADE GOVERNMENTS VERY RICH …click on…INSIDE THE GREATEST 3G AUCTION ON EARTH | <urn:uuid:163d6d7f-5081-4a24-9f56-dd87fd603b30> | CC-MAIN-2022-40 | http://www.gsmhistory.com/rare-gsm-documents/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00397.warc.gz | en | 0.972128 | 3,065 | 3.109375 | 3 |
Table of Contents
- Development of KMIP
- What is KMIP?
- How does KMIP work?
- KMIP Profile Version 2.1
- Benefits of KMIP
Encryption is one of the best options for an organization's data security, which is why almost every business now uses it to protect their data. However, managing the encryption keys behind that protection remains a challenge for the vast majority of organizations.
Implementing the Key Management Interoperability Protocol (KMIP) is an effective way to handle situations where keys and key management messages must be exchanged between different key management servers and clients. It allows that data to be exchanged in an interoperable manner between different management servers and client systems, regardless of vendor.
Development of KMIP
KMIP was developed by OASIS (the Organization for the Advancement of Structured Information Standards).
The primary purpose of KMIP was to define an interoperable protocol for exchanging keys and key management messages between the various servers that hold keys and the consumers that use them. It was first utilized in the storage sector to exchange key management messages between archival storage systems and the key management server. However, security concerns grew over time, requiring better encryption and a centralized key management system capable of uniting all the moving parts inside an organization.
What is KMIP?
KMIP is a protocol that allows key management systems and cryptographically enabled applications, such as email, databases, and storage devices, to communicate. KMIP streamlines the management of cryptographic keys for organizations, removing the need for redundant, incompatible key management systems.
KMIP is an extensible communication protocol that defines message formats for manipulating cryptographic keys on a key management server. KMIP makes data encryption easier by simplifying encryption key management. On a server, keys can be generated and subsequently retrieved, sometimes wrapped or encrypted by another key. It also supports various cryptographic objects such as symmetric and asymmetric keys, shared secrets, authentication tokens, and digital certificates. Clients can also ask a server to encrypt or decrypt data without directly accessing the key.
The key management interoperability standard can support both legacy systems and new cryptographic applications. In addition, the standard protocol makes it easier to manage the cryptographic key lifecycle, including generation, submission, retrieval, and termination.
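To make that lifecycle concrete, the sketch below uses the open-source PyKMIP client library to create, retrieve, and destroy an AES key on a KMIP-compliant server. The hostname, port, and certificate paths are placeholders, and the exact import paths can vary between PyKMIP versions, so treat this as a minimal sketch rather than a drop-in integration.

```python
# Minimal sketch: driving a KMIP server with the open-source PyKMIP client.
# Hostname, port and certificate paths are placeholders for illustration only.
from kmip.pie.client import ProxyKmipClient
from kmip import enums

client = ProxyKmipClient(
    hostname="kms.example.com",       # placeholder KMIP server address
    port=5696,                        # default KMIP port
    cert="/etc/pki/kmip/client-cert.pem",
    key="/etc/pki/kmip/client-key.pem",
    ca="/etc/pki/kmip/ca.pem",
)

with client:
    # Generation: ask the server to create a 256-bit AES key; only the key's
    # unique identifier comes back to the client.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

    # Retrieval: any authorized KMIP client can later fetch the key by ID.
    key = client.get(key_id)

    # Termination: retire the key at the end of its lifecycle.
    client.destroy(key_id)
```

Because every step is an ordinary KMIP message, the same client code can talk to any conformant key management server, which is exactly the interoperability the standard is after.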
How Does KMIP work?
KMIP is an open standard-based encryption and cryptographic key management system that standardizes and creates a universal language to communicate. In the absence of KMIP, different organizations use different languages for different purposes, which requires different security communication lines and results in increased costs for operations, infrastructure, and training.
The Key Management Interoperability Protocol ensures that a single language is used across different management environments without impacting performance.
The common interface provided by the Key Management Interoperability Protocol eliminates redundant and incompatible key management processes and enables more ubiquitous encryption. Furthermore, it provides easy and secure communication among different cryptographically secure applications.
Not only does KMIP ensure the security of critical data, but it also makes it easier to handle various keys across different platforms and vendors. All of this improves the IT infrastructure’s cost-effectiveness.
KMIP Profile Version 2.1
The Key Management Interoperability Protocol is a single, extensive protocol for communicating between clients who request any number of encryption keys and servers that store and manage those keys. KMIP delivers enhanced data security while minimizing expenditures on various products by removing redundant, incompatible key management protocols.
The KMIP Specification v2.1 is for developers and architects who want to develop systems and applications that use the Key Management Interoperability Protocol Specification to communicate.
Within specific contexts of KMIP server and client interaction, KMIP Profiles v2.1 specifies conformance clauses that define the use of objects, attributes, operations, message elements, and authentication mechanisms.
Benefits of KMIP
- Task Simplification: Organizations encounter a variety of issues while establishing IT security configurations. When many companies and technologies are involved, the situation becomes even more complicated. For example, the problem is significantly more complicated in the case of encryption and key management, as a separate key manager is required for each encryption. KMIP efficiently solves this issue by allowing a single key management system to manage all encryption systems, allowing organizations to spend their time and resources on more valuable business tasks.
- Operational Flexibility: Different proprietary key management systems were required to manage encryption before the deployment of KMIP. Organizations had to collaborate with different vendors, each of whom has systems built for different situations and configurations. KMIP gives the organization the flexibility to utilize any key management system and enables it to integrate across cloud platforms, edge, and on-prem systems with a single key manager.
- Reduces the IT Infrastructure Cost: The hardware and software necessary to secure data are considerably reduced using a single KMIP-powered encryption key management system, lowering the total cost of owning security infrastructure. In addition, KMIP makes it easier to handle various keys across different platforms and vendors, improving the IT infrastructure’s cost-effectiveness.
Over time, KMIP adoption has grown and diversified. Technology and communications companies, universities, and libraries use KMIP to protect sensitive data. With its robust security, effectiveness, and cost-efficient key lifecycle management, KMIP implementation and development show no sign of slowing down.
Navigating the world of computer parts can be daunting for the uninitiated. There are many different components like the hard drive, motherboard, RAM, and GPU, each with unique functions and many variations. But arguably, the most crucial component of them all is the CPU.
What is a CPU (in short answer)?
The term CPU stands for Central Processing Unit. In short, the CPU is electronic machinery that executes instructions from programs so you can call your friends, open your web browser, or write emails. Many people typically ask: "what is a CPU in a laptop or a desktop computer?" not realizing that CPUs are part of other modern devices like tablets, smartphones, DVD players, or smart washing machines too! Regardless of where you find it, the CPU will be completing calculations by utilizing its billions of transistors. These calculations run the software that allows a device to perform its tasks. For example, a CPU in a smart thermostat helps its software adjust heating and cooling temperatures by executing instructions.
What does a CPU do in a computer?
Just to clarify, any programmable machine that automatically carries out logical operations or sequences of arithmetic is a computer. In other words, your laptops, desktops, tablets, gaming consoles, and smartphones are all computers. So, what does a CPU do in a computer? Well, it interprets binary signals to complete actions, calculations, and run applications in a three-step process:
- Fetch: The CPU fetches instructions from the computer’s memory and stores them in a part of its control unit called the Instruction Register (IR).
- Decode: The CPU sends the instruction from the IR to its instruction decoder. This combinatorial circuit decodes the instruction into signals.
- Execute: The decoded signals travel to relevant destinations in the CPU for the execution phase.
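The three steps above can be illustrated with a toy fetch–decode–execute loop. This is not how a real CPU is built in silicon — it is just a minimal Python sketch of the cycle, using a made-up two-instruction machine language.

```python
# Toy illustration of the fetch-decode-execute cycle (not real CPU behaviour).
memory = [
    ("LOAD", 5),   # put the value 5 into the accumulator
    ("ADD", 3),    # add 3 to the accumulator
    ("HALT", 0),   # stop the program
]

accumulator = 0
program_counter = 0

while True:
    # Fetch: read the next instruction from memory into the "instruction register".
    instruction_register = memory[program_counter]
    program_counter += 1

    # Decode: split the instruction into an operation code and an operand.
    opcode, operand = instruction_register

    # Execute: carry out the decoded operation.
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # prints 8
```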
A CPU also works with other components. For example, it may take relevant data sent from a video game to a graphics card. The graphics card then processes the information to display on a monitor. Likewise, a CPU helps move data from a computer’s hard drive to its memory for faster access.
Is the CPU the brain of the computer?
Experts often refer to the CPU as the brain when describing computer components in layman terms. While this analogical comparison to a human body accurately depicts the critical nature of a CPU’s role, it doesn't tell the whole story. A CPU doesn’t offer instructions; the software does. In truth, a computer’s software and CPU working together in harmony are the brains of the operation.
What makes a CPU good?
A clock speed tells you how many instructions a CPU can manage in a second and generally indicates how fast it is. From the 90s to the early 2000s, CPU clock speeds improved significantly with every new generation. However, advancements in clock speeds began to plateau due to extra heat generation and higher power consumption. Here, manufacturers found it more cost-effective to enhance CPUs in other ways, so much so that a modern processor can usually outperform a decade-old processor that has a higher clock speed.
The multi-core processor revolution began with dual-cores and quad-cores. Instead of focusing on advancing clock speed, manufacturers fitted multiple CPUs on one chip. Nowadays, premium CPUs are hitting 32 cores, 64 cores, and more. Such CPUs are an excellent choice for video editors, game streamers, and users of demanding applications, though they may be something of an overkill for the average user.
Hyper-threading is a technological innovation from Intel that allows a single processor core to perform like two by dividing workloads for simultaneous processing. To put it crudely, imagine dividing a hot dog into two and eating both pieces together for faster consumption instead of starting on one end and working your way to the other. Of course, Intel’s competitor AMD has its own version of hyper-threading.
Clock speed vs. Cores
It helps to think of a CPU with a higher clock speed as a sports car and a CPU with more cores as a truck. While the sports car will reach its destination faster, a truck will carry more load. Whether you should select a fast processor or a processor with multiple cores depends on your workload. For example, while some apps benefit from multiple cores, others rely on higher clock speed and may not utilize multiple cores.
What does the CPU do in gaming?
The GPU on a graphics card or motherboard renders graphics for a game like landscapes and animations, while a CPU handles calculations for in-game mechanics, artificial intelligence (AI), and inputs from a mouse and keyboard. Not too long ago, games did not take advantage of multiple cores, but modern titles can efficiently utilize over four cores. So, if someone asks you what the best CPU for gaming is, you might tell them to pick a quick multi-threading processor with at least four, if not six cores, that falls in their budget.
Why is my CPU slow?
CPUs can slow down because of aging, overheating, inadequate power or poor ventilation. Some types of malware can also hijack system resources. Check out our article on how to protect your computer from malicious cryptomining to prevent bad actors from using your machine for their monetary gain.
How to maintain a CPU
To keep your CPU in good shape, ensure that your computer’s fans are clean and keep your machine in a ventilated location. For protection against CPU over-use from malware, use reliable antivirus/anti-malware software to protect against resource-stealers like cryptojackers. You may also want to remove some pre-installed software that could unnecessarily take up resources. If you have built your own computer and know how to work with hardware, according to Intel, you should also replace your thermal paste once every few years. Lastly, it also helps to recognize hardware problems that look like malware problems to troubleshoot issues more thoroughly as they arise. | <urn:uuid:0130b1e1-bb6a-46dd-b047-bcd7520e99c4> | CC-MAIN-2022-40 | https://www.malwarebytes.com/computer/what-is-a-cpu | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00397.warc.gz | en | 0.932223 | 1,241 | 3.90625 | 4 |
Between the increasingly fragile power grid, the escalating power consumption of IT equipment and the constantly increasing importance of our network, it isn’t difficult to see the value a UPS (uninterruptible power supply) has to not only a business, but a home. So have you ever decided to research some UPSs to see which is the right fit for you, only to be left asking yourself, “Watts? VA? Huh?”
Most of us have heard of watts before – and have some understanding that each piece of equipment draws a certain amount of watts when it operates, but how exactly does that relate to a UPS? And what is a VA anyway?
Electronics have both maximum watt ratings and maximum VA (volt-ampere) ratings; and neither the watt nor the VA rating of a UPS may be exceeded by the attached equipment (load). Watts is the real power drawn by the equipment, while volt-amps are called the “apparent power” and are the product of the voltage applied to the equipment multiplied by the current drawn by the equipment. The watt rating determines the actual power purchased from the utility company and the heat loading generated by the equipment; and the VA rating is used for sizing wiring and circuit breakers. Any clearer? Probably not much.
What you really need to know is that for electronics such as computers and UPSs, watt and VA ratings can differ significantly; with the VA rating always being equal to or larger than the watt rating. The ratio of the watts to VA is called the “Power Factor” and is expressed either as a number (i.e. – 0.8) or a percentage (i.e. 80%). This power factor is what really matters when sizing a UPS for your specific requirements.
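In practice the arithmetic is simple: real power (watts) equals apparent power (VA) multiplied by the power factor. The short sketch below runs the numbers for a hypothetical 1500 VA UPS at a few power factors; the figures are purely illustrative and not specifications of any particular model.

```python
# Watts = VA x power factor. Illustrative numbers only.
va_rating = 1500  # hypothetical UPS apparent-power rating

for power_factor in (0.8, 0.9, 1.0):
    watts = va_rating * power_factor
    print(f"{va_rating} VA at power factor {power_factor}: {watts:.0f} W of real power")

# 1500 VA at power factor 0.8: 1200 W of real power
# 1500 VA at power factor 0.9: 1350 W of real power
# 1500 VA at power factor 1.0: 1500 W of real power
```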
APC™ by Schneider Electric’s™ latest generation of Smart-UPS™ On-Line now delivers innovative features to help you make the most of your energy™. 6kVA (6000 VA) models and higher have a unity power factor, which means VA translates to an equal amount of watts (i.e. 6000 VA = 6000 Watts). Smaller models of the next generation of Smart-UPS On-Line have a 0.9 power factor or higher, and all are Energy Star™ qualified regardless of VA.
The difference between 0.8 or 0.9 power factors and a unity power factor (1.0) may not sound like much – but when you take into account the fact that the extra available wattage can be used to support additional loads and extend runtimes, it is easy to see how the next generation of Smart-UPS On-Line will increase your availability while saving you money.
Please refer to our UPS Selector for help properly sizing a UPS. Alternatively, if you are looking to upgrade your current UPS, refer to our UPS Upgrade Selector; and don't forget to take advantage of our Trade-UPS Program, which allows you to receive up to a 25% discount on the purchase of a new APC by Schneider Electric UPS when you trade in your old model, regardless of the manufacturer.
For further discussion regarding the differences between watts and VA please refer to White Paper 15, Watts and Volt Amps: Powerful Confusion. | <urn:uuid:fb0a8cdc-be6b-4e78-b854-5b7510c4496e> | CC-MAIN-2022-40 | https://www.apc.com/il/en/solutions/industry-insights/watts-vs-va-whats-difference-anyway.jsp | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00597.warc.gz | en | 0.946289 | 671 | 2.765625 | 3 |
To wrap up Cybersecurity Awareness month, we bring you some helpful practices that any company can start today to reduce their attack surface and avoid getting hacked. Unfortunately, as the sharp rise in breaches indicates, getting hacked can happen to any of us. Read on to see what you can do to prevent it.
According to IT Chronicles, about 4,000 cyber-attacks occur each day in the United States, and about 30,000 websites are hacked daily across the world. These alarming statistics point to the fact that anyone can fall victim to a cyberattack if they are not careful.
Luckily, everyone can take certain precautions to keep their personal data as secure as possible.
Here are 5 simple ways to avoid being hacked:
#1 Use Multi-Factor Authentication (MFA)
Multi-factor authentication (MFA) is a method used to help keep your data secure when accessing online accounts or workstations. MFA is defined as a method of security that requires multiple independent methods of authentication to verify a user’s identity for a login or other transaction.
Most people have used MFA whether they realize it or not – MFA includes passwords, time-based codes, biometrics, and more to make it more difficult for hackers to get into a user’s account. For instance, a hacker could have some luck guessing a password (particularly weak ones), but the odds of a hacker having additional access to the device that received a time-sensitive code are much lower.
In other words, MFA puts more obstacles in the way of the hacker as they try to access a given resource. Fortunately, many personal and work accounts – from email and social media to secure workstation login – offer MFA options that you can turn on to keep your information safe.
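One common second factor is the time-based one-time password (TOTP) — the six-digit code produced by authenticator apps. The sketch below uses the open-source pyotp library to show the idea; the secret here is generated on the fly purely for illustration, whereas in a real deployment it would be provisioned once (for example via a QR code) and stored by both the server and the user's authenticator app.

```python
# Minimal TOTP illustration using the open-source pyotp library.
import pyotp

secret = pyotp.random_base32()   # shared secret, normally provisioned once
totp = pyotp.TOTP(secret)

code = totp.now()                # the 6-digit code the user would type in
print("Current code:", code)

# The server verifies the submitted code against the same shared secret.
print("Valid right now?", totp.verify(code))
```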
#2 Create Unique Passwords
Creating strong passwords seems like a no-brainer; however, many breaches have shown that users still create weak passwords susceptible to brute-force attack. According to Tech Republic, “password” is still being used as the most common password across all industries. Ideally, we would use the strongest passwords possible for all accounts, but especially ones that have personal information that can be traced back to one’s identity (Social Security Number, home address, etc.).
When creating a new password, make sure to include eight characters at the bare minimum. These characters should be a combination of letters, numbers, and symbols. Additionally, using both uppercase and lowercase letters makes the password even more difficult to guess. Many websites now enforce these rules by default and will tell you when a password needs more characters added. And finally, make sure to avoid using names or common words, since these types of passwords can be rather easy to guess.
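If you would rather not invent such a password yourself, Python's standard-library secrets module is designed for exactly this kind of cryptographically secure randomness. A minimal sketch:

```python
# Generate a strong random password with Python's standard "secrets" module.
import secrets
import string

alphabet = string.ascii_lowercase + string.ascii_uppercase + string.digits + "!@#$%^&*"

def make_password(length: int = 16) -> str:
    # secrets.choice uses a cryptographically secure random source,
    # unlike the general-purpose "random" module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # e.g. 'q7G!x2Rp#Vd9Lm$K' -- different every run
```

Pair a generated password like this with a password manager, since no one is expected to memorize it.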
In recent years, we have seen the rise of passwordless authentication, which attempts to avoid the weak password issue by relying on other login modalities. This can include biometrics, PINs, and hardware tokens, often used in conjunction with each other. AuthX supports a passwordless authentication experience on our workstation client.
#3 Keep Your Technology Updated
Making sure that your software is up to date can considerably help ward off cyber-attacks. In September 2021, Apple had to implement an emergency software update after it was discovered that spyware could be downloaded onto Apple devices – putting millions of users’ data at risk.
According to USA Today, hackers had the ability to secretly install the spyware on Apple devices even if the user did nothing wrong, such as click on a malicious link or open a bad document. Once it became known that the spyware made it possible for hackers to steal sensitive information, Apple immediately implemented a new software update to fix the problem.
Knowing is only half the battle and making sure your organization always keeps its devices up-to-date ensures your data stays as secure as possible. Some security platforms allow you to only grant access to those users running the latest patches, such as with AuthX Adaptive Policy.
#4 Learn to Recognize Phishing Attacks
Phishing is a type of social engineering where a hacker tries to steal personal information from others by using deceptive emails and websites. These attacks have grown increasingly sophisticated, using an evolving set of tricks to hide malicious links and convince potential victims of their authenticity. According to Expert Insights, 75% of organizations across the globe were hit with phishing attacks in 2020, and 74% of phishing attacks that targeted American enterprises were successful. These numbers suggest how vulnerable organizations of all types are to such cyberattacks.
Thankfully, numerous ways exist to spot phishing attacks and to prevent yourself from falling victim to one. According to Microsoft Support, one should be immediately suspicious if an email or text urgently demands that you click on a link or open a document. Bad spelling and poor grammar can also give away a phishing attempt, especially considering most organizations ensure that their copy has as few errors as possible. In addition, emails sent with incorrect domains are a strong sign of a phishing attack. In other words, the email content may appear to originate from a reputable company, but the email address may not be from an official company email address.
Keep an eye out for these telling signs of phishing attacks to avoid giving out your personal information to a malicious actor.
#5 Get Off Public Wi-Fi
Although convenient, public Wi-Fi can pose a potential threat to the security of your connected devices. According to Good Speed, a major reason why using public Wi-Fi presents a risk is its frequent lack of encryption.
But why is encryption important? When Wi-Fi is not encrypted, it allows virtually anyone to have access to information on smart devices that are using unencrypted Wi-Fi. This could allow hackers to steal your information, your company’s information, and any other sensitive data you send over public Wi-Fi.
To add an additional layer of security, only connect to these networks when using a VPN to keep your data safe. For additional security, using MFA to authenticate to your VPN extends enterprise security policies to the edge of the network, such as with AuthX’s integration with OpenVPN.
Looking to Improve Your Cybersecurity?
AuthX is a seamless solution to keep your data secure. We offer MFA services using RFID readers, push notifications, biometrics, SMS/call, remote unlock, and more to ensure that your data is protected from hackers. Sound interesting? Click HERE to sign up for a free trial for AuthX! | <urn:uuid:c07cfaea-6dd0-4185-8e61-a0ac96c91fbe> | CC-MAIN-2022-40 | https://www.authx.com/5-simple-ways-to-avoid-getting-hacked/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00597.warc.gz | en | 0.945548 | 1,332 | 3.171875 | 3 |
The onus for data privacy preservation has shifted partially toward businesses, but this doesn’t decrease the control of individuals. On the contrary, individuals have greater control of how their information is used, mostly because businesses are accountable for how they store, share, and use data.
Regulations like GDPR, CCPA, and LGPD are forcing businesses to absorb increased responsibility for data security. Along with the changing digital identity landscape, such regulations are the leading driver of privacy-enhancing technology adoption. 60% of large enterprises will be leveraging one or more privacy-enhancing technologies (PETs) by 2025.
While regulations and institutional pressure accelerate adoption of privacy-enhancing technologies, other forces delay deployment. Specifically, implementation costs create financial roadblocks. Our reporting on privacy-enhancing technologies outlines how data privacy strategies and notable PET providers help businesses reconcile costs and risks.
Privacy-enhancing technologies protect data by minimizing personal data use and sharing. The data becomes more secure while individuals gain visibility into (and control over) how their information is consumed.
PETs fall into two categories: soft PETs and hard PETs.
Both categories aim to increase data security and privacy. EU data protection authorities have levied $1.2 billion in fines over breaches of GDPR since January 2021. Privacy-enhancing technologies are designed to prevent such penalties for regulatory non-compliance, though there’s still a long way to go to address new risks.
Individuals are increasingly exposed to fraud, data breaches, and misuses of their personal data. Cellular technologies, devices, and networks that service over five billion mobile subscribers across the world also unlock new data streams that provide us with digital identities as unique as a fingerprint. Smartphones have become accurate identifiers of individuals, creating even more opportunities for data misuse. Public awareness of high-profile privacy failures, combined with regulators' powers and willingness to use them, is keeping privacy at the front of the public's mind.
Regulators pass regulation to mandate data privacy, robust cybersecurity, and protection of children. In turn, strict regulations force businesses to re-evaluate their data management practices. Privacy concerns are especially high in some industries, and investing in PETs is much less costly than the impact of even one data privacy event, which can damage brand equity and spur financial penalties.
Few business leaders deny the importance of data privacy; however, there are competing priorities. Limited capabilities and resources prevent some businesses from adopting PETs in a timely manner. It’s especially difficult to find cost-effective techniques to meet all privacy requirements.
There are many solution providers to meet current market demand, but few do it sufficiently. Most PETs offer similar capabilities and use case coverage. Major players include first-movers with established reputations as well as well-funded newcomers. Some of these vendors are pulling ahead of the pack by eliminating burdensome implementation costs and offering compliance support with reduced technical complexity.
Effective adoption of PETs involves selecting techniques to satisfy specific use cases. Businesses that choose appropriate methods and providers receive the benefits they need without unnecessary costs. Our member-exclusive reporting on privacy-enhancing technologies maps data protection needs to specific providers.
The global data privacy software market size is expected to reach $17.8 billion by 2028. The market landscape will continue to change along with the state of consumer identity and privacy. For now, vendors typically operate on a tiered subscription basis based on factors such as number of licenses, platform utilization, and data volume.
Common data privacy and security methods include the following:
As service providers contribute more to online privacy, businesses realize previously aspirational privacy goals. Access member-exclusive reporting for more detailed descriptions of these five data privacy techniques and the challenges they address.
There are dozens of notable companies in the privacy market. Each has its own focus area, ranging from homomorphic encryption to data de-identification. Some specialize in providing encrypted data access for specific industries like fintech and healthcare.
Unfortunately, many of the most notable players in the PET market struggle to demonstrate tangible impact. Because data protection is difficult to demonstrate, some business leaders see little reason to tolerate costly and technically difficult implementations. For privacy-enhancing technologies to become fully mainstream, vendors must demonstrate how they can avoid the consequences of maintaining the status quo, reduce regulatory and legal risk, and create meaningful privacy interactions for consumers.
Businesses can’t afford not to invest in privacy-enhancing technology, but can’t afford to spend frivolously. The best PET provider for protected data analysis might not be the best option for privacy compliance or data localization. It takes some knowledge of the data privacy landscape to make an informed selection.
To learn more about all of the key players, contact Liminal for a full report on the PET market. | <urn:uuid:3b6a8b61-f954-4dff-819b-4628ec50d7df> | CC-MAIN-2022-40 | https://liminal.co/articles/the-data-privacy-paradox-stated-concerns-meet-actual-priorities/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00597.warc.gz | en | 0.922345 | 978 | 2.53125 | 3 |
Johns Hopkins University Applied Physics Laboratory has completed a recent flight test of a sensor aboard the Virgin Galactic-made VSS Unity spacecraft and demonstrated its ability to assess the inside conditions of a suborbital space vehicle and its payload.
The JHU APL Integrated Universal Suborbital platform was able to characterize VSS Unity’s electromagnetic field environment and study how the spacecraft might be affected by strong external and internally generated fields, the not-for-profit research center said Thursday.
Todd Smith, principal investigator for JANUS at APL’s space exploration sector, said the data gathered from the suborbital test flight will be used to observe conditions for eventual passengers and to help cancel electromagnetic interference to payloads assessing the fields of the Earth.
The platform is also designed to look at the lower ionosphere encountered at about 50 to 60 miles and determine its potential impacts on spacecraft and onboard technologies.
The May 22 mission is JANUS’ eighth overall flight, five of which were on a Blue Origin craft and two on a Virgin Galactic rocket.
“We learn more and more with each flight. We are developing a very capable instrument that is going to provide valuable information for scientists and instrument developers, as well as space pilots and passengers,” said Smith.
The Flight Opportunities program of NASA funded the recent flight and most of JANUS’ tests. | <urn:uuid:1aa62100-66a0-4969-92e6-e7ba415089e8> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2021/06/johns-hopkins-apl-demos-janus-sensors-spacecraft-environment-characterization-ability/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00597.warc.gz | en | 0.938308 | 285 | 2.765625 | 3 |
The cyber security industry is in the midst of a revolution. XDR technology is making it possible for organizations to protect their data, networks, and applications in ways that have never been possible before. But what exactly is XDR? And how does it work? In this article, we’ll explore these questions and more:
What is XDR?
XDR is the next generation of cyber defense. XDR is a new technology for detecting and remediating malware that combines EDR and MDR into one platform, providing organizations with both detection and response capabilities in one solution.
XDR is designed to detect, contain and remediate malware in real-time—without requiring applications to be updated or rewritten—across all cloud service providers (CSPs), as well as physical devices on-premises or off-premises.
How does XDR Work?
XDR has three tiers:
Dynamic and Flexible Deployments.
This is the first tier, where you can leverage XDR’s dynamic cloud infrastructure to adapt quickly to changing security needs. Dynamic deployments can be used for various purposes, including red team exercises and training simulations.
Investigation and Response.
The second tier allows you to investigate incidents using advanced tools that allow you to conduct forensic analysis on compromised systems in order to better understand what happened during an attack, who was affected by it, and how bad things could have become if not for your quick response time.
Analytics & Detection.
The third tier leverages advanced analytics capabilities so that you can detect new attacks faster than ever before—and at scale!
XDR vs. EDR & MDR
XDR is the next generation of cyber defense. It uses a new, more powerful technology to detect and block attacks from a wide range of sources.
XDR has many advantages over existing technologies like EDR and MDR:
- XDR can detect and block attacks from a wide range of sources. The most common attack vectors for APTs are spear phishing emails and watering hole attacks. XDR detects these threats at the earliest possible moment before they are executed on your systems or data exfiltration occurs, giving you time to respond appropriately.
- XDR discovers unknown malware behavior based on advanced machine learning models that find anomalies in network traffic. This allows it to detect and prevent threats even when they haven’t been seen before or when they look different from previous variants of existing malware families.
Use Cases for XDR
The XDR platform is a powerful tool for organizations, but it also has specific use cases. The following are some of the ways in which XDR can be used:
- Threat Hunting: With XDR, threat hunters will have access to all their data at once and can use that data to identify threats. This can help organizations stay ahead of hackers who are trying to infiltrate their network by spotting them before they do damage.
- Investigation: In addition to helping with threat hunting, XDR can also help with investigations because users will have access to all relevant information about an incident in one place. This means investigators won’t have trouble tracking down evidence across multiple systems or departments—they’ll just need one tool that does it all!
- Triage: If your organization deals with a high volume of requests from customers and partners, you may want an easier way for employees outside the IT department to triage these requests without having prior knowledge about how your infrastructure works (i.e., what underlying technologies were used). With XDR on hand, these users can take care of urgent matters quickly without needing advanced technical skills or training from IT staff members who might be unavailable at any given moment for one reason or another.
Should You Invest in an XDR?
XDR is the next generation of cyber defense. It is a revolutionary product that will help secure your data, networks, and infrastructure from the constant threat of hackers and malware. By leveraging the power of artificial intelligence and deep learning, you can predict what will happen before it happens, giving you a massive advantage over any attacker. XDR uses machine learning algorithms to predict new threats by analyzing millions of data points every second in real-time. The most important thing about XDR is that it’s not just another tool in your arsenal; it’s an essential part of your security strategy because it works with all other products to protect against threats inside and outside your organization. Because XDR tools are still relatively new, understanding and learning how to use them successfully takes time, energy, and a unique set of skills. With this in mind, we recommend incorporating managed XDR (MXDR), bolstered by a full-time, 24×7 SOC, to get the most out of your investment. A managed security service provider with XDR solutions can help your team deploy and manage the technology so that the business can get back to the projects they care about most. Check the boxes for compliance while protecting your business from top to bottom with MXDR.
How we leverage XDR at Cyber Sainik
If you need help with your XDR, we have the expertise to give you the best results. From managed extended detection and response to cybersecurity strategy development, our team of experts is prepared to assist in your organization’s security program. Having worked with some of the most prominent companies in the world, our cybersecurity specialists know what it takes to deliver solutions that work. We understand how important your data is, and we’ll ensure that it’s protected using a combination of advanced tools and systems. Schedule a free consultation with Cyber Sainik.
Published On June 11, 2019
The Internet of Things (IoT) brings security issues to every organization, making it easy for hackers to exploit your most sensitive information. But how?
The Internet of Things (IoT) may not sound like something security folk should get worked up over, until you realize that it’s the unsecured Internet-of-Pacemakers, -Baby Monitors, -Wireless Gadgets and countless other “Things” that vendors are connecting to the internet.
If you’re wondering how many Internet-of-Things “things” it takes to be “countless,” there will be more than 20 billion connected devices by 2020, according to Gartner. An enterprise can easily have many thousands of IoT devices. And for every one of them that’s snuggled up against your invaluable, ever-increasing corporate data, its vulnerabilities become yours.
Yes, the IoT adds benefits to just about anything through remote access, telemetry (taking measures from a distance) and control. But criminal hackers like it, too, for its broad attack surface (because there are so many) and weak security. Most IoT devices are barely robust enough to send and receive sensor data, let alone protect themselves.
There are as many potential IoT attacks as there are devices. Cyberthugs are doing a lot more with the IoT than orchestrating distributed denial of service (DDoS) attacks to bring down major web properties and sites like yours. But you have to know the problem before you can worry about the solution. Here are some examples of how cybercriminals use and abuse the IoT for anything but the good of your networks, systems, data, organization and consumers.
Criminal hackers can eavesdrop on wireless transmissions close to IoT devices, using scanners that they position to abut your IoT hardware. With this method, they can capture the cryptographic keys to unlock the encryption that secures your IoT data. With keys in hand, cyberthugs can access and sift through data that the encryption was meant to protect.
With the unfettered access to IoT that follows, cyberthieves can steal consumer data that devices like digital signs and kiosks collect. Some digital signage includes Point-of-Sale (PoS) terminals where people make purchases, so payment card data is a target here, too. Any IoT that collects consumer data can share it with cybercrooks once they listen in on your wireless transmissions and crack your encoded information.
Consumers could reconsider doing business with you if they discover that a breach of your IoT compromised their payment cards or personal information. Meanwhile, the cyberhoodlums make off with records they can sell or manipulate for monetary gain.
Cybercriminals are entering your networks through third-party IoT devices and applications. When you, an employee or a third-party vendor installs a vulnerable, unsecured IoT device in your environment, it gives cybercrooks an opening.
They can enter the device via stolen credentials, weak passwords, broadly published default passwords and web-based attacks via browsers on computers that connect to the IoT. They can even search for vulnerable IoT devices using the search engine Shodan, which is designed to locate IoT devices connected to the internet. It then reports the installed software along with any known vulnerabilities that haven’t been patched by the organization using the IoT.
If you have not segmented the IoT network from the rest of the enterprise network, it’s only an IoT-device hop, lax-security skip, and network-router or -gateway jump from your IoT environment to your most prized data.
Cyberterror on Plants and Equipment
With goals other than money, nation-state hackers attack the Industrial IoT (IIoT). These devices can include industrial controls such as gauges, valves, pumps and actuators. They can also include smart sensors and different apparatuses in critical infrastructure sectors like manufacturing, energy, transportation systems and more than a dozen others that the Department of Homeland Security has identified.
Because companies connect IIoT to the internet for the benefits of new intelligence about industrial processes, cybermiscreants can reach those devices, as well. When criminals are controlling these sensor-laden gadgets, they can use them to send misleading commands and data to machines, systems and employees, triggering unanticipated reactions with disastrous results.
Their purpose is to set off major crises such as plant shutdowns, production shortfalls and financial losses in plants and equipment, loss of power production in the energy sector and life-threatening collisions in mass transportation, etc. Their ultimate goals may also include cyberterror and cyberwarfare.
There’s Nothing Simple About IoT
There are more examples that parallel these, illustrating a more profound problem. Bad-guy hackers can decrypt, read and steal any data that they find in your IoT devices. They can gain entrance into your network through any IoT device connection. They can disrupt, shut down and even destroy your critical infrastructure site that your organization counts on to stay alive. That’s a lot of trouble for a simple, wireless “thing.” | <urn:uuid:2472b189-0495-4e6e-bfea-9d5ac6e79f62> | CC-MAIN-2022-40 | https://www.ironmountain.com/blogs/2019/hackers-are-hurting-the-internet-of-things-in-more-ways-than-you-think | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00597.warc.gz | en | 0.931227 | 1,068 | 2.875 | 3 |
KVM vs. LXC: Which Virtualisation Is Better for Your Business?
If you are considering which virtualisation server to choose but are not familiar with the differences between the technologies, this article might be of help. You will get to know which basic virtualisation types exist and what the difference between KVM and LXC virtualisation, which are generally used for VPS servers in MasterDC, is.
It is considerably hard to understand all the parameters and technical details of a VPS, which is why we decided to explain how virtualisation itself works, focusing on KVM and LXC, the technologies commonly used in MasterDC for virtualised servers running Linux. Before we get to those technologies, let’s first discuss what types of virtualisation exist and what their general advantages and disadvantages are.
There are several sorts of virtualisation, differing mainly in their extent and in the level at which the virtualisation is performed; this has a fundamental impact on VPS features such as scalability and OS compatibility. For the purposes of this article, it is enough to introduce the two basic types – full virtualisation and virtualisation on the OS level. There are, of course, other partial virtualisation techniques which virtualise only some of the hardware (memory, processor, network card), but these are not the subject of this article.
Full – in other words, native – virtualisation is virtualisation in the true sense of the word. Its pivotal component is a hypervisor, which runs directly on the hardware and monitors and controls the virtual servers. You might imagine it as the main arbiter that assigns performance and physical server memory to individual virtual servers while, at the same time, separating the servers from each other.
The OS sits in the layer above the hypervisor, which makes it possible to launch various unmodified OSs on the individual virtual machines – one of the main advantages of full virtualisation. On the other hand, the overhead is quite high: running the hypervisor itself might consume up to 20% of the physical server’s performance. A typical example of full virtualisation is the family of VMware solutions.
Virtualisation on the OS level, which creates so-called containers, also makes it possible to run several separated virtual machines on one physical server. The main difference is that these virtual machines run on one shared OS kernel, so the OS of the individual virtual machines (containers) must use the same kernel. The application that creates the virtual machines therefore sits above the OS layer.
Virtualisation on the OS level is advantageous since it uses the performance and capacity of the physical server more efficiently than full virtualisation. The container application, similarly to a hypervisor, assigns quotas on disk and memory and prioritises processing time. Another important advantage is the option to isolate a container and subsequently deploy it in different environments. This virtualisation type includes OpenVZ, Docker and LXC.
Leading Software Companies Are Behind the KVM Virtualisation
The KVM hypervisor was originally developed by the Israeli startup Qumranet. In September 2008, though, RedHat – the world number one in Linux solutions for the commercial sphere – bought it for 107 million dollars. The KVM abbreviation means Kernel-Based Virtual Machine, which might be loosely translated as ‘virtualisation at the kernel level’. The name thus refers to the virtualisation mechanism KVM uses – a module which makes the Linux kernel work as a hypervisor.
On the one hand an OS must be installed; on the other hand, this OS itself works as the hypervisor. Different sources therefore do not always agree on which type of virtualisation this is. It is mostly claimed that KVM corresponds to full virtualisation, since the installed kernel of the host OS behaves like a hypervisor. This matches one of the key features of full virtualisation: on KVM it is possible to launch unmodified guest OSs.
KVM, as free software, can be used on different Linux distributions, such as CentOS, Ubuntu and Debian. If you want to virtualise a physical server running Windows, it is necessary to choose a different hypervisor.
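Whether a particular Linux host can act as a KVM hypervisor comes down to two things: the CPU must expose hardware virtualisation extensions (Intel VT-x or AMD-V), and the kvm kernel module must be loaded so that /dev/kvm exists. The sketch below checks both; the paths and CPU flag names are standard on Linux, but treat it as a quick diagnostic rather than a replacement for tools such as libvirt's virt-host-validate.

```python
# Quick check: can this Linux host run KVM guests?
import os

def cpu_has_virtualisation() -> bool:
    # Intel CPUs advertise the "vmx" flag, AMD CPUs the "svm" flag.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

def kvm_device_present() -> bool:
    # The kvm, kvm_intel or kvm_amd kernel modules expose this device node.
    return os.path.exists("/dev/kvm")

print("CPU virtualisation extensions:", cpu_has_virtualisation())
print("/dev/kvm available:", kvm_device_present())
```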
Configure Your VPS as Required
At Master we know that every detail matters, which is why we offer the option to tailor a VPS exactly to your needs. Would you like more RAM? More CPUs? Or does a different operating system suit you better? In the configurator you can choose the parameters that fit your project.
Docker Support, Security and Other KVM Virtualisation Benefits
One of the main advantages of KVM virtualisation is the fact that you can easily create containers for a project where necessary, so that they are separated from each other. A VPS with Docker support is an ideal solution for developers, since it facilitates high portability and easier scaling. You can learn more about Docker in our article about how containers work and why you might want them.
Since KVM is part of the Linux kernel, it can change the RAM for individual virtual machines very efficiently, without slowing down already running processes. Another important feature is the possibility of dynamic migration, which means that individual virtual systems can be moved without interrupting their activities.
Owing to its Linux foundations, KVM virtualisation offers a high level of security, which can be further enhanced with additional tools such as encryption and SELinux access control.
Last but not least, it is necessary to mention the constant refinement of the KVM hypervisor. While in its beginnings it was only compatible with the Linux x86 platform, there are dozens of supported platforms nowadays. The KVM hypervisor is also regarded as one of the most solid and reliable on the market. In this respect, the cooperation of RedHat with other companies involved in software development, such as IBM, Intel and NetApp, plays a vitally important role.
LXC Virtualisation Has Low Overhead Costs
LXC technology virtualises servers at the OS level, which is why it is called ‘lightweight virtualisation’. Compared to full virtualisation, it saves processor power and memory, since the virtualisation process itself has very low overhead. For this reason, it is possible to create new virtual machines (containers) more easily and quickly than with full virtualisation.
LXC technology, the same as KVM, assigns to individual virtual machines the system resources of the physical server that is being virtualised. For instance, it defines what the operating memory, hard disk size and processor performance of an individual VPS will be. It also takes care of the isolation of individual containers, so when one of the VPSs is threatened or attacked, the other servers will not be affected. Compared to KVM, though, the extent of security is lower, which is given by the character of the virtualisation itself: full virtualisation isolates individual virtual machines more thoroughly than virtualisation on the OS level.
Another significant disadvantage is that Docker cannot be launched on LXC virtualised servers as this functionality is not officially supported.
The open-source technology Proxmox, which offers an intuitive web interface for administering large numbers of servers with LXC and KVM virtualisation, makes the administrators’ job easier.
Is it always better to choose the server with virtualisation?
It depends on the type of project the server is intended for. The major advantage of KVM virtualisation is that each individual VPS uses the kernel of its own Linux distribution and thus does not have to share the OS kernel of the host server.
Furthermore, if you need to divide VPSs into separated containers (perhaps using Docker), it is better to choose a VPS with KVM virtualisation. This is particularly suitable for app development and testing, and for running several websites that should be placed in isolated environments.
A VPS with LXC virtualisation costs less, but you must put up with the fact that its options are bound to the OS compatibility of the host server. Its use is therefore significantly more limited and better suited to smaller projects such as web hosting, particularly if you intend to manage your content with a CMS (WordPress, Joomla! or Drupal, for example), since CMSs are compatible with container-based virtualisation technology.
Choose from a Wide Scale of VPSs
Besides KVM and LXC, Master also offers VPSs virtualised using Hyper-V, Microsoft’s hypervisor, which is used exclusively for Windows. Browse the complete range of our virtualised servers and choose the ideal solution for your project.
Brute Force Attack
GRIDINSOFT TEAM
A brute force attack attempts to break the code (password, passphrase, encryption key, etc.) by consecutively trying all possible character combinations until the right one is found. Such an attack can be characterized as systematic guessing. In cryptography, the “brute force” term reflects virtually unlimited time or computing power the hacker needs to perform the attack effectively, not the nature of the code breaker’s interaction with the targeted system.
An exhaustive search method, brute force plays a symbolic role in cryptography. Although it is the slowest code-breaking method, it is also the purest in its artless efficiency. Thus, the capacity of a cryptographic protection method (or a particular password or key) against the brute force attack can serve as a criterion of its effectiveness.
Strengths and Weaknesses of Brute Force
As noted above, brute force attacks are virtually impossible to repel: breaking the password may take the offenders years, but they will eventually succeed. Therefore, all protective measures against such attacks (except for highly impractical implementations of unconditional security) come down to making brute force useless. And that is relatively easy to do.
Imagine a four-digit code. It would take a human a lot of time to test ten thousand variants to see which one is correct, while a computer will find the needed combination in less than a second. Despite such evident computational dominance of a machine over a human, a strong password makes this advantage irrelevant. An 18-character password featuring lower and upper case letters, digits, and special symbols will keep even the most powerful computer busy for millions of years.
This table shows the difference in the time it takes to hack passwords differing in strength.
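A back-of-the-envelope estimate makes the scale of the problem clear. The sketch below counts the keyspace for different character sets and password lengths and divides it by an assumed guess rate; the ten-billion-guesses-per-second figure is an arbitrary illustrative assumption, since real cracking speeds vary enormously with the hash algorithm and hardware.

```python
# Back-of-the-envelope brute-force estimate. The guess rate is an assumed,
# purely illustrative figure; real speeds depend on the hash and hardware.
GUESSES_PER_SECOND = 10_000_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

charsets = {
    "digits only": 10,
    "lowercase letters": 26,
    "upper + lower + digits": 62,
    "upper + lower + digits + symbols": 94,
}

for name, size in charsets.items():
    for length in (4, 8, 12, 18):
        keyspace = size ** length
        years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
        print(f"{name:32s} length {length:2d}: ~{years:.2e} years to exhaust")
```

Even with these generous assumptions for the attacker, the 18-character mixed password lands on astronomical timescales, which is the whole point of the comparison.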
Encryption keys used for secure communications can be targeted by hacking attacks just as passwords can, but nowadays they are similarly out of reach of brute force. Encryption keys – bit strings used for scrambling transmitted data and unscrambling it upon reception – can be pretty long, and breaking them is also a difficult task. For example, AES (the Advanced Encryption Standard) with 256-bit encryption keys makes reaching the encoded traffic in a reasonable time impossible.
That is the state of affairs for modern broad-market machines. The progress won't cease, though. Quantum computers brought to the world will jeopardize today's best classical encryption methods. IBM's already existing 100+ qubit quantum computer and its successors are impending rivals for 256-bit encryption schemes, most likely to render them obsolete. A cipher that a "classic" machine would take ages to decrypt could be a piece of cake for a powerful quantum computer. Fortunately, such computers will likely be too expensive to be available for everyone.
Modern Technical Issues
Besides strong passwords and long encryption keys, various technical solutions oppose brute force attacks. The programs that receive and check passwords, online or offline, have security measures against brute force. These are CAPTCHA (a well-known quick anti-robot test), a programmed delay between allowed attempts to enter a password, IP/account blocking if password-guessing becomes evident, etc.
Another brute force counter-measure is data obfuscation, which applies to encrypted data. It is an additional technique wherein data is altered by certain algorithms that obscure the information for the human eye. Obfuscation has nothing to do with the encryption itself. However, it might keep data from being recognized as correctly decrypted, or prevent hackers from making timely use of the decrypted information.
What Is Brute Force Good For Then?
What's the point of such an attack if there are so many things that make it useless? – you might ask. That's a reasonable question. The answer is that, although surrounded by outstanding technical protection measures, the human user provides the most critical vulnerability. Few people follow password-related safety rules unless the technology forces them to obey prescribed security regulations. Large tech companies, like Apple or Microsoft, take care of that, but not all companies do so. Passwords like "123" or ones featuring pet names are still very widespread. Why this is dangerous, and how it unties the hands of brute force attackers, is covered below.
Tools for Brute Force
Brute force attackers use various tools to gain access to your systems. You can also use these same brute-force tools yourself for penetration testing.
Penetration testing is the practice of probing your own computers the same way hackers do. These tools can help you identify weak points in your security.
|Tool|Description|Language|Free|
|---|---|---|---|
|Hydra|Brute force tool for login cracking, used either on Linux or Windows/Cygwin; also Solaris, FreeBSD/OpenBSD, QNX (Blackberry 10), and macOS. Hydra supports many protocols such as AFP, HTTP-FORM-GET, HTTP-GET, HTTP-FORM-POST, HTTP-HEAD, HTTP-PROXY, and more.|C|🆓|
|Gobuster|Gobuster is used to brute-force URIs (directories and files), DNS subdomains, and virtual host names.|Go|🆓|
|BruteX|Brute force all services running on a target.| |🆓|
|Dirsearch|An advanced command-line tool designed to brute force directories and files in webservers (web path scanner).|Python|🆓|
|Patator|Patator is a multi-threaded tool written in Python, which strives to be more reliable and flexible than its predecessors.|Python|🆓|
|Pydictor|Pydictor is a dictionary builder for brute-force attacks.|Python|🆓|
Types of Brute Force Attacks
Hackers have developed tools that harness the computational power of the brute force method while avoiding its disadvantages. A simple brute force attack uses no outside logic, and since it cannot be expected to succeed against strong passwords, hackers had to narrow the area to which the brute force method is applied – and they did. A plain brute force mechanism spends time and resources on myriads of variants that are irrelevant to what it can realistically crack. The variations of the method listed below, however, can be successful against weak or lexeme-based passwords.
Hybrid Brute Force Attack
This type of attack uses a previously gathered set of words and digit combinations, candidates for password bases. It works as a usual brute force attack but concentrates efforts only on variations of the words in the list. The addition to the simple brute force hacking program, in this case, is software that produces the mentioned variations. Hybrid attacks are effective against weak passwords (“111,” “123456”) or name-based passwords combined with numbers (“Richard2000”).
A dictionary attack is an older version of the hybrid brute force attack – or, better to say, what a brute force attack must be combined with to become a hybrid attack. A dictionary attack is the machine-speed trying of different words, either by scrolling through a dictionary or by using pre-gathered word lists.
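To see how little blind "brute force" is left in these list-based attacks, consider the candidate generator below. It takes a tiny illustrative wordlist and produces the kinds of variants (capitalisation changes, appended years) that such tools typically try; the wordlist and the target hash are made up for the example.

```python
# Illustrative dictionary/hybrid candidate generator (toy wordlist, toy target).
import hashlib

wordlist = ["password", "richard", "dragon"]                # toy dictionary
target_hash = hashlib.sha256(b"Richard2000").hexdigest()    # pretend leaked hash

def candidates(words):
    for word in words:
        for base in (word, word.capitalize(), word.upper()):
            yield base                       # plain dictionary guess
            for year in range(1990, 2026):   # hybrid step: word + digits
                yield f"{base}{year}"

for guess in candidates(wordlist):
    if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
        print("Cracked:", guess)             # finds "Richard2000" almost instantly
        break
```

Instead of the astronomical number of combinations a blind search would face for an 11-character password, the generator above tries only a few hundred candidates – which is exactly why name-plus-year passwords fall so quickly.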
Rainbow Table Attack
It is a special variant of lookup tables for reversing cryptographic hash functions. It relies on a reasonable compromise between the time of the search (through the table) and the memory it takes to store it. Rainbow tables are used to crack passwords that underwent hashing and to attack open-text-based symmetric ciphers. The method is based on the fact that different passwords can produce the same hash. If the malefactors know the hash value, they can use the tables to find the password relatively quickly.
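A real rainbow table stores compressed chains of hashes and requires a reduction function, which is more than a few lines of code, but the underlying idea — pay the hashing cost once up front, then reverse leaked hashes by lookup — can be shown with a plain precomputed table:

```python
# Simplified idea behind precomputed-table attacks: hash candidates once,
# then reverse any matching hash with a dictionary lookup. (A genuine rainbow
# table stores hash *chains* instead, trading lookup time for far less memory.)
import hashlib

candidates = ["123456", "password", "qwertyuiop", "letmein"]

# Precomputation phase, done once and offline.
lookup = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

# Attack phase: reversing a leaked, unsalted MD5 hash becomes a dict lookup.
leaked = hashlib.md5(b"qwertyuiop").hexdigest()
print(lookup.get(leaked))   # -> 'qwertyuiop'
```

Salting each password before hashing defeats this kind of precomputation, which is one reason modern password stores salt by default.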
Reverse Brute Force Attack
If criminals get hold of leaked passwords but do not know which logins they belong to, they start picking logins instead. The attack is executed the same way as a usual brute force attack on a password, but it targets the login field; that is where the name of the method comes from. Hackers may also check whether any client of a certain service or network uses a widespread password like "qwertyuiop". This is less efficient when the login can be found through OSINT, simply by searching for the email address or username related to the place the attacker is trying to enter, or through social engineering, which is exactly what attackers do.
Since people often use the same passwords, and even the same login-password pairs, on different websites, as soon as any of these pairs falls into the hands of malefactors, the latter can use credential stuffing to test whether the hacked credentials work on other websites. The process is automated, and hackers can easily add word-variation generation to it.
Motivation for Brute Force Attacks
As you have probably noticed, malefactors sometimes attack specific users and their particular accounts, and sometimes attempt to hack whatever they can reach. Although codebreakers might seem uncertain about their goals, the cybercriminal world is very diverse, and crooks will find a use for any hacked account, mailbox, or device. If it is a targeted attack and the offenders get exactly what they hunted for, that is a big win for them; the victim should be ready to suffer reputational, financial, or political losses. Even if hackers manage to hack only something minor, they will know what to do with it. Gathering information, identity theft, or malware installation (coin miners, ransomware, botnet software, etc.) is very likely to happen, and hackers can monetize any of these activities on the respective black markets.
How to Prevent Brute Force Attacks?
The following security measures will effectively make brute force attacks pointless:
- Use strong passwords. Many services offer recommendations for strong passwords; do not neglect them;
- Change passwords regularly. A password can be leaked regardless of its strength. To avoid account hijacking, change your passwords at least twice a year, especially if you use the same or similar passwords for multiple services;
- Use 2-factor authentication. This option requires confirmation of your identity via another device after you (or an attacker) enter the correct password;
- Progressive delays on wrong password input, CAPTCHA procedures, and account lockouts (after the wrong password has been tried a certain number of times) are also good security features. You can activate them if you administer a workgroup; a minimal sketch of such a lockout follows.
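As a rough illustration of that last point, here is a minimal Python sketch of progressive delays plus a lockout threshold. It keeps its counters in process memory and uses invented thresholds; a production implementation would persist the state and pair it with CAPTCHA and alerting.

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5            # lock the account after this many failures
BASE_DELAY_SECONDS = 0.5    # delay doubles with every failed attempt

failed_attempts = defaultdict(int)

def check_login(username: str, password_ok: bool) -> str:
    """Toy login gate with progressive delays and a hard lockout."""
    attempts = failed_attempts[username]
    if attempts >= MAX_ATTEMPTS:
        return "account locked - contact support"
    if password_ok:
        failed_attempts[username] = 0
        return "login ok"
    failed_attempts[username] += 1
    time.sleep(BASE_DELAY_SECONDS * (2 ** attempts))   # 0.5s, 1s, 2s, 4s...
    return "wrong password"

# A brute force run hits exponentially growing delays, then a hard stop.
for _ in range(7):
    print(check_login("alice", password_ok=False))
```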
People who design explosive detection devices may have something to learn from their canine counterparts, according to Matthew Staymates, a mechanical engineer and fluid dynamicist at the National Institute of Standards and Technology (NIST).
Over the past few years, Staymates and his colleagues have been researching ways to improve explosive detection. Staymates turned to detection officials’ partners, namely dogs, for ideas. He said the way dogs inhale and exhale could provide an aerodynamic clue for how to model detection devices in the future. Detection devices, such as the ones used at airports, use wand-like tubes that suck in air through a little hole. The devices feed that air into an analyzer. Suspicious chemicals go undetected unless the vacuum-like device scans directly over it.
Dog noses, on the other hand, can cover a larger swath of area in less time. Staymates explained that dogs can propel air jets away from them, and then pull other jets of air toward them as they inhale. Dogs’ “aerodynamic reach” pulls odorants toward them, enabling the animals to cover more ground than human detectors.
“We don’t do it the way a dog does it,” Staymates said. “A dog is an active sampling system, and pulls new vapor toward itself. There are lessons from learning what a dog is doing.”
To test the effects of a dog’s aerodynamic reach, Staymates decided to replicate one. He used a 3-D printer to create a plastic dog nose based on a female Labrador retriever’s snout. The lifelike plastic nose, which resembles a mask, cost less than $5 to make. Staymates and his team retrofitted their model nose to a commercial vapor detector and used schlieren imaging, an aeronautical engineering technique used to view the flow of air around objects, to confirm that their false nose functions much like a real dog’s.
Staymates and his team compared the air-sampling performance of their creation, which "actively sniffs" like a dog, to a trace detection device that relies on continuous suction. Trace detection refers to picking up small, non-visible remnants of a substance. Using a mass spectrometer, Staymates found the sniffing artificial dog nose was four times better at 3.9 inches away from the vapor source and 18 times better at a stand-off distance of 7.9 inches.
While the dog nose model gave Staymates a clue as to how aerodynamics could shape future detection devices, he stated he still does not know how that technology will physically manifest in the future.
“We’re not going to replace the real dog. But they get tired and cranky and hungry,” Staymates said. “I don’t envision the next generation of devices will look like dog nostrils. The relevant thing is the holes for the air. I see this working well in the world of trace detection.” | <urn:uuid:7abb335e-9a19-48c1-8d87-533251c177ca> | CC-MAIN-2022-40 | https://origin.meritalk.com/articles/bomb-detectors-can-learn-from-dogs-nist-engineer-finds/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00797.warc.gz | en | 0.943955 | 621 | 3.609375 | 4 |
Browser extensions are computer programs that add functionality to existing browsers. They come in as many kinds (and more) as there are browsers.
Internet Explorer distinguishes between toolbars and browser helper objects (BHOs). Other browsers like Firefox, Chrome, Opera, and Safari call them add-ons or simply extensions.
For PUPs, the economically most interesting browsers are the most popular ones, e.g. Chrome, Firefox, and Internet Explorer.
Browser extensions (in this case BHOs) were introduced with the release of Internet Explorer 4 in 1997. The first concerns arose because of their tracking capabilities and the fact that the extensions also affected explorer.exe. Other browsers give their extensions less power and are trying different approaches to stop potentially unwanted programs from installing. Google warns about installing extensions that did not come from the Chrome Web Store. Firefox and Opera use blacklists to disable extensions that are found breaking the rules.
Browser extensions offer some added functionality to the user. With PUPs these are often advertisement supported applications. Another common infection method for browser extensions is bundled installers. These bundles are designed to install more than the user bargained for and usually include one or more browser extensions that function as adware and/or hijackers.
There are many PUPs that use browser extensions to deliver their content. Currently most famous and widespread is Ask. It uses several subfamilies like MindSpark and Bandoo to get its toolbars installed. Other families that use extensions for more than one browser are Sanbreel/Browsefox, Crossrider, and Conduit/SearchProtect.
Removal of browser extensions is usually not difficult in the case of PUPs. Often they have working uninstallers, and browser extensions can be easily disabled (and in some cases removed) in the browser settings themselves. In more serious infections, it may be necessary to change the passwords you used while you were online.
Incomplete or incorrect removal of browser extensions can lead to browser instability and crashes. In cases of hijackers, it may take some extra steps to change back startpage(s), default search engines, and even “new-tab-urls.”
Don’t get tempted to install software without a bit of research. Sometimes all it takes is a short look at the EULA or a quick search about the application to know it’s better to stay away from it. Avoid bundlers by downloading from the publishers’ sites whenever possible. Sometimes the price for free software is more than you would be willing to pay.
In this example, Internet Explorer receives three extensions: two BHOs and a toolbar.
The toolbar pretty much looks the same in these three browsers.
The Firefox extension looks like this:
And this is the Chrome extension:
Looking at the details shows you the permissions for the extension.
Normally this PUP will only install the toolbar on your default browser, but we wanted to show you all the options.
October 30, 2018
VMware Disaster Recovery Best Practices
Disaster recovery is a process that includes a set of measures directed toward recovering the components of an infrastructure after a failure has occurred. DR also aims to minimize the negative effects caused by a disaster and to ensure business continuity. To prepare for the possible disaster types, companies usually compose a disaster recovery plan as part of a business continuity plan. Virtual machines are among the components at risk in a case of disaster, which is why you should prepare by developing a disaster recovery plan. This blog post explores the best practices of disaster recovery (DR) for a VMware virtual environment.
Compose a Disaster Recovery Plan
A disaster recovery plan is a structured document that describes a disaster recovery process as a set of actions to be performed by the appropriate persons in a disastrous situation. Furthermore, the document determines the criteria of what is needed in order to launch the plan. Both natural and man-made catalysts can cause a disaster. A DR plan should include different recovery scenarios for different disaster types and unplanned incidents. For example, a DR plan can describe what to do in a case of a ransomware attack, a power outage, a hardware failure, an earthquake, a typhoon, etc. A DR plan can be categorized: for example, the first section could explain network recovery, the second could focus on data center recovery, while the third would explain VM recovery, etc.
Prepare Your Recovery Site
A disaster recovery site is a place that can be used by a company as a way of recovering infrastructure and workloads when a primary site that is used for production purposes fails to function. Disaster recovery sites can be hot, warm, or cold.
A hot site is a fully functional DR site that is equipped with configured ESXi servers, storage, VM replicas, and user data. If a primary site fails after disaster, a hot site is ready to be used immediately. Deployment of a hot site is costly, but provides the possibility of the fastest recovery possible.
A warm site contains some equipment such as network equipment, gateway servers, ESXi hosts, as well as storage, but may not contain VMs and user data. In this case VMs should be recovered from backups, and user data may need to be copied, too. Additional equipment and software can be installed during the disaster recovery process, thus using a warm site is a compromised solution that requires middle costs, but provides affordable recovery time.
A cold site is a DR site that only has basic infrastructure. When disaster strikes servers must be configured, storage must be deployed, VMs must be recovered, and user data may need to be extracted from backups. Using this type of DR site requires more effort to recover VMs and workloads. This recovery process takes a long time, but the price of having a cold site is the lowest as compared to other site types.
Have Backups and Replicas Created Automatically
VM backups and replicas are the most important components of disaster recovery in a VMware vSphere virtual environment. A backup is a copy of VM data stored in a safe place; backed-up data can be compressed and needs time to recover. A VM replica is an identical copy of the source VM that resides on an ESXi host, is ready to start when needed, and is used during failover. Avoid relying on manual VM backups: important changes can be missed and lost when disaster strikes. Use appropriate host-level VM data protection software that can create VM backups and VM replicas automatically on a schedule.
Use VMware Clustering Features
VMware provides clustering features such as Distributed Resource Scheduler (DRS) cluster, High Availability (HA) cluster, and Fault Tolerance (available for VMs in a HA cluster). An HA cluster helps you minimize VM downtime, while Fault Tolerance (FT) allows you to avoid downtime of VMs in a case of hardware failure. Be aware that clustering features are not a substitution for backup and replication. High Availability with Fault Tolerance and backup with replication complement each other. The point is that HA and FT cannot protect data against corruption, the deletion of files inside the VMs, unsuccessful software updates, or other software failures etc.
Use Appropriate VM Recovery Order
Virtual machines should be recovered in the appropriate order. Imagine that you have multiple VMs with different applications that have dependencies on each other. The classic example is having a VM with Active Directory Domain Controller, a VM with a database server, and a VM with a web server. The VMs must be started in the following order:
- The VM with Domain Controller should be started first.
- The VM with a database server starts when the VM with Domain Controller is running because a database server uses Domain Controller for user authentication.
- The VM with a web server starts when the VM with a database server is running because the web server uses the database for proper operation in this case.
If you have a VM with MS Exchange mail server, that VM must start after the VM with Domain Controller because MS Exchange is integrated with Active Directory for user authentication.
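A minimal Python sketch of that ordering logic is shown below. The two helper functions are placeholders: in practice they would wrap your vSphere SDK of choice (pyVmomi, PowerCLI, or the orchestration features of a backup product), and the VM names and timeouts are invented for illustration.

```python
import time

# Startup order from the example above: each VM starts only after its
# dependency is confirmed to be up.
START_ORDER = ["dc01",    # Active Directory Domain Controller
               "sql01",   # database server, needs dc01 for authentication
               "web01"]   # web server, needs sql01 for its data

def power_on(vm_name: str) -> None:
    print(f"powering on {vm_name} ...")   # placeholder for a vSphere API call

def is_running(vm_name: str) -> bool:
    return True                           # placeholder for a real health check

def recover_in_order(vm_names, timeout_s=600, poll_s=15):
    for vm in vm_names:
        power_on(vm)
        deadline = time.time() + timeout_s
        while not is_running(vm):
            if time.time() > deadline:
                raise RuntimeError(f"{vm} did not come up within {timeout_s}s")
            time.sleep(poll_s)
        print(f"{vm} is up, moving on to the next VM")

recover_in_order(START_ORDER)
```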
Use Appropriate VM Network Configuration
A production site and a disaster recovery site may have different networks for VM connection. Virtual network adapters of VMs are connected to ports of virtual switches (vSwitches). Port groups represent different networks with network names and the appropriate addresses. If you recover a VM to a DR site, but the VM is configured for connecting to the network of a production site (which differs from the network used for VMs on a DR site), the VM network connection cannot be established. In this case, don’t forget to change the network settings of VMs when you recover the VMs at the DR site.
Prepare Your VM Storage
There must be enough free space in the storage that is used at a DR site in order to store VMs. This is the first and the most critical requirement. Storage must also provide enough performance; otherwise the business-critical services that run on the VMs may lag. If network-based storage such as NAS (Network Attached Storage) or SAN (Storage Area Network) is used, the network speed must be fast enough to cope. The storage network at a DR site must be a dedicated network that is separated from other networks.
Test Your Recovery Plan Regularly
A disaster recovery plan may look good on paper, but may be useless in a case of disaster if it is not tested in advance. Thus, be sure to test your DR plan on a regular basis. Testing allows you to check if the DR plan is workable, and if the RTO and RPO can be met. Testing also allows you to detect the disadvantages of the DR plan, and hence allows you to make adjustments to fix them. Test your DR plan regularly to make sure that your vSphere virtual environment can be recovered. Infrastructure may change with time, and after changes occur a DR plan that was recently workable may not meet the appropriate requirements anymore. For example, some VMs may be added, IP addresses may be changed, applications may be migrated from one VM to another etc. Regular testing allows you to detect which parts of the plan should be updated after infrastructure changes have been made, in order to keep the DR plan in an efficient state.
Find the Right Site Recovery Solution
When you have composed the DR plan, find the site recovery solution that best meets your needs. In a case of using VMware vSphere, a solution should support host-level VM backup/replication, fast restore from backup, failover to a VM replica, the entire VM recovery and individual object recovery. Try to choose a suitable solution with the appropriate functionality, which would allow regular DR plan testing and updates.
NAKIVO Backup & Replication for VMware Disaster Recovery
NAKIVO Backup & Replication is a fast, reliable, and affordable VM data protection solution that can protect your VMware VMs. Among many other things, the product can perform host-level VM backup and replication, individual object recovery, instant VM recovery, and failover to a VM replica. No agents need to be installed on VMs as VMware vStorage API for data protection is used. Moreover, NAKIVO Backup & Replication includes a new Site Recovery functionality, with which you can perform disaster recovery of entire sites with (not only) VMware VMs.
Site Recovery Overview
Site Recovery is a powerful feature that helps you recover your VMs from one site to another in a case of disaster. This feature can also be used for planned VM migration between sites. You can build automated recovery workflows and run them for planned or emergency failover, as well as for testing purposes.
Site Recovery Features
Site Recovery allows you to automate and orchestrate a VM disaster recovery process. The feature includes a set of actions and conditions that you can combine into a site recovery workflow (job) according to your disaster recovery plan. These actions are:
- Failover VMs. You can fail over to a VM replica (the VM replica must be created before performing the failover action).
- Failback VMs. You can transfer workloads back from a VM replica stored at a DR site to a source VM stored at a production site.
- Start VMs. You can start one or multiple VMs.
- Stop VMs. You can stop one or multiple VMs.
- Run jobs. You can run jobs (backup, replication, Flash VM Boot, etc.) created in your NAKIVO Backup & Replication instance.
- Stop jobs. You can stop running jobs.
- Run script. You can run a script on a machine with the instance of NAKIVO Backup & Replication, on a remote Windows machine, a remote Linux machine, a VMware VM, a Hyper-V VM, or an EC2 instance.
- Attach repository. You can attach a backup repository.
- Detach repository. You can detach the already attached backup repository.
- Send emails. You can send an email after the appropriate action, for example, if VM failover was completed successfully.
- Wait. You can wait for a defined time before proceeding to the next action.
- Check condition. You can check the following conditions before proceeding to the next action: if a resource exists, if a resource is running, and if IP/hostname is reachable.
You can flexibly use the listed actions for creating different site recovery jobs for different use cases and scenarios. Click the Run Job button and all actions would be started automatically in the defined order. Site recovery jobs can be run manually in production and testing modes, but when you configure your site recovery jobs to run automatically as scheduled tasks, they are run in test mode.
Site Recovery Benefits
Site Recovery is a powerful, convenient, and intuitive feature. This feature can simplify disaster recovery for VMware vSphere virtual environments, as well as allows you to spend less effort and investment on business continuity.
To summarize the benefits of Site Recovery:
- It helps you implement your complex site recovery plans in the framework of your disaster recovery strategy.
- It automates a disaster recovery process.
- It reduces the time spent on disaster recovery. (As a result, you have less downtime, fewer interruptions of services, and cut costs.)
- Site recovery jobs can be tested automatically to detect whether your site recovery plan is up to date, as well as whether RPO and RTO can be met.
- Site recovery is not a standalone feature, but is built into the powerful and universal VM data protection solution where it can be managed from a single pane of glass.
- It has an affordable pricing policy. You don’t need to buy a separate license for using Site Recovery if you already have a license for the appropriate NAKIVO Backup & Replication edition.
The disaster recovery of a VMware vSphere virtual environment is an important process in ensuring business continuity. VMware disaster recovery best practices include the creation of a disaster recovery plan, as well as the automatic creation of VM replicas that are required for VM failover. Using VM backup and replication in addition to vSphere clustering features is recommended. Define your VM recovery order, prepare your disaster recovery site (including the network and storage components), make sure to test your disaster recovery plan regularly, and use a suitable data protection solution that supports host-level VM backup, replication, and recovery.
NAKIVO Backup & Replication is a universal VM data protection solution with support for VMware virtual machines. Site Recovery is a powerful new feature that is included in NAKIVO Backup & Replication since version 8.0. Site Recovery allows you to implement your disaster recovery plan by creating automated site recovery jobs. This useful feature helps you orchestrate and automate a disaster recovery process, recover VM data fast as well as ensure a high level of data protection. Download NAKIVO Backup & Replication with Site Recovery and try the product in your VMware vSphere environment. | <urn:uuid:76bf741f-d42c-477e-9124-f3ca005242a4> | CC-MAIN-2022-40 | https://www.nakivo.com/blog/vmware-disaster-recovery-best-practices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00797.warc.gz | en | 0.913389 | 2,685 | 2.796875 | 3 |
Business Continuity Management
In an ideal world, the IT cybersecurity manager will know the major continuity risks a company faces and will have preparations in place to make sure nothing that disrupts critical processes ever happens. But if a disaster does occur, the cybersecurity manager needs a plan for that as well. We're covering business continuity management in this chapter from a cybersecurity angle. The cybersecurity manager won't be alone and should cooperate with other business units to cover other types of continuity risks besides cybersecurity.
Planning for those disasters is also known as business continuity management, the high-level planning for how to recover, restore normalcy, and minimise the loss after a serious incident. For instance, if you’re running a factory and there’s a flood, you might have to pause production for a week. That’s major. Water damage might be one risk; fire might be another. Of course, these types of disaster are improbable, depending on where your business is located.
Most companies, however, face the risk of an IT problem disrupting their factories or core processes. Most businesses depend on IT running smoothly. That means computers, chips, software, networks—even the flow of money depends on everything working in concert. The question is, when something major disrupts IT and, in turn, interrupts critical production and processes, how will you react, minimise the loss, and restore operations to normal? And what preparations can be made to make sure nothing like that happens?
The simplest scenario that affects just about every business would be a communications’ breakdown. As we know, almost every company is using the cloud. This means that if a factory is up and running, it’s connected to the cloud, and its business processes depend on the cloud. What happens when there’s no network? It can’t help but disrupt some part of the business.
Business continuity management requires looking at a number of different pieces: the network connecting the offices, the production sites, and the data centre or cloud are the obvious ones, but not the only ones. Another curious area to focus on is the domain name system (DNS). This is usually critical to everything happening inside networks. If the DNS fails or someone takes it down, the systems cannot resolve host names. They cannot find each other unless they are using an IP address for connecting. Similar systemic dependencies exist in most IT environments. The key is to identify single points of failure and build redundancy to counter scenarios that could bring business down to its knees.
In a perfect world, the cybersecurity manager would have a budget to cover double and triple backups or to run a redundant server for all systems. Few do. Cybersecurity managers have to at least cover the critical functions; the difficulty lies in knowing which parts are critical.
Testing Is Essential
IT continuity management requires preparation, planning, testing, practising, and updating. A lot of companies focus on the plans but neglect the testing and practising parts because they’re harder and more expensive. But without testing, the company has no assurance that the backup plan actually works. Without practice, people won’t have the skills and experience to do what’s necessary to get everything back online in the event of a disaster.
It’s not unusual for a company to sink a million dollars into redundancy and backup hardware, or even establish a secondary disaster recovery site for IT, without ever testing the system. Will it work if something happens? No one knows.
We’ve seen numerous examples where a company had a primary processing site and a secondary site—a hot site and a cold site—and they never tried to switch over and operate it from the other side. If they don’t even know if it works, what is the point?
Levels of Preparedness
Disaster recovery preparedness exists on a spectrum; at the lowest level, the company does not have any redundancy. Everything’s running on one site, and that’s it. If a system or network goes down, they’re pretty much sunk and forced to recover any way they can. Quite often, this level of preparedness makes recovery slower; perhaps it takes a few days to get everything back on track.
The second level of preparation is to have some level of replication in place, like a cold site that holds data from the systems in a separate physical location or even in the cloud. Then, if there is an event, the other system can be brought to life, and network traffic can be routed to that new site. Ideally, systems can be brought back online in a relatively short amount of time, probably within a day or so.
Is It Warm in Here?
- Cold site—Has facilities, perhaps data backups available.
- Warm site—Ready to be turned on or scaled up, then reroute traffic to restore functionality.
- Hot site—Everything critical is replicated and running. Even if the other site goes down, it should not affect operations.
Let’s say there is a primary data centre somewhere and the cybersecurity manager wants to make sure that by having a cold site, they can recover in a few days’ time. The data, and maybe the hardware, are in the cloud and some other physical location, and there’s a plan in place for making the switch. If the network goes down or a data centre is destroyed, the cybersecurity manager can start firing up those systems in critical order in the new place. This will require some installation work for the systems, restoration of data from backups, and so on. Not the fastest option for recovery.
A more expensive option for redundancy is to create a recovery hot site. In a hot site, everything is duplicated in another location that resembles the primary data centre. Data is copied almost in real time from site to site, with dual systems up and running at all times in case something goes down in the primary site. The big benefit of using a hot site is that the switchover to a backup is almost transparent to users, at least in theory. Because the second site is hot, it’s running all the time—perhaps with a bit fewer resources than the primary one but functional nevertheless. Of course, this level of redundancy costs a lot of money.
The absolute minimum is to have backups of all the important data in another location, not all in one place. If companies don’t do that, they’ve lost a lot of insurance. There’s nothing the cybersecurity manager can do if the data is lost entirely. But if they have backups somewhere else, they always have a chance to recover. Remember 9/11? A lot of companies went down with those two towers. The ones who had backups from their data and some capacity to rebuild their systems mostly made it through the disaster. But there were many that didn’t.
Threats to Continuity
Business continuity risks are perhaps the most important ones to prepare for. Losing the company’s data is one, of course. It’s amazing how many companies don’t have backups. Many think they do until they test it the first time, usually in the event of an actual disaster. Unfortunately, some of them find out the system was behaving in an unexpected way. Maybe it wasn’t really doing a backup, or maybe the backup was encrypted so that no one could get their hands on it, but the only encryption key was in the primary system. Or the decryption and restoration process turns out to take so long that it doesn’t matter. We’ve seen all these scenarios happen, and whatever the reason, they happen a lot.
Backup, Backup, Backup!
- Have a backup plan made—how, when, and what data?
- Store the backups in a separate physical location.
- If you encrypt your backups, make sure you can decrypt as well (a minimal restore-test sketch follows this list).
- Test the backup restoration frequently—once is not enough!
- Check whether restoring backups can be done fast enough for business purposes.
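Here is a minimal Python sketch of the restore test implied by the checklist above: encrypt a payload, then prove the stored key brings it back unchanged. It assumes the third-party cryptography package and a toy payload; in practice you would run the same round trip against a real backup file and keep the key in a location separate from the backed-up system.

```python
from cryptography.fernet import Fernet   # pip install cryptography

# Generate a key and keep it OUTSIDE the system being backed up;
# a key stored next to the data is lost together with the data.
key = Fernet.generate_key()
encrypted_backup = Fernet(key).encrypt(b"pretend this is a database dump")

# Restore test: prove the stored key actually decrypts the backup and
# that the round trip is lossless. Run this after every backup cycle.
restored = Fernet(key).decrypt(encrypted_backup)
assert restored == b"pretend this is a database dump", "restore test failed"
print("restore test passed")
```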
One hospital in Singapore demonstrated just how bad these scenarios can get. They had a problem with ransomware, like nearly every company; nobody is immune, it just varies how much damage the attack does to operations. This hospital got hit badly. They had ten locations running—all hospitals and clinics—and someone clicked a link on a phishing email and downloaded a ransomware and got the receptionist’s computer infected.
After the initial infection, it took around three months until the hackers launched their attack. They had been infecting everything inside the network during that time. When the ransomware hit them, it activated on seven of the ten sites, and all the servers and workstations. All the data in the servers were encrypted. The company received a message telling them to contact a certain email address and pay a large sum of money to get the decryption key to get their data back.
Seven locations had no access to their office records or patient records or anything else. This is, of course, a continuity event. They couldn’t bill patients, couldn’t see customer data or anything. The IT manager went to restore data from the backups. They expected to be up and running again the next day, or in a few days’ time at least. But guess what? Their backups had been done with the same servers and technology that they used for storing the data normally. They had set up Windows shared folders and used software to backup data from servers and workstations to this backup share. The ransomware, of course, was able to find all shared folders in network and encrypt everything. So all their backups were encrypted, too, by the same attackers. Data wasn’t copied to an off-site location away from the company’s operational IT systems. Big mistake.
So they had two options. The first one was to pay the ransom. The second one was to re-enter all the customer data manually from paper printouts. They finally opted to take whatever printouts they had in their archives and re-enter the data manually. Since they didn’t pay the ransom, they were never able to access their data, and they ultimately lost it all.
Whether it’s a ransomware attack or something else, the most common danger of IT disruptions is system downtime due to system malfunctions, unexpected changes, or any other unforeseen reason. Companies don’t understand how often this happens and what it means to the business. They think they have backups, so it doesn’t matter. When a problem occurs, it’s many times more expensive to fix than the cost of preparing for something upfront. The lesson is, if companies do nothing else, they should do backups and do them often. End of story.
Nobody wants to risk a total loss of data, so one task that always makes the critical list is to put a system in place for regular backups. “Regular” depends on the type of business and system. This obviously needs to be planned well because the amount of systems, data, backup frequency, restoration tactic, and every other detail has to go in there.
There are two things cybersecurity managers need to figure out when they’re planning a backup system. The first is what data is important and needs to be in the backups, like customer data or banking records or encryption keys for the backup. The second is how often to make the backup. This might not seem very important, so many people tell us they do it every Friday and assume that’s good enough. But if they’re collecting data 24/7, 365 days a year, weekly backups aren’t often enough.
What they don’t think about, in many cases, is that the date and time of a backup defines the point in time to which they can return—the latest data they can get. Think of it as a time machine that can go back to the moment when the last backup was made. All progress that happened after that moment can be lost forever! A problem on Saturday morning is fairly minor if there’s a backup from Friday night. But what if the disruption happens on Friday morning? That means they will lose one full week of data because they have to go back to the last week in their backups. In business terms, that means the company has decided it’s okay to lose one week’s worth of data. In professional terms, we call this their Recovery Point Objective, RPO. Their RPO is then seven days’ worth of data that can be lost, and it’s okay to the business. This is a fairly high-level decision in any company. A lot can happen in a week. In most companies, that is not acceptable.
The cybersecurity manager should advocate for frequent backups. If IT gets to define the RPO, backups will most likely be made every Friday for all systems, or maybe once a month with an incremental backup weekly for whatever data was changed. Quite often, they forget to ask the executives what is actually an acceptable amount of data loss that the business can bear. The wise cybersecurity manager makes sure that this discussion takes place and that everyone understands the implications.
If there’s a backup system in place, a lot of people in the company are going to ask the cybersecurity manager why they should do more than that. People will ask the cybersecurity manager why they should care. They’ve got this backup system, but they don’t see why they need a plan for it. The IT security manager or cybersecurity manager has to answer that question.
Why does the company need a backup plan, or why should they test backups? Why spend on redundancy, or on something as expensive as hot-site replication? Because the probability of IT failures and cyberattacks is fairly high, much higher than conventional disasters like fires or floods. These things happen almost yearly, sometimes many times every year. Cyberattacks occur routinely; a fire happens in a business building maybe once in fifty years, on average. Yet most office buildings are mandatorily equipped with fire exits, fire alarms, sprinkler systems, and so on. Such measures are often much more expensive, but because they are invisible to the eye, people overlook their importance.
Remember, there are risks that you or the business can never accept if they are in their right mind. Continuity risks are often like this. Nobody would risk the lives of employees, family, or themselves. Why would they jeopardise their whole business if they understand the risk correctly? Of course, they wouldn’t!
Business Impact Assessment
One effective way to demonstrate the need for a plan is to do a business impact assessment (BIA). Doing a BIA lets people see what would happen in different scenarios. For instance, the cybersecurity manager could work with a technical person and a business person, and challenge them to imagine what would happen if a certain IT service or system was not available. What if it’s down for a couple of hours? A day? A week? What happens to the business? By considering these scenarios, the participants will see that the longer the downtime, the bigger the impact on the business.
In most cases, in two hours, probably not that much happens. The service desk handles the customers with apologies. What if the outage lasts a week? How would people respond? Is there any way to do things manually for a longer period of time? What plans should be in place for this situation? Writing these procedures down and estimating their cost and effectiveness creates a BIA.
Assess Impacts Quickly
- Find a few people who know business and IT.
- Make a one-page form where you can write answers down.
- Ask them what happens to the business if a system is unavailable for a few hours, a few days, a week, or a month.
- Record the result of the discussion on your form.
- Repeat for all important systems.
The BIA can be extremely useful, but most companies don’t do it. Even companies that recognise the critical nature of certain systems don’t usually do a BIA.
Companies usually begin doing BIAs when they grow to have hundreds of IT systems in place, and they need to classify them and decide which ones are really critical. A BIA can help them do that. At first, they may list thirty to fifty items as critical, but after doing BIAs, they might be able to narrow it down to ten critical systems. Not everything can be considered critical. The BIA has the power to help companies prioritise.
Developing Disaster Recovery Plans
Sometimes, all the preventive measures fail, and disaster happens. That’s when the company needs a disaster recovery plan, or DRP. A DRP explains in detail how to recover from certain types of disasters. For IT, it should be technical in nature, like planning for blackouts or disruption of critical IT systems. A company can have a bunch of these plans—one plan for each scenario.
One scenario could be losing network connectivity—not having access to resources at your main data-processing site or the internet. For example, this could involve losing connectivity via VoIP to your customer service, or something like that.
For the plan to be effective, it should be drafted with the people who are actually responsible for using it when a disaster happens. The cybersecurity manager cannot make this plan all alone. Ivory Tower documents don’t matter at all if an IT disaster strikes—but this DRP should. The cybersecurity manager should think about who will react and take action to bring the facility or system back online. Those people should be on the team that creates the plan. It’s also critical that the people who are actually responsible for bringing systems back online have access to the DRP, have paper copies of it, and know how to use it.
The cybersecurity manager doesn’t have to be too pushy about getting these people involved. They just have to say something like, “Hey, I have these headers here on an empty document, and I need to explain what to do in the event of a disaster. So I’m going to share it with you online in our collaboration platform and ask you to fill it in. I’ll send a printed copy, then everybody can access it if something happens. Sound good?”
The DRP doesn’t have to be complicated; it might be expressed in simple bullet points. In minimum, it should contain a list of systems in order of importance, along with a list of instructions about what to do and who to contact in case of emergency. The plan should also have actionable details, like how to run diagnostics to find out the cause of the problems, log in to systems, restart servers or services, rebuild entire systems, restore data, and so on. It’s a very different type of document from a policy. A policy should be readable and understandable. The DRP should be useful for the people who are restoring the system—step by step, in very succinct terms. Its core purpose is to be there when people are working under a lot of stress, and they have no time to start looking for instructions. They’re going to be working under a lot of pressure, so if plan helps them find the information faster, then it will work.
The steps in the plan might be something like, “Is it working? If not, run this command. If that’s successful, it’s working. Go to the next step. Sign in to this system. See a log and what’s in there. Stop and start the service in question.” Very technical, very straightforward. A trained professional might do this in a few minutes’ time once they get a hold of the system and the DRP document. In the best case scenario, at least a few people will have access to the plan and practical experience on how to restore normalcy after a disaster.
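As a rough illustration, the sketch below automates that kind of runbook step on a Linux host with systemd: check whether a service is alive, restart it if it is not, and point the operator at the logs. The service name is hypothetical; substitute whatever your DRP covers.

```python
import subprocess
import sys

SERVICE = "nginx"   # hypothetical service name from the runbook
LOG_HINT = f"journalctl -u {SERVICE} -n 50"

def is_active(service: str) -> bool:
    # 'systemctl is-active --quiet' exits 0 only when the unit is running
    return subprocess.run(["systemctl", "is-active", "--quiet", service]).returncode == 0

if is_active(SERVICE):
    print(f"{SERVICE} is running - nothing to do")
    sys.exit(0)

print(f"{SERVICE} is down - attempting restart")
subprocess.run(["systemctl", "restart", SERVICE], check=False)

if is_active(SERVICE):
    print(f"{SERVICE} restored")
else:
    print(f"restart failed - escalate and inspect logs with: {LOG_HINT}")
    sys.exit(1)
```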
With a concise, well-organised DRP, the company might be able to restore operations in a matter of fifteen minutes after a minor disaster. Without a plan, it can be a huge circus just to discover where the problem is and find out who to call. If too much time is spent trying to sort out who’s in charge, people will start to do their own analysis without understanding how to debug the problem, and that makes things even more difficult. In such cases it often happens that people make incorrect deductions about the cause of the problem, and the fix will take even longer.
But if all the commands are listed in the DRP, anybody on the team can log in and follow the plan. There’s no need to be dependent on just one person. Anyone with a copy of the DRP can execute it.
Too Many Bulldozers
Here’s a story that illustrates the key points in this article. There was a payment processing company that processed credit card transactions for almost an entire country. It was a huge component of the economy in that country. If the payment processing company were to fail, people would stop paying in the stores. Failure was not an option.
The company actually planned pretty well. They had two physical locations for their data centres that were close but not too close. All the systems were redundant—they just duplicated all the servers and all the data; it was thought out and executed well. But they didn’t test several scenarios, like what happens if they physically lose connectivity between those two sites. Would the customer still be able to pay, or would the entire payment processing operation go down?
In between those two data centres, a construction site went up. The construction company dug a hole in the ground, twenty metres deep. One of the bulldozers cut the fibre line between the two sites, and the payment processing company’s primary site was disconnected from the internet. Then the switchover to the secondary site didn’t work right, so they were down for the remainder of the day until they could figure out and bypass the problem.
In another situation, a big internet service provider and mobile service provider had a similar problem. This was also due to a bulldozer, but it was way worse. They actually thought that their two physically separate fibre optic lines in two locations didn’t go into the same service tunnel, but they did. When the bulldozer hit, all their customers were cut off and couldn’t be restored until the physical fibre optic line was repaired. This put down communications for hundreds of thousands of people in the metropolitan area, and all their business customers as well.
Redundancy is often a challenge in communications because companies often buy communication lines from two or three different internet service providers but don’t realise those companies are sharing one physical fibre. If that line is severed, having a variety of providers doesn’t make any difference. This is especially true for transoceanic fibre lines—there just aren’t many choices. | <urn:uuid:da22adc3-8165-46a7-80c6-b0ffa44161c3> | CC-MAIN-2022-40 | https://cyberintelligencehouse.com/2022/02/17/risk-part-iv/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00797.warc.gz | en | 0.958714 | 4,788 | 2.8125 | 3 |
A virtual private cloud (VPC) is a “cloud within a cloud” configuration where an organization establishes a private virtual networking environment within a cloud service provider’s public cloud. This “private cloud in the public cloud” usually grants complete control over the private virtual space, security, and where resources are located depending on availability by the CSP. The major benefit of the VPC deployment is to offload infrastructure risk onto a CSP, with many subsequent benefits like reduced IT staff, and associated infrastructure and staffing costs, and future-proofing the organization's tech stack.
There are similar concepts that are sometimes confused with VPCs, such as virtual private servers (VPS) and virtual private networks (VPN). Virtual private clouds are similar to virtual private servers (VPS) but with significant differences. A VPS, like a VPC, exists in the cloud, but uses only a fixed portion of a server with fixed resources; when accessing a VPS, users interface with it as if it were a dedicated server. A VPS lacks efficient scalability, which distinguishes it from virtual cloud models. A VPC, by contrast, is not bound to fixed underlying infrastructure; its architecture allows it to scale on demand.
VPNs are not a server technology. Virtual private networks (VPN) allow users to securely access a company's intranet from outside the firewall and can be thought of as a secure, encrypted line over a public network like the Internet. Likewise, a worker can use a VPN connection to securely connect to a company's VPC from anywhere they can access the Internet. VPNs are used to secure connections and to transmit and receive data privately.
Virtual Private Cloud Features
Because virtual private clouds (VPC) are based in the public cloud space, VPCs have all the features expected from the public cloud—security, elasticity, scalability, and cost-planning and control. These are the key features cloud consumers expect from cloud service providers. VPCs, however, have additional security concerns, namely around how the CSP guarantees that a client’s VPC is isolated and protected from other partitions within the public cloud.
Isolating technologies include:
Subnet Masks — Subnet masks and subnetting divide a network's address space into smaller ranges. By defining subnets, a VPC can reserve ranges of private addresses that are routable only within the network and completely invisible to the public Internet (a small subnet-planning sketch follows this list).
Virtual Local Area Networks (VLAN) — A virtual local area network is a way to establish a group of computer devices that are logically segmented into VLAN that operates as a single network. Clients can be physically located anywhere.
Virtual Private Networks (VPN) — Virtual private networks (VPN) are not networks, but refer to the creation of a private connection to a network over the public Internet. VPNs use encryption to establish a secured “tunneled” connection and can be used to securely connect to a virtual private cloud.
Availability Zones — Availability zones logically and physically isolate partitions of the CSPs infrastructure within regions with their own power, cooling, and connectivity. By avoiding a single point of failure, availability zones help bolster redundancy and fault tolerance within the system.
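As a small illustration of the subnetting point above, the following Python sketch uses the standard ipaddress module to carve a hypothetical VPC block into a public and a private subnet. The addresses are examples only; real VPC subnet layouts are defined through the provider's console or APIs.

```python
import ipaddress

# Hypothetical VPC address block; a /16 leaves room for 256 /24 subnets.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

subnets = list(vpc_cidr.subnets(new_prefix=24))
public_subnet = subnets[0]    # exposed to the Internet through a gateway
private_subnet = subnets[1]   # reachable only from inside the VPC

print("VPC:           ", vpc_cidr)
print("public subnet: ", public_subnet, "with", public_subnet.num_addresses, "addresses")
print("private subnet:", private_subnet)
print("RFC 1918 private range?", vpc_cidr.is_private)
```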
Virtual Private Cloud Benefits
There are significant VPC benefits for companies that are considering establishing their own private clouds. With proper goals alignment, VPCs can prove to be a superior option over owning and operating a company’s private cloud internally.
VPCs are Reliable, Elastic, and Scalable — These three characteristics refer to a cloud's capacity to deliver. Because VPCs are housed in the public cloud, they share the original public cloud value propositions, reliable uptime and data access, elastic capacity able to meet growing capacity demand, and scalability to meet current workload demand.
VPC Security — Security is dependent on the needs of the system and compliance requirements. Leading public cloud providers with streamlined security processes can offer exceptionally convenient solutions to address security requirements. Providers that proactively upgrade their security measures also effectively provide VPC consumers insurance on future security needs.
Cost Savings — Public clouds are lauded for their pay-for-usage plans that have allowed organizations to effectively cost plan while offloading responsibility.
Virtual Private Cloud Architecture
Virtual private cloud architecture is built upon the same infrastructure other cloud models are. Including the technologies and practices that establish public cloud services, CSPs also use a three-tier architecture, and demilitarized zones to help organize VPC services.
Three-tier Architecture — As it sounds, three-tier architecture creates three interconnected layers that divide software responsibilities—web or presentation tier, application tier, and database tier. The presentation tier receives web browser requests and returns web pages and data stored within the other two layers. The application tier is what is considered the heart of the application where the business logic lives and works. The database tier houses the databases that store the data that the application tier interacts with and eventually sends that data to the presentation tier to be consumed.
Demilitarized Zone (DMZ) — Also called perimeter networks, DMZs are subnets established to create a buffer between the LAN, private cloud, or VPC, and the public Internet. DMZs provide access control, threat prevention, and detection of IP spoofing. The DMZ is protected from the Internet by a firewall, and then the enterprise LAN has a firewall that protects it from the DMZ. This configuration allows for resources to be exposed to the public, while also protecting the enterprise systems. If attacks do breach the DMZ, then they are stopped by the second firewall which is usually hardened against attacks.
How Virtual Private Cloud Works
Setting up a virtual private cloud (VPC) from a leading cloud provider is easy. After signing up for an account, the CSP will have you create a VPC with a single public subnet. From there, you’ll likely assign an IP address to access the internet, and then add an additional private subnet to your VPC. Based on the shared responsibility model and the services used, you’ll configure security features. At this point, you begin to use the VPC as you see fit, perhaps setting up a VPC peering connection that connects two VPCs that enables private traffic routing between them.
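As a rough sketch of those steps, the snippet below uses boto3 against AWS (other providers expose equivalent APIs) to create a VPC, one public and one private subnet, and an internet gateway with a default route for the public side. The CIDR blocks and region are placeholders, and running it requires valid AWS credentials.

```python
import boto3   # assumes AWS; Azure, GCP, and others offer equivalent APIs

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. The VPC itself: a private address block carved out of the public cloud.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# 2. One public and one private subnet inside that block.
public_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]
private_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]["SubnetId"]

# 3. An internet gateway plus a default route expose only the public subnet;
#    the private subnet keeps no route to 0.0.0.0/0.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_id)

print("VPC ready:", vpc_id, public_id, private_id)
```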
Private Cloud And Public Cloud
A public cloud is a shared pool of IT resources delivered to cloud consumers over the Internet by a cloud service provider (CSP). Depending on the level of service, cloud consumers and CSPs enter into a service level agreement (SLA) contract that defines the cloud service and for which parts each party is responsible (e.g. who is responsible for data, infrastructure, application, etc.).
Contrastingly, a private cloud is a cloud deployment model where a single organization owns and administers its own cloud and the underpinning networking infrastructure to support it. This model creates central access to IT resources for departments and staff across multiple locations and potential regions. Private clouds are implemented behind the organization’s firewall which is the major distinguishing factor from other cloud deployments models. In the private cloud model, the organization that owns the private cloud is both cloud consumer and cloud service provider (CSP).
Virtual Private Cloud (VPC) Vs. Private Cloud
Adopting a private cloud strategy demands that companies consider the worth of the network based on its business use, the necessity of private resources, and the cost of maintaining the network and supporting infrastructure, versus alternatives such as virtual private clouds (VPC), that enable private clouds in a public cloud space.
Private clouds are traditionally on-premise infrastructures secured behind enterprise firewalls. Their greatest benefit is complete control over all aspects of the cloud environment, from the choice of infrastructure to configurations, organization, and policies. However, the main drawback is the total cost of ownership and responsibility for maintaining the private cloud.
VPCs are also private and fully controlled by the cloud consumer, but they are public cloud offerings, for that reason, they also grant the cloud consumer the advantages of the public cloud—security, elasticity, scalability, and cost-planning.
Virtual Private Cloud (VPC) Vs. Public Cloud
Virtual private clouds (VPC) are single-tenant deployments in the public cloud. Cloud service providers can provision resources as a dedicated cloud for private use. This affords the cloud consumer the benefits of complete control while benefiting from public cloud flexibility, reliability, and scalability. In essence, VPC is no different than the public cloud except that it is isolated and secured for the use of one group.
Why Use A Virtual Private Cloud?
Virtual private clouds share all the benefits of public cloud space, reliability of services, flexibility of technologies, elasticity of capacity, and scalability of workloads, all underneath a pay-as-you-go plan. On top of those benefits, VPCs are fully controlled by the cloud consumer and have the benefits of private clouds: complete control over infrastructure and software choices, maximum control over configurations and customizations, ownership of network visibility and security measures, and ownership of compliance responsibilities.
You may already know about NEC’s submarine optical fiber project, which is over 2000 miles of fiber at the ocean bottom pushing about 10Gbps. NEC has put this technology to use in another capacity…earthquake and tsunami detection.
Working with the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), NEC has put one of the most advanced earthquake detection systems in the world about 4000 meters beneath the ocean. The system contains state of the art seismograph technologies and water pressure meters which help in detecting and predicting major ocean events.
But this is not NEC’s first time at a project like this. In 1979 NEC planted its first earthquake observation system off the coast of Japan. | <urn:uuid:edab696e-a46d-4a10-9dc7-e712c5e37d45> | CC-MAIN-2022-40 | https://nectoday.com/nec-detecting-earthquakes-since-1979/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00197.warc.gz | en | 0.933778 | 145 | 3.234375 | 3 |
Data.gov has become a signature of the Obama administration’s open government effort. Now the United States has spearheaded a worldwide transparency drive called the Open Government Partnership.
India is a leading partner in the effort to take Data.gov globally. Both the President and Secretary of State Hillary Clinton discussed open government with Indian officials last summer.
“We had some developers from India come in the last week of August and stay with us … to discuss the possibility of jointly developing a solution that we open source and put out for people to use around the world to build their own Data.gov,” said Federal CIO Steven VanRoekel in an interview with The Federal Drive with Tom Temin and Amy Morris. According to VanRoekel, the United States released a data management tool, which is the first release associated with this partnership. The tool allows governments to do all of the workflow associated with checking data. The next step will be for the Indian government to build the website that will visually present that data.
“The Indian government has been doing a really great job of elements of this,” VanRoekel said. “They have an effort that they put forward to do data, to fight corruption, to do other things, and we saw them as a really good partner to go and jointly take this on.”
Although the U.S. government coded the data management tool, it did so using open source and under the umbrella of its international partnership. This means that the United States will not own the end product; the foreign partner that develops it will.
“The cross-benefit of this effort is that you want these things to be sort of ‘by-the-people/for-the-people’ as well as have the government lay down the base foundation,” VanRoekel said. “We’re really encouraged by the ability to publish this first phase, get it out into the hands of others, to make improvements, give us feedback and do other things.”
To facilitate the process, the Open Government Platform information was posted on the github social coding website for people to download.
Continuing the partnership
Now that the United States kicked off its partnership by presenting a way for governments to upload and manage the data in the system, it’s up to India to take the next step.
In the next six months, the Indian government will drop a set of code that provides the presentation aspect of the platform, with the final, end-to-end product being delivered later in 2012.
“Data is an amazing tool to shine light on many aspects of both government and other things,” VanRoekel said. “Exposing data, we found in the U.S., can give you insights, can make you smarter about decisions, can fight corruption around the world and other things.” | <urn:uuid:65918d91-19af-4ab6-a1a3-8ff79b5602e8> | CC-MAIN-2022-40 | https://federalnewsnetwork.com/tom-temin-federal-drive/2011/12/datagov-goes-global/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00197.warc.gz | en | 0.959429 | 596 | 2.546875 | 3 |
Harwell Glider Base
There was a glider base between the villages of Harwell and Chilton during World War II. It was used in the D-Day landings in Normandy, and in Operation Market Garden in the Netherlands. Let's see what we can see of the base today.
The glider base was near the large nearly flat field between Harwell and Chilton. See Harwell Field on the map below. It's the large area surrounded by roads, with the two paths (dashed and crossed green lines on this Ordnance Survey map) crossing near its center. The main roughly level area of the field is about 1.5 by 1 kilometers.
See the controlled airspace marked "EGR 102" on the map at top. That was there to protect a research facility of the U.K. Atomic Energy Authority, which is just off one side of the former glider base.
Harwell is just south-west of Didcot, south of Abingdon, which is south of Oxford. Buses run from Oxford to Harwell every 30-60 minutes. They're local buses, taking about 45 minutes in each direction. Buses number 32 and 33 connect from Oxford. A £6 pass lets you ride the area buses all day. The Tourist Information Centre in Oxford can provide all the details.
South-west of the field are areas labeled on this map as "Harwell International Business Centre" and "Rutherford Appleton Laboratory". Also see the marked ambulance and fire station.
The ambulance and fire station and some of the older buildings now in the business centre date from the World War II air base. The United Kingdom Atomic Energy Authority built a large research facility, the Rutherford Appleton Laboratory. The International Business Centre was a recent development, connecting the UKAEA with industry.
In these pictures we will walk south from Harwell, along the route marked with green cross as a public right-of-way. That will take us right across the area of the glider base.
We will get lunch at the pub in Chilton, see what remains of the Rutherford Appleton Laboratory, and see what's become of the Harwell International Business Centre. Then we will return by the road along the business centre and the other path marked "Walkway" on here.
An old concrete structure remains near the north end of Harwell Field. I don't know if this really dates back to the area's military use during the war. It might instead have been a water tank for livestock or had some other agricultural use.
We're looking south from the north end of Harwell Field. We have just come up the hill from the town of Harwell, and we're at one end of the high flat area.
Two footpaths cross near the middle of the field. The path running from north to south continues straight ahead of us. The crossing path follows the lane branching off to our left.
The line of evergreen trees in the distance at center mark the far end of the field. Chilton is down a slope beyond them.
Turning around at the path crossing and looking back north, we see the Didcot power station in the distance. The town of Harwell is not visible, as it lies below the flat open area of Harwell Field.
We have now reached the south end of Harwell Field. The clump of trees on the horizon mark the path crossing near the center of the field.
The Rose & Crown pub in Chilton provides a nice place to get lunch and rest for a bit.
Returning north along the A4105 road, we see this sign at the edge of the Rutherford Appleton Laboratory.
The RAF headquarters building, barracks blocks, hangars, and so on became the United Kingdom Atomic Energy Research Establishment or UKAERE.
The UKAERE has been closed, and the site is no longer known as the Harwell International Business Centre; it is now Harwell Oxford. The facilities now house a wide variety of companies, not limited to nuclear and space physics as in the past.
Most of the site is now open for public access, although two areas are still fenced off and patrolled by the Nuclear Constabulary as it is a commercial nuclear decommissioning facility. Much of the site is scheduled to become a business and science complex, but the old and sometimes conflicting signs still abound.
This commemorative marker is near the public phone call box shown on the map near the south end of Harwell Field. We are looking toward the large and exotic looking Rutherford Appleton Laboratory building. If you're using GPS, it's around SU 483 863.
"This stone marks the end of the runway from which aircraft of No. 38 Group, Royal Air Force, took off on the night of 5th June 1944 with troops of the 6th Airborne Division who were the first British soldiers to land in Normandy in the main assault for the liberation of Europe."
The old runway surface is visible, running to the south-west off the south end of Harwell Field toward the present laboratory building.
If you look on Google Earth, you can find the three runways.
The end of one is marked now as Seven Road, it runs to the south of the large round "donut" building.
The second is to the east, marked "Frome Road".
The third is what is now Fermi Avenue, serving as the main access road to the southwest area of the site.
We have turned to look back across Harwell Field from that marker. The remaining runway surface is behind us. | <urn:uuid:fd0e151d-1fba-4742-acdd-b38a1a67d004> | CC-MAIN-2022-40 | https://cromwell-intl.com/travel/uk/harwell-glider-base/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00197.warc.gz | en | 0.962745 | 1,148 | 2.59375 | 3 |
Cybersecurity is playing an important role in today’s world. It is essential for organizations to save their crucial data from theft and damage. The data may include sensitive content, personally identifiable information (PII), personal information, protected health information (PHI), information regarding intellectual property, and crucial governmental and industry information. An organization cannot save its valuable data without an efficient cybersecurity program. The number of cybercriminals and cyber thefts is increasing rapidly, which is another reason for the burden on the cybersecurity industry.
According to Astute Analytics, the global cybersecurity market will grow at a compound annual growth rate (CAGR) of 13.4% during the forecast period from 2021 to 2027. It is due to several factors, including growing cybercrimes, enhancing techniques performed by cybercriminals, and rising penetration of smart devices.
Growing Cloud Services Boosting the Chances For Cybercrime:
The popularity of cloud platforms, such as Amazon Web Services, is increasing at a rapid pace. Cloud platforms are widely used to store sensitive data and other essential information. The chance of your firm becoming the victim of a successful cyber assault or data breach rises sharply when cloud services are configured inadequately.
Canva, a renowned Australia-based graphic design platform, witnessed a data breach in May 2019. The suspected attacker was named Gnosticplayers. The attacker exposed crucial data of approximately 136 million users, including usernames, passwords, email addresses, residence addresses, etc.
The Advanced Techniques Used By Cybercriminals:
Hackers have tactics that can outgrow basic solutions. Business executives can no longer rely on out-of-the-box solutions like antivirus software and firewalls. Cyber threats can come from any level of your organization. Social engineering scams, phishing assaults, ransomware attacks, and other malware are all designed to steal intellectual property or sensitive information. Furthermore, cybercrimes are no longer confined to a single segment like BFSI. They now operate in nearly every area, including healthcare. Thus, even small businesses run the risk of losing their reputation in today’s environment.
Regardless of size, all firms must guarantee that all employees are aware of cybersecurity hazards and the ways to mitigate them. It must involve continuous training, and a working structure aspired at reducing the risk of data leaks or breaches. Moreover, it is difficult to estimate the direct and indirect costs of security breaches because of the nature of cybercrime.
The theft of personal information is the most expensive and fastest-growing type of cybercrime. The growing availability of identity information on the web via cloud services is driving this trend. Cyberattacks may also try to damage data integrity in order to instill distrust in an organization or government. Such attacks are not only economically costly but can also ruin an organization's reputation.
Since January 2020, more than GBP 11 million has been stolen owing to COVID-19 scams, according to the City of London Police. Moreover, a poll in Switzerland found that one in every seven people had been the victim of a cyberattack during the pandemic.
The importance of cybersecurity is increasing because, fundamentally, our society is more technologically reliant than ever. Earlier, data breaches and attack techniques were kept confidential; today, breaches are discussed openly on social media platforms. This is significantly driving the demand for cybersecurity and for better techniques to enhance security. For instance, LogRhythm, Inc. unveiled version 7.5 of its NextGen SIEM platform in July 2020. The SIEM platform and Open Collector technology aim to assist security professionals in spotting threats rapidly. It will also help them eliminate errors by forming filters using Lucene helper assistance.
Moreover, in April 2020, Investcorp, an alternative investment product manager for institutional & private enterprises, acquired Avira Operations GmbH & Co. KG. Through this acquisition, Investcorp aims to support Avira’s business growth and continue to accelerate its business profits across OEMs and consumer market segments.
Governments of almost every country are focused on enhancing the cybersecurity segment. California became the first state to regulate data breach disclosures in 2003. Through this, data breach victims can sue for up to $750. Moreover, companies may have to pay a fine of up to $7,500 per victim.
Other Factors Driving The Demand For Cybersecurity:
The expanding trend of Bring-Your-Own-Device (BYOD) is raising the chances for cybercrimes. Moreover, the growing use of social media applications to communicate information has opened up significant channels for online cyber threats and assaults. Thus, it is driving the demand for cybersecurity. Furthermore, digital apps contain considerable flaws with many vulnerabilities. As a result, it allows assaults to penetrate, resulting in the loss of user data. There is a high need for apps to be thoroughly tested, at all stages, from integration to deployment to go-live. Application security solutions and services serve as the backbone of a company’s security architecture, and their popularity is growing as more companies implement digital applications. As a result of the circumstance, cyber security solutions are becoming more popular. | <urn:uuid:5a474c70-97ce-40c5-aa8a-592d5dc4ef64> | CC-MAIN-2022-40 | https://ihowtoarticle.com/how-important-is-cybersecurity-in-todays-era/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00197.warc.gz | en | 0.940898 | 1,065 | 2.59375 | 3 |
Every business should have a strong cybersecurity posture to keep cybercriminals from infiltrating their network. One way to do this is by implementing a strict authentication process using two-step or two-factor authentication. These two processes are so similar that many confuse one with the other.
In the digital age, cybersecurity should be one of the top priorities for anyone who goes online. One way is to vet those who are trying to access your systems. But when it comes to verifying users’ identity, many are unaware of the two kinds of authentication measures available.
When it comes to protecting yourself and your business online, the type of authentication you use for logins, whether for business or for personal use, is vitally important. While many people understand that secure logins are crucial, the differences between the various security measures may be lost on many people. | <urn:uuid:835b60f3-433c-4c72-820c-a99e04ba6e7d> | CC-MAIN-2022-40 | https://www.datatel360.com/tag/two-step-authentication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00197.warc.gz | en | 0.951474 | 172 | 2.53125 | 3 |
Two-Factor Authentication (2FA) is the use of two of the following three identification factors:
- Something you know – Most often a password for your account.
- Something you have – Such as a cell phone with a temporary authentication code.
- Something you are – Such as your fingerprint or facial recognition.
Using two of these three identification factors is the best way to protect your critical accounts. Hackers know that most people don't have or use password managers set up with strong passwords and that they don't use 2FA. As a result, these hackers send out phishing attacks to steal unsuspecting users' login credentials. With two-factor authentication enabled on your accounts, it will take more than just your username and password to gain access to those accounts. Hackers would also need another identification factor (something you have or something you are), effectively locking them out of your accounts.
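To make the "something you have" factor more concrete, here is a minimal sketch (plain Python, standard library only) of how a time-based one-time password (TOTP, RFC 6238) can be generated and checked. The secret, time step, and digit count below are illustrative placeholders, not settings from any particular product or service.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step            # moving factor (RFC 6238)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, submitted_code: str) -> bool:
    """Compare the submitted code against the expected one in constant time."""
    return hmac.compare_digest(totp(secret_b32), submitted_code)

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"  # made-up base32 secret, for illustration only
    print("Current code:", totp(demo_secret))
    print("Verifies:", verify(demo_secret, totp(demo_secret)))
```

The point of the sketch is simply that the code changes every 30 seconds and is derived from a secret the attacker does not have, which is why a stolen password alone is not enough.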
Start enabling 2FA on your critical accounts. Your bank probably requires 2FA already for online banking. Use it there, and also enable it on critical accounts like your email, company VPN, and any Internet-facing services you provide to your clients. | <urn:uuid:2adc61aa-cb66-43d5-80a7-52c174368de3> | CC-MAIN-2022-40 | https://cyberhoot.com/cybrary/two-factor-authentication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00398.warc.gz | en | 0.921076 | 240 | 3.015625 | 3 |
What should you know about social media phishing, or SMP? Many of us associate phishing risk with get-rich-quick links or attachments in marginally literate email messages, not our social media accounts and activity. What you should know is that phishing threat actors are deviously clever at setting traps by exploiting popular and trusted platforms, apps and topics in the news. What better hunting ground, then, for the digitally savvy criminal than social media?
Here are some answers to common questions asked about social media phishing.
What is social media phishing (SMP)?
Social media phishing is used by attackers seeking to steal personal data to sell on the dark web or to gain access, typically, to financial accounts. They may also troll for personal details for credential phishing purposes. For example, when armed with your birthday, social security number, middle name, mother’s maiden name and the like, combined with educated guesses about where you bank or keep retirement accounts, they can reset your password and pillage your accounts. Too much of this type of detail is easily found on social media websites.
Alternatively, an attacker may simply post an irresistible phishing link (e.g. “You won’t believe your eyes” or “See how I made $200,000 in 10 minutes”) on a friend’s social page. When the link is clicked, the victim is routed through a series of screens and spoofed webpages where attackers harvest important identifying information. You can read all about the methods they use – some are diabolically clever – on our Phishing Prevention & Email Security Blog.
As of 2021, more than 3.96 billion people worldwide are using social media. The average social-media consumer has 8.6 accounts on different networking sites; popular platforms like Facebook see 66% of their users logging in daily. This type of heavy and diverse traffic makes for a bottomless trough from which phishing threat actors gorge.
Why is social media a target for phishing attacks?
Social media is invaluable to threat actors for social engineering, which is a variety of deceptive tactics through which attackers use your good nature against you to get confidential information. Social media users choose their platforms to get and generously give information. They often make public where they live, work and vacation. They offer up the names, ages and birthdays of their children, friends and colleagues. They probably don’t realize how easy they’re making it for a digital criminal to structure and launch a targeted attack.
The attack may come in the form of, for example, a post with a link designed to entice the victim to share it on their social media. The victim’s contacts – trusting the source – may click on the link. From there, they’re taken to a phishing (but genuine looking) website. An authentication challenge will appear, obliging the user to validate their identity by supplying their social media (or Google Drive or OneDrive or other) credentials in order to see the content they were tricked into pursuing. Typically, the authentication will fail, forcing the victim to reenter credentials. In many cases, these credentials are all that’s needed for an attacker to wreak digital devastation.
What are examples of social media phishing?
On Facebook, beware of third-party apps that demand excessive amounts of information. Also, criminals can easily create a phishing site that looks just like the Facebook login page. On LinkedIn, look out for fake recruiters. They may send a document you must download to pursue that amazing opportunity. Once downloaded, the document unleashes malware via macros that aren’t readily visible to the untrained user. Educate yourself on how criminals manipulate other platforms – Twitter, Instagram, YouTube and more – to launch attacks and steal your stuff. Check out Cofense resources, and those offered by trusted organizations such as National Cyber Security Alliance.
How can I protect myself against phishing on social media?
To steer clear of phishing on social media, a few quick best practices include these “don’ts”:
- Don’t accept friend requests from strangers.
- Don’t click on links to update your personal details – instead, visit the platform’s support pages to see what updating is needed, and how and when to do it.
- Don’t use the same password and user name for all your accounts because once one of them is stolen, all your accounts will be in jeopardy.
- Don’t ignore prompts to update your operating system; many attacks exploit unpatched vulnerabilities.
Social media is meant to be fun and informative. Don’t let the crooks ruin it for you. Keep in mind that attackers will try to use one successful exploit to go after not just you but your family, friends, colleagues, neighbors and employer.
For more information on staying safe against SMP, and other types of phishing attacks, visit us online and check out articles like this one, What Are Phishing Attacks and How Do You Stop Them? We’re here to help.
Source: Backlinko, https://backlinko.com/social-media-users | <urn:uuid:732cddab-cb66-43d5-80a7-52c174368de3> | CC-MAIN-2022-40 | https://cofense.com/knowledge-center/social-media-phishing-what-you-need-to-know/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00398.warc.gz | en | 0.918638 | 1,150 | 2.828125 | 3 |
1 - Lesson 1: Working with Multiple Worksheets and Workbooks
Topic A: Use Links and External References
Topic B: Use 3-D References
Topic C: Consolidate Data
2 - Lesson 2: Sharing and Protecting Workbooks
Topic A: Collaborate on a Workbook
Topic B: Protect Worksheets and Workbooks
3 - Lesson 3: Automating Workbook Functionality
Topic A: Apply Data Validation
Topic B: Search for Invalid Data and Formulas with Errors
Topic C: Work with Macros
4 - Lesson 4: Using Lookup Functions and Formula Auditing
Topic A: Use Lookup Functions
Topic B: Trace Cells
Topic C: Watch and Evaluate Formulas
5 - Lesson 5: Forecasting Data
Topic A: Determine Potential Outcomes Using Data Tables
Topic B: Determine Potential Outcomes Using Scenarios
Topic C: Use the Goal Seek Feature
Topic D: Forecast Data Trends
6 - Lesson 6: Creating Sparklines and Mapping Data
Topic A: Create Sparklines
Topic B: Map Data
Actual course outline may vary depending on offering center. Contact your sales representative for more information.
Who is it For?
This course is intended for students who are experienced Excel users and have a desire or need to increase their skills in working with some of the more advanced Excel features. Students will likely need to troubleshoot large, complex workbooks, automate repetitive tasks, engage in collaborative partnerships involving workbook data, construct complex Excel functions, and use those functions to perform rigorous analysis of extensive, complex datasets.
To ensure success, students should have practical, real-world experience creating and analyzing datasets by using Excel. Specific tasks students should be able to perform include: creating formulas and using Excel functions; creating, sorting, and filtering datasets and tables; presenting data by using basic charts; creating and working with PivotTables, slicers, and PivotCharts; and customizing the Excel environment. To meet these prerequisites, students can take the following Logical Operations courses, or should possess the equivalent skill level:
Microsoft® Excel® for Office 365™ (Desktop or Online): Part 1
Microsoft® Excel® for Office 365™ (Desktop or Online): Part 2 | <urn:uuid:a331268c-525d-4890-8477-1728c07c89b5> | CC-MAIN-2022-40 | https://charleston.newhorizons.com/training-and-certifications/course-outline/id/1035992887/c/microsoft-excel-for-office-365-desktop-or-onlinev1-1-part-3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00398.warc.gz | en | 0.806095 | 483 | 3.375 | 3 |
Picking up from part 1 of the back to basics, let’s jump into the multicast address type and why it is important to IPv6.
As a quick review, the goal of our “back to IPv6 basics” series is to provide a bit more detail on the three main address types in IPv6. They are:
Network operators often have a love/hate relationship with IPv4 multicast. Multicast seems like a dark art for many networking folks: a somewhat mysterious one-to-many packet forwarding technology. Because of this, many networking staff have very strong feelings on whether an enterprise network should or should not operate a multicast service, multicast routing protocols, etc. Luckily, in IPv6, while it can do many of the same traditional multicast functions as IPv4, that really isn’t the primary reason that we need to talk about multicast in IPv6.
The goal of multicast in the original design of the IPv6 protocol was to address some shortcomings in how multicast was designed and built in IPv4.
One example of this would be IPv6 multicast not having any artificial limitations on address to MAC address mapping. IPv4 has to reuse IPv4 multicast-to-Ethernet address mappings, which can cause inefficiencies in bandwidth usage for multicast streams. Additionally, the conventional thinking at the time IPv6 was drafted was that multicast would help address some of the bandwidth growth limitations that were impacting many networks. As Ethernet technology has evolved, the bandwidth problem has largely gone away. Now, the cost benefit analysis of implementing large scale Layer 3 multicast networks has turned in favor of just providing more unicast streams, even if they are of the same content, because there is now enough bandwidth to support that model. Multicast from an architectural perspective has an elegance and simplicity that motivated those drafting IPv6 to decide it was the right tool for replacing some other Layer 2 to 3 operational realities that were considered a bit ugly. For example, the fact that ARP and RARP did a kind of “Layer 2.5” function and were not true Layer 2 or Layer 3 in the OSI model bugged some. IPv6 became a bit of a harbor for purists who wanted to restore the networking protocol to its original intended form. Just realize that the majority of IPv6 multicast actually happens on a local link because of the motivations to “clean up” or improve on the mistakes of IPv4, not necessarily because the method was technically superior to what IPv4 was already doing. You can brush up on IPv4 multicast if you like over at Wikipedia.
IPv6 multicast addresses are defined within the FF00::/8 address space: any address whose first octet is FF (all ones in the first eight bits) is a multicast address.
In IPv6, multicast has two major functions that map to how IPv6 multicast addresses are scoped. It has a function that is locally scoped to allow hosts to do things like duplicate address detection (DAD), neighbor discovery, and the ability to self-assign into well-known multicast groups (see Tom Coffeen’s SLAAC blog post that covers these). The second major function is for network-wide multicast, which is the same as the more common IPv4 multicast functions used today (i.e., beyond locally-scoped but within one autonomous routing domain).
Let’s tackle the IPv6 network-wide scope (the IANA scoping would be Organization-Local or Scope 8) multicast use case first, because it matches what is done in IPv4 networks today. At the time the IPv6 specifications were written, multicast was considered an efficient way to do publish-and-subscribe distribution for applications that had no concept of how to do that with message bus technology like Tibco Rendezvous (at the time) or more modern equivalents like RabbitMQ, MSMQ, or Consul. Because of the modern microservices application frameworks being utilized today (and how their core functions are being built around a message bus), the need for multicast networks has diminished greatly in the last few years. IPv6 still requires multicast (it is a core address type), but in-depth knowledge of it is likely not as high on the skills list as it was for IPv4.
The use of modern message bus technologies in the move to microservice architectures effectively replaces the need to deploy network wide multicast solutions, regardless of protocol. You may still need to deploy a network wide multicast solution in IPv6 if your applications have not been modernized. Luckily IPv6 has many of the same multicast routing equivalents such as PIM Source Specific Multicast or Any Source Multicast. The good news is that you will have more IPv6 multicast address types to choose from and an easier time mapping those IPv6 addresses to Layer 2 Ethernet multicast frames. The downside is you are still running and operating a multicast network with multicast routing protocols. For many, this is a set-it and forget-it sort of model until something goes wrong or breaks. Still, it is one more thing you are running in your environment which gets away from the simply network operator rules that seem to be the ethos of network philosophy today.
Changing gears: at the local link scope, IPv6 uses multicast almost immediately. When a host first brings its interface up, it uses a multicast process to request the global unicast prefix on that link via a Router Solicitation (RS) and then performs Duplicate Address Detection (DAD) using ICMPv6 multicast, after generating both a global unicast address and a solicited-node multicast address. This solicited-node multicast address is why we no longer require an ARP/RARP process like the one IPv4 uses. We are able to map an IPv6 multicast address directly to an Ethernet multicast frame. I won’t belabor the technical details here, as Scott Hogg has already provided that information in his blog post at FE80::1 is a Perfectly Valid IPv6 Default Gateway. The important point is to understand that EVERY host will have a solicited-node multicast address of some kind, even if the host only has a link-local address (check out Denise Fishburne’s excellent blog post series on this topic). It is a function of how the protocol works.
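If you like to see the mechanics, the short sketch below (Python, standard library only) derives the solicited-node multicast group for a unicast address and the Ethernet multicast MAC that an IPv6 multicast group maps onto. The sample addresses are made up for illustration.

```python
import ipaddress

def solicited_node(unicast: str) -> ipaddress.IPv6Address:
    """Solicited-node group: FF02::1:FFxx:xxxx, where xx:xxxx is the low 24 bits."""
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

def multicast_mac(group: str) -> str:
    """Ethernet mapping: 33:33 followed by the low 32 bits of the multicast group."""
    low32 = int(ipaddress.IPv6Address(group)) & 0xFFFFFFFF
    return "33:33:" + ":".join(f"{b:02x}" for b in low32.to_bytes(4, "big"))

if __name__ == "__main__":
    host = "2001:db8::1:2345:6789"                 # illustrative unicast address
    group = solicited_node(host)                   # -> ff02::1:ff45:6789
    print(group, "->", multicast_mac(str(group)))  # -> 33:33:ff:45:67:89
```

Because the Ethernet mapping uses 32 bits of the group address instead of the 23 bits IPv4 has to squeeze into its multicast MAC range, collisions between different groups on the same segment are far less likely.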
Fundamentally, the really interesting part of how IPv6 uses multicast was the decision to have a preset group of multicast addresses that certain hosts could join based on their function. These multicast addresses are listed out up on the IANA website. Here are a few you might find interesting:
FF02::1 – all nodes
FF02::2 – all routers
FF02::5 – all OSPF SPF router
FF02::6 – all OSPF DR router
FF02::9 – RIP routers
FF02::a – EIGRP routers
FF02::d – PIM routers
FF02::1:2 – All DHCP servers and relay agents
As you can see, having a whole set of well-defined locally scoped multicast addresses makes life a bit easier. In the list above you will notice all the addresses start with the hex digits FF02. There is significance to the value after the FF portion of the address; in this case the value 02 indicates that the address is link-local in scope. As a general rule of thumb, you will see FF02 the majority of the time, with some FF08 if an organization decides to deploy network-wide organization-local scope multicast services.
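To make the FF02 versus FF08 distinction concrete, this small helper (again, illustrative Python only) pulls the flags and scope nibbles out of the second byte of a multicast address and names the common scopes.

```python
import ipaddress

# Common scope values from the IANA IPv6 multicast address registry
SCOPES = {0x1: "interface-local", 0x2: "link-local", 0x5: "site-local",
          0x8: "organization-local", 0xE: "global"}

def describe_multicast(group: str) -> str:
    addr = ipaddress.IPv6Address(group)
    if not addr.is_multicast:
        raise ValueError(f"{group} is not an IPv6 multicast address")
    second_byte = addr.packed[1]                  # the byte right after the leading 0xFF
    flags, scope = second_byte >> 4, second_byte & 0x0F
    return f"{group}: flags={flags:x}, scope={scope:x} ({SCOPES.get(scope, 'other')})"

if __name__ == "__main__":
    for g in ("ff02::1", "ff02::5", "ff05::1:3", "ff08::1:2"):
        print(describe_multicast(g))
```

A scope of 2 never leaves the local link, while a scope of 8 is meant to be routed only within a single organization.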
So that is it, the quick look at IPv6 multicast. There are fantastic resources to use to dig in more. Check out Wikipedia, or you can watch my IPv6: Introduction to the Protocol course on Pluralsight.
You can find me on twitter as @ehorley and remember…
IPv6 is the future and the future is now! | <urn:uuid:502818fb-8b90-4bb9-a79a-a307d5ea0278> | CC-MAIN-2022-40 | https://blogs.infoblox.com/ipv6-coe/back-to-basics-the-ipv6-address-types-part-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00398.warc.gz | en | 0.939228 | 1,664 | 2.984375 | 3 |
Internet Security is a Collective Responsibility
Each of us has to become more aware of these security issues and in our roles as consumers or as application providers. As consumers, we must take precautions to ensure the security of our data in the systems we access (hint: password123 is not a good password…). In fact, good password hygiene is one of the essential elements of internet security. As application providers, we must use the best of breed systems to protect customer data and ensure business continuity.
A data breach is the intentional or unintentional release of secure or confidential information to an untrusted environment. The consequences of a breach can be huge, and we should all be aware of the very serious risks. To get a better sense of the scale of the problem, we started doing some research on recent cyber attacks and system breaches. We were struck by the results - most breaches are no longer hitting the headlines, and the frequency and scale of the breaches are increasing. It's an issue we all have to take steps to address.
- Use the most up-to-date version of your operating system
- Make sure you have a company firewall in place
- Use federated identity software like 10Duke
- Promote good password hygiene with your employees
- Reduce exposure to password disclosure by using single sign-on
The Internet is a world-wide communication network of computers with limitless potential for learning, making new friends and having fun. But it’s also home to hackers and fraudsters who use it as a playground for illegal activity. | <urn:uuid:8ddc2cf0-7a62-4ba5-b82a-bcead2f7fd63> | CC-MAIN-2022-40 | https://www.10duke.com/blog/internet-security-is-a-collective-responsibility/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00398.warc.gz | en | 0.936668 | 358 | 2.625 | 3 |
You couldn’t go a week last year without seeing a ransomware headline in the news. And it wasn’t because the media paid more attention to this relative newcomer in the area of cybercrime. Ransomware attacks on businesses skyrocketed 365 percent in 2019, and all signs point to more of the same in 2020.
As bad as the ransomware scourge was for businesses, local governments arguably had it worse. Baltimore, Atlanta, and Akron, Ohio, were among the bigger cities hit, followed shortly thereafter by Lake City and Riviera Beach in Florida, and then a coordinated campaign against 22 Texas municipalities. New Orleans was hit later on in the year, and still hasn’t completely restored its data and services.
While the number of ransomware attacks is enough to make even the calmest among us want to hide in a panic room, the likelihood of such attacks slowing down is right up there with the chance that President Trump will be asking U.S. attorney for the Southern District of New York Geoffrey Berman to prepare his tax return this year.
We can’t hide from the threat. Cybersecurity experts are aligned on this point: Ransomware is a serious issue. The situation is not going to be magically resolved. But while it’s worked on, there’s something all of us can do. Businesses, individuals, governments, and organizations alike can become savvier about the threat, understand the scope of the problem, and prepare for it.
What Is Ransomware?
While the sophistication and methods of attack may vary, the short answer is that ransomware is a type of malware that encrypts critical data on a computer or computer network so that users can’t regain access without paying a “ransom.” The payment is typically demanded in bitcoin, because it’s difficult to trace and easily transferable. Upon payment the hacker, if “honest,” will provide a digital key to decrypt the information. It doesn’t always go down that way. In fact, the criminal may leave the data encrypted after stealing it and put it up for sale on the dark web or simply use it in the commission of a crime.
Sometimes, these clowns don’t even know how to decrypt the data. Hackers aren’t generally focused on providing competent customer service. It’s all about the payday.
What makes ransomware such a difficult vector to recover from is the encryption, which ironically continues to be one of the best methods of securing data from hackers. In effect, ransomware is the weaponization of a cyber-protection protocol.
The threats used to get payment are serious, starting with the possibility that the data will be encrypted permanently. Few organizations can survive the loss of all (or even a significant portion of) their data. Hackers often threaten to delete the data by a certain deadline, or as is becoming more common, they may opt to release the data to the general public, which is a variant attack known as extortionware.
While there have been some success stories when it comes to ransomware remediation, the odds are not in your favor. The safest bet is to prevent these attacks in the first place. But there have been informative examples of companies that mitigated the damage from a ransomware attack. Your Cliffs Notes version: Put yourself in a position where you can’t be affected by such a hack.
Back Up Everything: Data recovery is an expensive and time-consuming process even when it isn’t being done in the wake of a ransomware attack. If the loss of your data is potentially catastrophic, the most straightforward solution is to back up your systems and data and do it often. Bear in mind that your data backups will be of no use if they are also encrypted by a ransomware attack, so keep them stored separately and offline.
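As one deliberately simple illustration of the "back up often, keep it separate" advice, the sketch below (Python, standard library) creates a timestamped archive of a directory and writes a SHA-256 checksum next to it so a later restore can be verified. The paths are placeholders; a real routine would also copy the archive to offline or off-site media and test restores regularly.

```python
import hashlib
import shutil
import time
from pathlib import Path

def backup(source_dir: str, dest_dir: str) -> Path:
    """Create a timestamped .tar.gz of source_dir and record its SHA-256 alongside it."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(dest / f"backup-{stamp}"), "gztar", root_dir=source_dir)
    # Reading the whole archive is fine for a sketch; stream the hash for very large archives.
    digest = hashlib.sha256(Path(archive).read_bytes()).hexdigest()
    Path(archive + ".sha256").write_text(f"{digest}  {Path(archive).name}\n")
    return Path(archive)

if __name__ == "__main__":
    # Placeholder paths -- point these at real data and at a separate, ideally offline, volume.
    print("Wrote", backup("/srv/app-data", "/mnt/offline-backups"))
```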
Call for Help: The odds are very good that your IT staff is already overworked as a result of their day-to-day operations. Part of this can be blamed on the lack of cybersecurity skills available in the workforce. A recent study found that the cyber gap impacts 74 percent of organizations, with 63 percent of cybersecurity professionals reporting that the talent gap increased their workload, 68 percent reporting negative effects on their personal lives, and 38 percent reporting higher burnout rates.
The types of threats are ever-changing, adding to the challenge cybersecurity professionals face. Trying to keep up with phishing attacks, unpatched software vulnerabilities, and ransomware attacks can feel like an exercise in futility. If you expect your existing staff to be able to resolve a ransomware attack with the resources at their disposal, think again. They can't do it. Find a contractor that specializes in ransomware recovery before you're hit.
Silo Your Data: While the New Orleans ransomware attack was an unmitigated disaster for the city, one thing that helped was that it didn’t take their emergency services offline: police, paramedics, and fire departments were still able to respond to calls, because they were on a separate system from the compromised city services. Consider taking the same approach: Run and maintain separate servers and storage for your data. While it may require more resources in the short term, doing this will greatly aid in the containment of the damage from a ransomware attack.
Get Covered: I’ve said it before and I’ll say it again: Cyberattacks and data breaches have become the third certainty in life after death and taxes. With the constant threat of cyberattacks, foreign threat actors, ransomware-as-a-service operators, disgruntled former employees, and just plain misconfigured servers, insuring your company against cyber-risk is and should be viewed as a basic requirement of doing business.
Case in point: The Heritage Company went out of business following an October 2019 ransomware incident, leaving 300 employees out of work shortly before Christmas. Don’t be the next Heritage Company. If your company already has cyber insurance coverage, consider increasing it. Both New Orleans and Baltimore opted to increase their limits after discovering that the recovery process cost significantly more than was covered.
The unfortunate reality here is that the ransomware epidemic is likely to worsen before it improves. The best defense is to practice good cyber hygiene, back up data, keep systems patched and up-to-date, and invest in workplace training to identify phishing emails and other suspicious behavior. But if that fails, it’s wise to have a response plan in place. | <urn:uuid:701929f7-7fd1-451a-853d-a157db450c47> | CC-MAIN-2022-40 | https://adamlevin.com/2020/03/02/ransomware-is-the-no-1-cyber-threat-this-year-heres-what-you-can-do/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00398.warc.gz | en | 0.957613 | 1,322 | 2.53125 | 3 |
So you think you know something about data privacy? A lot of Californians did, too -- until some law school experts tested them.
In a research paper released yesterday, researchers from the University of California, Berkeley, School of Law found that the majority of people they surveyed did not know how their personal data might be used in everyday situations. How well do you know your privacy rights? Take this nine-question, true/false quiz and find out.
False. A newspaper or magazine is free to sell subscription lists without subscriber consent. Most people (50.9 percent) got this one right. Forty-six percent said true, and 2.5 percent didn't know.
False. Pizza companies have become a hub for collecting personal information, and the data is sometimes used by private investigators and governments to track individuals. Only 39.5 percent of respondents knew about this.
False. Many organizations that solicit charitable donations sell lists of members and donors. Most people (43.6 percent) thought that their data was protected. Forty-two percent were aware that charities sell such lists, and 13.9 percent weren't sure.
False. The majority of respondents (54.7 percent) know that sweepstakes operators can result in the sale of personal information without consent. Forty-two percent said true, and 3.1 percent didn't know.
False. You don't have to fill out this card to be protected by the warranty -- a receipt will do -- and many companies collect a wide range of personal information from warranty cards and then sell it for direct marketing purposes. Most people (50.3 percent) don't know about this practice. Thirty-nine percent said false, and 2.5 percent didn't know.
False. Many stores still ask for a phone number when they complete a purchase, when in fact it usually isn't required. Stores can resell this information, and it also is a loophole in the "Do Not Call" list, because a business can call customers with whom it has a "relationship." Most people (56.9 percent) do not know about this. Thirty-nine percent of respondents correctly answered false, and 4.2 percent didn't know.
False. Like product warranties, these forms often collect irrelevant data that can be sold to third parties. Most people (50.8 percent) believed their personal information would not be used without their consent. Forty-six percent said false, and 12.1 percent didn't know.
False. Catalog companies have long sold personal information and data about purchases that customers have made. Fewer people (47.9 percent) knew about this than those that didn't (48.5 percent). Four percent didn't know.
This one is true, at least in California. California law limits the collection of some information and sale of data collected through club programs. Most people (49.8 percent) got this one right. Forty-three percent said true, and 7.6 percent didn't know.
How did you do? The Berkeley researchers said those who shop online frequently did better than those who do only about half of the time. The research also points out the need for greater education on privacy practices and user rights, the researchers noted.
| <urn:uuid:b3c6e50a-8458-4c62-b8d7-1343f494d70c> | CC-MAIN-2022-40 | https://www.darkreading.com/perimeter/can-you-pass-this-privacy-quiz- | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00598.warc.gz | en | 0.966684 | 691 | 2.765625 | 3 |
Are you having trouble finding the perfect computer? You aren’t alone, many people do. If you want to start building your own PC there are a couple basic things you should know about before you get started picking out components. We’re going to bring you through the process so you can start building your own PC and the computer of your dreams.
Building your own PC – What is it’s use?
Building your own PC? Obviously you want to do this because you can’t find what you want in a prebuilt one, but what purpose will this PC serve? Are you just going to be writing emails and surfing the web? Will you be playing computer games? How about audio or video editing? Want to be able to run VMware desktop products and run your home lab?
Each of these use cases has slightly different requirements, so it is good to know what you are looking to accomplish going into the process.
Here are the components you are going to need:
- Chassis and Power Supply
- Mother Board
- Graphics Card
- Disk Drives
- Fans/Cooling System
Let’s take a look at the components for each of these items.
Chassis and Power Supply
Obviously, you need the shell first when building your own PC! There are so many unique choices out there, you are bound to find a chassis that suits your needs. The important things to think about here are how big the chassis is, and whether or not it comes with a power supply.
If it does not, you will need to purchase a power supply that fits your chassis. You may also want to pay attention to how the chassis is laid out for cooling purposes.
The motherboard is the heart and soul of your PC, and what you connect everything else to. Of course, it must fit in the chassis you selected.
When it comes to your motherboard, it is important to consider how many connectivity slots it has and what type. It is also important to see what kind of processors are compatible with the motherboard. You may even want to pick out your processor first for some use cases, and then pick a motherboard and chassis based on that.
Pay attention to the following components of your motherboard:
- CPU socket
- GPU slot
- USB connectors
- PCIe slots
- DIMM or RAM slots
- On board Audio/Video capabilities
- Other ports like headphone, USB, Keyboard, Mouse, etc
The CPU is, of course, the brains of the operation when it comes to picking PC components.
Pay attention to how fast your CPU is (its clock speed) and how many cores it has, since those are what determine your CPU's performance.
It is very important that your Motherboard and CPU are compatible, so you may want to pick your CPU before your motherboard if you have a specific model in mind.
When it comes to RAM, there are two things to keep in mind: capacity and speed. How many GB of RAM you need depends on what you are doing to some extent. Personally, I would avoid less than 16 GB of RAM in a system. The rule of thumb here is the more the better!
The higher the capacity of a DIMM or RAM chip, the more it will cost. This is important since you have a certain number of slots in your motherboard for RAM.
One strategy is to go with fewer more expensive chips so you have room to expand later if you need more RAM.
Graphics Card (GPU)
If your motherboard has on-board graphics, it may be enough if you are just checking your email or surfing the web. If you plan on playing video games, a dedicated GPU card is a must. Many motherboards have a special slot for GPUs, so be sure to pay attention to what your motherboard has on it. These can get very expensive, and they also have their own on-board memory.
I almost said hard drives here, but the fact of the matter is almost everything is SSD these days. You may want to consider multiple disk drives in your system – an SSD for the OS and applications, and a traditional hard drive for storage.
Some motherboards may also have a NVMe slot on board that can be used for a storage memory chip.
All of the components we talked about generate heat in your computer, and air must circulate within your chassis for peak performance. If things get too hot inside your computer, you may see performance issues or even damage your components.
How you need to cool your system ties back to the components within it, since different types of components will generate different amounts of heat.
Building your own PC concluding thoughts
Now that you know what you need to get started, it is time to do some research to find out the best components for your needs. Start by making a list, and double and triple checking that everything you selected is compatible.
When you are getting ready to build your new PC, don’t miss this helpful guide from Intel that walks you through the assembly process.
Get the latest information in technology and career here! | <urn:uuid:44a9b911-96ce-4ede-b44c-c6c6e139536f> | CC-MAIN-2022-40 | https://24x7itconnection.com/2020/07/16/building-your-own-pc-how-to-pick-the-right-components/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00598.warc.gz | en | 0.95385 | 1,066 | 2.625 | 3 |
Do you know the cost of data protection? Chinese social media management solution provider, Socialark, found the cost of not having data protection was more than they could bear. Earlier this year, the company experienced a massive data breach that exposed 200 million LinkedIn, Facebook, and Instagram accounts. As it turned out, Socialark had not password-protected their database or encrypted their data—a costly misstep for any organization.
At the rate data is created, stored, and used, your organization can’t afford not to have data protection—and data protection training—as part of your overall security strategy. Your employees need to be aware of and prepared to handle phishing and ransomware threats that go straight for your data.
With the right data protection measures and employee training program in place, you can create an impenetrable wall around your sensitive data and data ecosystems. This post sheds light on data protection training, why your employees need it, and eight best practices for data protection training.
What is data protection training
Data protection is about storing and securing data accuracy, confidentiality, and integrity. But data protection training goes a step further to educate employees on organization and industry standards to protect data from destruction, loss, modification, or theft.
Because data compromises can occur by mistake or by malicious intent, data protection training addresses proper data handling practices to protect against malicious attempts. It also helps employees learn more about data loss, privacy loopholes, and disclosure issues.
While data protection training centers on protocols for ensuring data authenticity, data security training focuses on protocols for managing the systems, networks, and infrastructure that contain the data. By combining protocols and training for data protection and data security, you set your organization up for success in meeting regulatory data protection requirements.
Why employees need data protection training
Data protection grew from big data, the potential for data breaches, and the need for regulatory compliance to safeguard data from data breaches. By providing employees with effective data protection training, they gain a better understanding of the following concepts:
- Privacy rules for personally identifiable information (PII)
- Secure data processing
- Safe data handling
- Third-party data handling
- Data protection laws
- Compliance regulations
With knowledge of the protocols, laws, and regulations around your organization’s data, your employees become more aware of how critical it is to protect it. And, when you add cybersecurity awareness training, employees are more equipped to handle potential internal and external risks and threats that can lead to a cyberattack and data breach.
8 best practices for data protection training
Implementing a data protection training program takes a well-planned and coordinated approach that fosters cross-departmental collaboration and promotes employee productivity. Follow these eight best practices when implementing data protection training.
1. Address government and industry compliance requirements
As consumer privacy became increasingly threatened by data breaches as the result of poor data handling and security practices, governments pushed for tight regulations to protect their people. These regulations have resulted in creating well-known standards, including:
- General Data Protection Regulation (GDPR)
- Health Insurance Portability and Accountability Act (HIPAA)
- Payment Card Industry Data Security Standard (PCI DSS)
Each regulatory organization has its own criteria that companies must follow to avoid facing fines and penalties, especially when a data breach occurs. To ensure your company complies with the industry standards that relate to your business, include training on the regulations your business must follow and the audits it must pass.
2. Review your data center security strategy
The more layers your data center security strategy has, the better it will protect the confidential information it holds. But those layers can be difficult for employees to understand. Therefore, review the data center security strategy with your employees, covering such subjects as:
- Physical access to the data center
- Data handling practices
- Documenting, monitoring, and reviewing data assets
- Network security
- Hardware and software updates and patches
- Backup and restore procedures
Document all policies so employees can refer to them and know what’s expected in maintaining the security of your data center.
3. Go over safety protocols for personal data
With employees using multiple SaaS applications daily, they tend to choose a single, easy-to-remember, yet hackable password. The problem with that approach is the risk to your company’s security.
Protecting personal data requires strict safety protocols and proactive awareness. Without proper protocols in place, hackers can gain access to your systems, networks, and sensitive organization and employee data by using brute-force attacks and social engineering campaigns.
Therefore, include your organization’s protocols for protecting personal data, including:
- Requirements for setting and changing passwords
- Credential sharing
- Single sign-on (SSO)
- Use of multi-factor authentication (MFA) or passwordless login
- Security codes
By including these protocols in your data protection training program, your employees understand the required practices to ensure data privacy and protection of their own data, your company’s data, and your customers’ data.
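To ground the password items above in something concrete, here is a minimal, illustrative sketch of salted password hashing and verification using PBKDF2 from the Python standard library. The iteration count and other parameters are placeholders to be tuned against your own policy and hardware, not a recommendation for any specific system.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your own policy

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) for storage; the plain password is never stored."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Recompute the derived key and compare it in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)

if __name__ == "__main__":
    salt, key = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, key))  # True
    print(verify_password("password123", salt, key))                   # False
```

Pairing storage like this with MFA or passwordless login closes off most of the brute-force and credential-stuffing paths mentioned above.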
4. Explain supply chain policies
Supply chain cyberattacks have been around for decades but have skyrocketed in the past 10 years. In late 2020, one of the most notable incidents—the SolarWinds supply chain attack—occurred, in which a third-party vulnerability enabled hackers to infiltrate Fortune 500 organizations and US government bodies.
To prevent a data breach that comes from your supply chain, include supply chain policies as part of your data protection training. The topics you might cover include:
- Policies for using only verifiable and reputable suppliers
- Supply chain risk management practices
- Cybersecurity awareness training
- Risk-level assessments for third-party suppliers
- End-to-end software supply chain security
By helping employees understand what they need to know and do when working with the supply chain, they’ll be better prepared to face an attack in an intelligent, strategic, and secure way.
5. Discuss cybersecurity risk assessment
Most data protection and privacy regulations require businesses to conduct cybersecurity risk assessments regularly. These assessments help identify corporate assets that can be affected by a security breach and how well your access controls can protect them.
As part of your training program, make sure employees understand the importance of cybersecurity risk assessments. Explain the findings of the reports to help them understand where your organization has vulnerabilities and how they can be more effective in protecting them.
6. Detail breach reporting protocols
All data privacy regulations today require data processors to report the breach as soon as possible and notify all victims about the status of their personal information. For example, GDPR requires organizations to respond to data breaches within 72 hours and notify the relevant supervisory authority with all related documentation. The same applies to HIPAA, which emphasizes notifying victims directly. Depending on the regulations that apply to your organization, make sure employees know the procedures to follow when a breach occurs.
Also, make sure employees know the internal protocols they must follow after a data breach. For example, you might have all employees change their passwords. The protocols may vary by role, team, and department, so make sure everyone knows what to do and who to contact for questions. You might designate your data protection officer to establish these protocols and update them regularly.
7. Deliver phishing simulation training in the workflow
Phishing and smishing are terms that most people are familiar with in theory but maybe not so in practice. Create awareness through phishing simulation training as the first step in combatting phishing attacks.
The hands-on training that phishing simulations provide has proven their effectiveness in how employees learn about and react to phishing threats. Real-life phishing simulations can be deployed automatically right in your employees’ workflow. When combined with real-time engagement statistics and insights, Chief Information Security Officers (CISOs) and security teams can determine the right course of action for employees to take next.
8. Provide security awareness training for all employees
Each year, cybercriminals use more sophisticated techniques than the year before. To keep up with their ever-changing, mischievous ways, keep your data protection training on top of hacking threats and trends so employees can be aware of them and respond confidently to prevent them.
Provide security awareness training at regular intervals by using bite-sized, customizable content. The content should be customizable to adapt to employees by job role, team, department, or geographic location. As with phishing simulations, use data to gauge the success of your training program so you know which employees need more or specialized training and which ones can advance to the next topic.
Start your data protection training
Data protection training is more than just brushing over a list of dos and don’ts about data privacy. It’s about making sure you cover every aspect of data protection as necessary for your organization and each employee. Make sure your data protection training program starts with the eight best practices outlined in this post. Pair it with a phishing simulation and cybersecurity awareness training platform backed by machine learning and powered by data insights. Start your data protection training program on the right foot with this 90-minute self-guided tour. | <urn:uuid:fe31fee0-94ec-48ef-8fd0-e6d8e4f88e8d> | CC-MAIN-2022-40 | https://cybeready.com/8-best-practices-for-data-protection-training | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00598.warc.gz | en | 0.918593 | 1,881 | 2.65625 | 3 |
A key goal of visual analytics and data science is to identify actionable insights that impact business processes – to grow revenue, improve productivity or mitigate risk. Automated AI, or specifically automated machine learning for data science, can help with this goal. AutoML can dramatically increase the productivity of data scientists by automating the more mundane tasks and freeing up time for innovation. AutoML with transparency can also guide and educate users on how to get the most out of their data and data science environment, while enforcing best practices.
The role and function of data scientists is on the rise. Data scientists have become the ultimate hackers; they do what it takes to get the job done. This can include designing and deploying end-to-end systems for model training and inference – for batch jobs running on a clock or a trigger – and real-time event processing. Such end-to-end systems typically include data access and federation, caching strategies, feature engineering, machine learning and model ops. Model ops can include containerising models, adding RESTful interfaces and deploying into operational systems – in hybrid, and sometimes multi-cloud, environments.
Crucially, what data scientists require more than anything is to become more productive. AutoML helps with this by assisting analysts with data preparation, data cleaning, feature selection, feature engineering and modelling, with explainability. AutoML digital assistance is now starting to be extended to data science platforms that scale across hybrid cloud environments with deployment into event-based architectures.
Ideally, AutoML systems should generate automatic flows which are editable, and informative with regard to how the software works. This should include surfacing the steps or nodes in the workflow, and how they are created and configured for the analysis. The generated flows should, and can be, an educational experience for the data scientist in how to optimally use the software. An AutoML system is also a way to enforce best practices, both for the experienced, professional data scientist, and for the less experienced practitioner. So, as the user moves through a data science pipeline, the environment is helping to connect, clean and prepare data, plus engineer features for model building. And the system should ideally provide guidance on things like hold-out validation sets, feature and model combinations and model explainability.
A word of caution – we are not saying that the goal is the complete automation of everything in data science, as has been advocated elsewhere. The goal is not to produce an environment of total automation where pushing a big red button means ‘job done.’ Rather, the goal is to educate the practitioner as a digital assistant, automating the more mundane tasks, educating the user and enforcing good scientific practices.
This ideal AutoML software system helps business analysts, data scientists and developers by removing complexity and accelerating deployment to live production environments. These capabilities are starting to shift the conversation between business analysts, data scientists, developers and business executives to focus on addressing the problems at hand with the best solutions available. Automating the mundane frees up time for developing innovative approaches to growing revenue, reducing risk and removing unnecessary costs.
Automated AI for all
The large number of stakeholders in a data science project make it a challenge to simplify the process. For example, a system that moves from a business analyst for dataviz, to a data scientist for training and deployment, involves workflows for cleaning the data, engineering the features and building the models that create the predictions – in batch jobs and on streaming data in operational systems.
Productivity gains come from automatically generating these multiple different workflows for tasks such as data preparation, feature engineering, feature selection and modelling. Automating processes from preparation to model tuning produces transparent editable workflows which can more quickly move to production-ready versions in operational systems.
When a data scientist creates a predictive model, it can be a great deal of work to develop the many different data prep / data science workflows required. When these are automatically generated, there can be significant time savings, more accurate models and enforced best practices throughout.
Productivity gains and smarter outputs
Automated data prep and machine learning can create considerable productivity gains for business analysts and data scientists. By automating different stages of the workflow from business analyst, to data scientist, to production, models are created, tuned and deployed as cloud native production environments.
To address more complex issues, machine learning models are becoming easier to deploy and connect to data feeds to support faster and smarter decisions in real time. It is not about creating a black box. Whether the desired outcome is helping financial services more accurately detect fraud, or monitoring oil field output, analysts, scientists and developers are using automated workflows for insights to build smarter models at a faster pace.
One key area of value in data science is in making accurate predictions in live operations environments. Just as physical automated production lines created the modern industrial age – think car factory robots – so data science automation is driving the digital industrial age by enabling analytics to be quickly applied to different domains by experts who are no longer forced to do the grunt work.
Through automation, data science can move faster to solve real-world problems, while providing measurable benefits for everyone in the value chain.
Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, Cyber Security & Cloud Expo and 5G Expo World Series with upcoming events in Silicon Valley, London and Amsterdam and explore the future of enterprise technology. | <urn:uuid:b63f5e78-14b3-43d5-b2e7-b456e4420e3d> | CC-MAIN-2022-40 | https://www.enterprise-cio.com/news/2020/jun/19/why-automation-is-changing-data-science-for-everyone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00598.warc.gz | en | 0.920722 | 1,115 | 2.609375 | 3 |
Microsoft SQL Server is a ‘relational database’ server, with T-SQL and ANSI SQL as its primary query languages.
The product marked the beginning of Microsoft’s entry into the enterprise-level database market, competing against giants like IBM, Oracle, and Sybase. Microsoft SQL Server is available in multiple editions, each having its own set of features.
Prove you're a Dev Guru! Take the test now! (opens in new tab)
Each edition targets a particular user base. These editions are SQL Server Compact Edition (SQL CE), Datacenter Edition, Developer Edition, SQL Server 2005 Embedded Edition (SSEE), Enterprise Edition, Evaluation Edition, SQL Server Express Edition, Fast Track, Standard Edition, Web Edition and Workgroup Edition.
There are many uses of SQL Server. It can be useful when working with large e-commerce applications. SQL Server can also be used for most projects programmed using .NET. Here are some of the reasons to choose SQL Server.
1 SQL server is a great tool for hassle-free stability. It can handle deeply embedded applications with a mere 1MB footprint, in order to run huge data warehouses with capacity of over 1TB. SQL Server offers platform flexibility, and is open source. This allows developers to customise it according to the requirements of the client.
2 Secondly, SQL Server has a unique storage-engine architecture. This is of great benefit to database professionals when configuring a MySQL database server for various high-performance applications.
3 Reliability and scalability means very little downtime, even in the most complex IT businesses.
4 SQL Server offers complete ACID ('atomic, consistent, isolated, durable') transaction support, multi-version transaction support, and unlimited row-level locking. High-traffic web portals cannot do without the SQL server, thanks to its high-performance query engine, and lightning-fast data insertion functionality.
5 SQL Server provides SSH (Secure Shell) and SSL (Secure Sockets Layer) support to ensure safe and secure connections.
6 SQL Server offers support for almost every possible application development. For example, it offers support for ANSI-standard SQL, triggers, functions, plug-in libraries, drivers (ODBC, JDBC), and cursors. | <urn:uuid:355fa20b-a4fa-4883-93b4-3f8aa8786320> | CC-MAIN-2022-40 | https://www.itproportal.com/2010/06/17/why-i-prefer-sql-server/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00598.warc.gz | en | 0.871942 | 465 | 2.578125 | 3 |
There has been significant growth in the volume of cyber crime targeting data centers and networks. Last year showed an increase of cyber attacks reported, and the discovery of vulnerabilities was at a standstill but remaining high in numbers.
This all coming from a newly published cybersecurity report by HP. One of the main findings was a dramatic increase of web exploit tool kits, which HP calls the top cyber-crime weapon. The toolkits are traded online and are used to access IT systems and in turn, valuable information and data are stolen.
The report also found that as soon as vulnerabilities were found and in the process of being fixed, new ones were compromised by cyber attacks. Third-party plugins to content management systems were found to be the No. 1 cause of web application vulnerabilities.
Web application vulnerabilities were shown to represent about half of all security vulnerabilities. Websites such as WordPress, Joomla and Drupal were revealed to be the most-frequently attacked systems.
HP suggests reducing attacks by implementing improved security practices, keeping systems up-to-date and learning more about HP's digital Vaccine Labs. The lab is a research organization for vulnerability analysis and discovery. | <urn:uuid:241d3683-507f-4100-9d71-7804038c7fab> | CC-MAIN-2022-40 | https://blog.executivebiz.com/2011/04/report-ids-web-exploit-toolkit-as-top-cyber-weapons/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00598.warc.gz | en | 0.977382 | 235 | 2.578125 | 3 |
Welcome to MIMO and Beamforming in 5G
- Course Length:
- 1.5 hours
This course provides a technical introduction to MIMO and beamforming in 5G. You will learn the role of antennas in wireless communications, the evolution of antenna techniques and the difference between passive and active antenna systems. Additionally, the concepts of MIMO and Massive MIMO will be explained along with the utilization of SU-MIMO and MU-MIMO to increase throughput and capacity, respectively, in wireless systems. Lastly, you will learn the types of beams used in 5G NR and the techniques used to produce them as well as beam management techniques such as beam sweeping, beam selection, beam switching and beam failure recovery so that you are better equipped to configure beamforming parameters and monitor beam performance.
This course is designed for a broad audience of personnel working in the wireless industry.
After completing this course, the student will be able to:
■ Describe the types of antenna techniques
■ Differentiate between passive and active antennas
■ Explain the concept of MIMO
■ Explain Massive MIMO and its uses
■ Describe SU-MIMO and MU-MIMO
■ Describe beamforming
■ Differentiate the beamforming techniques
■ Explain beam management in 5G systems
1. MIMO Fundamentals
1.1 Transmit and Receive Diversity
1.2 MIMO: What and why?
1.3 Single-User MIMO (SU-MIMO)
1.4 Multi-User MIMO (MU-MIMO)
1.5 DL and UL MIMO in 5G
1.6 Massive MIMO
2.1 Beamforming: What and why?
2.2 Analog, digital and hybrid beamforming
2.3 Beamforming in 5G
3. Beamforming in 5G
3.1 Introduction to beam management
3.2 SSB-Block and traffic beams in 5G
3.3 Beam sweeping
3.4 Beam selection
3.5 Beam change
3.6 Beam failure recovery
Putting It All Together | <urn:uuid:ef9e0d33-f83c-420c-9965-13be1a7ddf2d> | CC-MAIN-2022-40 | https://www.awardsolutions.com/portal/blended/welcome-mimo-and-beamforming-5g?destination=on-demand-courses | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00598.warc.gz | en | 0.870715 | 454 | 2.625 | 3 |
The Children’s Online Privacy Protection Act (COPPA) came into effect on April 21, 2000, and was officially amended on July 1, 2013.
What is the Children’s’ Online Privacy Protection Act?
The purpose of COPPA is to give parents more control over what information is collected from children, and about children, under the age of 13.
COPPA applies to organizations that offer online services, including websites, apps and IoT devices (such as smart toys), which collect, use, or disclose personal information from (or about) children.
What Constitutes “Personal Information”?
Although the definition of “personal information” can vary between different data privacy regulations, the differences are usually subtle. For example, information such as names, addresses, telephone numbers and Social Security numbers are examples of personal information that are common across all data privacy laws. However, the definition is often expanded depending on the industry or user group which the regulation pertains to. For example, under COPPA, the following would also be considered “personal information”.
- A screen or user name that functions as online contact information;
- A persistent identifier that can be used to recognize a user over time and across different websites or online services;
- A photograph, video, or audio file, where such file contains a child’s image or voice;
- Information concerning the child or the parents of that child that the operator collects online from the child and combines with an identifier described above.
How is COPPA Enforced and What are the Penalties for Failing to Comply with it?
COPPA is ultimately enforced by the Federal Trade Commission (FTC). However, parents and other relevant stakeholders have the option to report COPPA violations to the FTC, either via their website, or by calling a toll free number at (877) FTC-HELP.
An organization in violation with COPPA can be subject to fines of up to $43,792 per violation. However, the exact amount depends on a number of factors, which include; the egregiousness of the violation, the type and amount of personal information involved, how the information was used, whether it was shared with third parties, and the size of the company.
How to Comply with The Children’s Online Privacy Protection Act
Update your privacy policies & notices
If you have read this far, the chances are your organization collects personal information associated with children under the age of 13. As such, you will need to ensure that you have published a clear and comprehensive privacy notice that describes how you collect, use, share and store their personal information, and you must ensure that you have obtained the necessary consent from the children’s parents before collecting it.
Under COPPA, parents must be granted access to their child’s personal information. As such, you will need to establish a formal procedure for enabling parents to request access to their child’s information, and you must be able to fulfil the request in a timely manner.
Discover & classify personal information
As mentioned above, organizations must give parents access to their child’s personal information, which they can review, edit and delete if necessary. Many organizations store large amounts of unstructured data, and this data might exist in multiple locations/data centers.
This can make locating data in a timely manner tricky, especially if they are not even aware that the data exists. Organizations should adopt a sophisticated data classification tool which will automatically scan their repositories (both on-premise and “in the cloud”) for personal information, and classify the information accordingly.
Most proprietary solutions will provide a wide range of templates that are mapped to specific data privacy laws, such as the GDPR, HIPAA, CCPA, and of course, COPPA.
Minimize the amount of data you collect and store
Once you have classified your data, it is good practice to remove any data that is ROT (Redundant, Obsolete and Trivial). Naturally, if you only store the data you absolutely need, this will help to minimize the likelihood of a data breach.
Likewise, you will also need to ensure that you are only collecting the information you need, which requires developing a comprehensive data retention policy. A data retention policy is a set of guidelines, typically in the form of a spreadsheet, that helps organizations keep track of how long certain types of information should be retained, and how the information should be disposed of when no longer required.
When the retention period for a specific piece of information has expired, you will need to either securely/thoroughly dispose of the data or anonymize it.
Enforce “least privilege” access
Access to a child’s personal information must only be granted to those who legitimately need access to it. You will need an access control policy that describes the procedures for granting and revoking access to personal information.
It’s also worth noting that, under COPPA, parents have the right to prohibit organizations from disclosing their child’s personal information to third parties. A common approach that is used to quickly grant/revoke access to third-parties (and other users) is role-based access control (RBAC), whereby access rights are assigned to groups (or roles), and members are assigned to those groups.
For example, you could setup a group called “Business Associates”, with its own unique set of access rights. Adding/removing members from this group will be significantly easier and less error-prone than assigning specific rights to specific users, although RBAC is typically less granular.
Conduct a third-party risk assessment
You must ensure that you have taken reasonable steps to ensure that the third-parties you share personal information with are able to maintain the confidentiality, security, and integrity of the data. You will need to get them to sign some form of agreement to ensure that they are able to satisfy the COPPA compliance requirements, and you will need to periodically review the security controls they have in place to ensure they are still relevant.
Monitor access to personal information
Although not directly relevant to complying with COPPA, if you do not have visibility into who has access to your data, you will find it hard to demonstrate compliance to the supervisory authorities.
Most Data Security Platforms can aggregate and correlate event data from multiple repositories and display a summary of this information via a centralized dashboard. From there, you can simply select the relevant regulation (in this case COPPA) and generate a customized report at the click of a button.
These reports can be used as evidence to show that you know when a child’s personal information has been accessed, shared, moved, modified or removed, and by whom.
A real-time auditing solution will also help to identify anomalous user behaviour, either based on a single event of pre-defined threshold condition. If, for example, a child’s information is accessed outside of office hours, an alert will be sent to the administrator, which they can follow up to ensure that the data is being accessed for legitimate reasons.
If you’d like to see how the Lepide Data Security Platform can help give you more visibility over your sensitive data and help you be compliant with COPPA, schedule a demo with one of our engineers or start your free trial today. | <urn:uuid:7ab74d58-73a2-4201-ae17-678d6b417afd> | CC-MAIN-2022-40 | https://www.lepide.com/blog/what-is-the-childrens-online-privacy-protection-act-coppa-and-how-to-comply/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00598.warc.gz | en | 0.937331 | 1,523 | 3.015625 | 3 |
The network cabling industry’s fiber optic manufacturers over the last few decades have been on a constant mission to develop the better fiber connector. This means lower cost, lower dB losses, easier to terminate out in the field. There have been over 100 connectors developed over the years but a select few have stood the test of time and beat out their competition. Now, let’s talk about the most common fiber connectors as following:
A fiber optic connector terminates at the end of a fiber optic cable is used when you need a means to connect and disconnect the fiber cable quickly. A fiber splice would be used in a more permanent application. the connectors provide a mechanical connection for the two fiber cables and align both cores precisely so the light can pass through with little loss. There are many different types of connectors but many share similar features. Many connectors are spring loaded. This will push the fiber ends very close other so as to eliminate airspace between them, which would result in higher dB losses.
There are generally five main components to a fiber connector: the ferrule, the body, the coupling structure, the boot and the dust cap.
Ferrule: The ferrule is the small round cylinder that actually makes contact with the glass and holds it in place. These are commonly made of ceramic today but also are made of metal and plastic.
Body: This sub assembly holds the ferrule in place. It then fits into the connector housing.
Connector Housing: This holds all sub assembly parts in place and has the coupling that will connect to the customer’s equipment. The securing mechanism is usually bayonet, snap-in or screw on type.
Boot: This will cover the transition from the connector to the fiber optic cable. Provides stress relief.
Dust Cap: Just as it implies will protect the connector from accumulating dust.
There are many types of connectors on the market. The major differences are the dimensions and the method of connection to equipment. Most companies will settle on one type of connector and keep that as a standard across the board. It makes sense because all equipment has to be ordered with that specific connector type and to have 2 or 3 different connector types can get messy. For typical network cabling projects today LC is fast becoming the shining star of fiber connectors. LC is a small form factor connector which means it requires a much smaller footprint in your IT closet. Thus you can fit many more LC connectors into you fiber panels then say ST or SC connectors.
The LC connector was developed by Lucent Technologies, hence the LC. It is a Single Form Factor Connector that has a 1.25mm ferrule. The attaching mechanism is similar to an RJ-45 connector with the retaining clip. It is a smaller square connector, similar to the SC. LC connectors are often held together with a duplex plastic retainer. They are also very common in single mode fiber applications.
The ST connector was the first popular connector type to be used as a standard for many organizations in their fiber network applications. It has first developed by AT&T. Often called the “round connector” it has a spring loaded twist bayonet mount with a 2.5mm round ferrule and a round body. The ST connector is fast being replaced with the smaller, denser SFF connectors.
The SC connector is a push in/pull-out type connector that also has a 2.5 mm ferrule. It is very popular for its excellent performance record. The SC connector was standardized in TIA-568-A, and has been very popular for the last 15 years or so. It took a while to surpass the ST because of price and the fact that users were comfortable with the ST. Now it’s much more competitive with pricing and it is very easy install, only requiring a push in and pull out connection. This is very helpful in tight spaces. Simplex and duplex SC connectors are available. The SC was developed by the Japanese and some say stands for Standard Connector.
The FC connector you may find in older single mode installations. It was a popular choice that has been replaced by mostly ST or SC type connectors. It also has a 2.5mm ferrule. They have a screw on retaining mechanism but you need to be sure the key and slot on the connector are aligned correctly. FC connectors can also be mated to ST & SC’s through the use of an adaptor.
MTRJ stands for Mechanical-Transfer Registered Jack and was developed by Amp/Tyco and Corning. MTRJ is very similar to an RJ type modular plug. The connector is always found in duplex form. The body assembly of the connector is usually made from plastic and clips and locks into place. There are small pins present that guide the fiber for correct alignment. MTRJ’s also are available in male or female orientation. They are only used for multi-mode applications. They can also be difficult to test because many testers on the market do not accept a direct connection. You usually need to rig up a patch cord adaptor kit to make testing possible.
MU looks a miniature SC with a 1.25 mm ferrule. It’s more popular in Japan.
MT is a 12 fiber connector for ribbon cable. It’s main use is for preterminated cable assemblies and cabling systems. Here is a 12 fiber MT broken out into 12 STs.
MT connector is sometimes called a MTP or MPO connector which are commercial names.
Hopefully this guide may help you get an idea of what options are out there for your fiber optic connector needs.
As the best Chinese fiber optic products supplier, FiberStore Inc. supply a range of fiber connectors, fiber attenuators, fiber optic switch and more. If you would like to know more about our products information, please pay attention our news or contact us directly. | <urn:uuid:19b4c16c-bd7e-4d9e-99f2-237dffe4b772> | CC-MAIN-2022-40 | https://www.fiber-optic-components.com/tag/fiber-attenuators | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00798.warc.gz | en | 0.958966 | 1,234 | 2.796875 | 3 |
2020 began full of hope and optimism as tech analysts and futurists gazed into their virtual crystal balls with predictions for the decade ahead. Emerging technologies such as 5G, IoT, artificial intelligence, augmented reality, and blockchain would pave the way for a new digital world where everything is connected.
However, Gartner predicted that a more proactive approach to privacy and data protection would become our primary focus. Authorities around the world were also beginning to focus on tech giants such as Google, Facebook, Amazon that were beginning to have too much influence on our lives, culture, and the global economy.
Even Microsoft co-founder Bill Gates warned that Governments would need to regulate big tech companies while wearing a variety of pastel sweaters. The global pandemic changed everything, and in under six short months, our world looks unrecognizable, and the future now looks uncertain. Ironically, governments now appear to have a different view of the privacy of its citizens.
The rise of coronavirus contact tracing apps
Tracing the virus's path will play a critical role in getting back to the way things used to be. Traditional methods would involve in-person interviews with patients. But it's far too time-consuming, and there aren't enough available resources to make it an option. Turning to technology for a solution is naturally the next step for governments.
Mobile technology has thrown smartphone contact tracing apps into the spotlight by providing an automated solution. It is thought that the ability to trace people who are thought to have Covid-19 and those they may have unwittingly infected could win the fight against the pandemic. The digital solution is attractive for obvious reasons.
However, for these apps to be successful, around 60% of the population will need to download them. Even if they do, there are a few inconvenient technical limitations and fears from citizens that some governments might be tempted to leverage the tech tracking capabilities to expand surveillance.
Concerns that these battery draining apps could be used as data collection tools could also slow down adoption. Some might be guilty of over-reporting symptoms that result in the creation of false positives. Others might be tempted to not report anything at all, in fear that the data might be used to discriminate against them. Addressing all of these concerns to increase adoption provides authorities with two options.
Centralized vs. decentralized apps
The decentralized app is the preferred option for data protection advocates who have one eye on how this approach will impact citizens in a post-pandemic world. Decentralized servers remove the trail back to participants and the concern of a government or entity harvesting vast amounts of data about them.
A centralized system promises to anonymize the data, but it does require a significant amount of trust that those managing the server won't be tempted to misuse the information. The bigger question is, what happens to the data that reveals the movements of you and your friends after COVID-19?
Governments such as the UK have rejected Apple and Google's "decentralized" approach in favour of a "centralized model." But in doing so, it lacked the safeguards put in place by the tech giants that ensured data was protected. News that the app had failed basic cybersecurity tests before its release set off a few alarm bells.
Further news that data cannot be deleted as it might be kept for research purposes after the crisis prompted many users to voice their concerns across social media.
By contrast, many European countries, such as Austria, Estonia, and Switzerland, have embraced the decentralized approach. Although initially tempted by centralized models, Germany and Italy have also chosen the option that protects people's privacy while arguably not giving the authorities the same level of information they need. The biggest problem is that the privacy implications of these tracking apps will depend entirely on the country you reside in.
Many of us are more than aware that the most used apps on our smartphones already track our movement. But as intrusive as Facebook can be, it doesn't tell you to self-isolate every time somebody walks past your window or in the opposite supermarket aisle. In an age of instant gratification, we need to accept that there is no such thing as a quick fix and that the technology that promises a quick win is also somewhat flawed.
How many people will hand over their freedom to clinically informed algorithms on a centralized database? The danger is that these apps will be deleted quicker than an exhibition or conference app if they deliver frustration rather than value or peace of mind. Ironically, it's now the tech companies offering privacy-focused decentralized solutions while governments are being accused of being far too invasive.
2020 will be remembered as the year that the world battled the coronavirus pandemic. If we dare to look beyond COVID-19, there is an argument that the real battle is for privacy. Long after you have deleted the Coronavirus tracking apps, ask yourself what happens to that data? The answer to that question might suggest that the privacy fight will continue for many years to come. | <urn:uuid:392eedc6-b8aa-4c7d-b3bf-b7ed179f0bab> | CC-MAIN-2022-40 | https://cybernews.com/privacy/are-coronavirus-contact-tracing-apps-putting-your-privacy-at-risk/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00798.warc.gz | en | 0.960251 | 1,010 | 2.78125 | 3 |
FREMONT, CA: Nanotechnology intersects with virtual reality (VR) on various levels and platforms. It involves devices up to 0.1 to 100 nanometers.
One of the use cases of nanotechnology was creating a long-lasting battery for VR and augmented or mixed reality (MR) technologies. The nanoscale perspective of the device allows better optimization of the battery life. Nanotechnology also helps to make VR and augmented reality (AR) realistic and immersive. Higher resolutions and viewing angles are possible with the help of nanotechnology. With the small screens that are used in VR and AR-based devices, it is tough to adjust the resolution, viewing angles and, lighting conditions. Setting the optics is also crucial due to the large area for the sensor input in the screen, for example, an AR headset. However, nanotechnology enables the precise optical setups, thereby providing the users with an immersive experience.
Nanotechnology plays a crucial role to enhance the sensor compatibility with respect to AR, VR, or mixed reality systems. Nano size sensors include optical sensors, location sensors, vibration sensors, voice sensors, gyroscopes, accelerometers, and other environmental sensors. Though many of these sensors are in use, with nanotechnology, they can be integrated with AR or VR based setups, and all the data from the sensor can be processed at the nanoscale.
Communication capabilities among the various elements of VR also have a vast scope of improvement with nanotechnology. Using the nanotech build sensors bridges the gap between the communications-driven aspects of VR. These sensors interpret gestures or movements using motion tracking and translate them to commands to be processed within the VR systems.
However, the most significant advantage of nanotechnology is that it allows light packaged end devices. Especially the challenge that comes along VR systems or an intelligent device is to ensure that their packaging is not bulky. With nanotechnology, hardware size is steadily shrinking that also minimizes the transportation risk. Overall, nanotech reduces the risk factor and builds consumers’ trust over the product. | <urn:uuid:c5e89118-ce25-4c3c-ab42-5bdf0f275dc0> | CC-MAIN-2022-40 | https://www.enterprisetechnologyreview.com/news/amalgamating-nanotechnology-virtual-reality-and-augmented-reality-nwid-204.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00798.warc.gz | en | 0.914689 | 416 | 3.046875 | 3 |
U.S. intelligence and military are speeding new sensors to space. They are still working on details of who’s ultimately in charge during a conflict.
The National Reconnaissance Office controls some of the most advanced sensing capabilities in space, which are integrated into the military’s own space intelligence capabilities.
But there’s still a division between intelligence and military operations, and two years into the U.S. Space Command and U.S. Space Force’s existence, there’s not a clear answer as to what would happen in a conflict that included space, and whether the combatant commander for space would lead the decision-making process
“That's something we're working through,” National Reconnaissance Office director Christopher Scolese said Thursday during a Mitchell Institute discussion.
The NRO gets its direction on how to prioritize its satellite taskings—which can detect and track missile launches or provide ground intelligence to commanders—from the National Security Agency and the National Geospatial Intelligence Agency, Scolese said. While those agencies take in requests from the combatant commanders, sometimes the agencies have different priorities.
“For the most part, it's a coordination effort, but sometimes it will be a, ‘Hey, you need to do this and we will do that,” Scolese said.
Space Command has responsibility for everything 100 kilometers above the Earth’s surface and beyond, which includes all of the low-Earth and geostationary orbits, overlapping the NRO’s area of operations.
U.S. Space Command and the NRO agreed to a “protect and defend” framework in 2019 that Chief of Space Operations Gen. Jay Raymond said at the time outlined that “when a threat is imminent, the NRO will execute direction from U.S. Space Command to take protective and defensive actions to safeguard their space assets.”
But what that means in execution is still being sorted through, Scolese said.
“This particular document, the ‘protect and defend’ as we call it, establishes the framework for how we're going to operate under various conditions, because it will be necessary for us to coordinate, and in some cases take direction. And we have agreed to do that,” Scolese said. “We're in the process of developing the strategies on how that happens, and when it happens, and under what situations it happens.” | <urn:uuid:5b116027-1c6d-4783-b996-83f4d8ba0cea> | CC-MAIN-2022-40 | https://www.nextgov.com/emerging-tech/2022/08/conflict-involving-space-will-nros-assets-fall-under-space-command/375511/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00798.warc.gz | en | 0.947943 | 510 | 2.609375 | 3 |
This article explored all the disadvantages and advantages of blockchain technology. Blockchain technology has been positioned as a game-changing innovation that primarily provides security levels never previously seen. This makes it a remarkably adaptable technology, required and wanted by the IT or financial industries and all industries. Rather than cryptocurrencies, there are several blockchain use cases, such as blockchain gaming.
Before we get started, here is a list of the best blockchain books in 2022 for better understanding. You may have heard about the blockchain talent gap and started to ask what is a blockchain developer. But unfortunately, you find some blockchain implementation challenges and security vulnerabilities. However, the advantages of blockchain are worth dealing with them. Let’s dive in.
Table of Contents
Advantages of blockchain technology
Blockchain technology is advancing quickly, and this trend is not about to stop. In the last few decades, many things that looked impossible—like high transaction costs, double spending, net fraud, recovering lost data, etc.—turned out to be true. However, all of this can be avoided with the development of blockchain technology. According to Statista, the most common use case for blockchain technology is securing information exchange.
These are some of the best advantages of blockchain technology:
- Cost reduction
- Free from censorship
- Faster processing
Anyone who decides to use and grasp blockchain technology will find it revolutionary. So let’s talk about the advantages of blockchain:
Blockchain technology makes transaction histories more transparent than ever. Each node in the network has a copy of the documentation because it is a particular form of a distributed ledger. Everyone may view the data on a blockchain ledger with ease.
Everyone in the network can notice the change and the updated record if a transaction history is changed. Therefore, everyone has access to all the information regarding currency exchange.
Compared to previous platforms or record-keeping systems, blockchain technology employs superior security. The consensus approach must be used to agree on all recorded transactions. Additionally, using a hashing algorithm, each transaction is encrypted and properly linked to the previous transaction.
Each node has a copy of every transaction made on the network, further improving security. As a result, other nodes will refuse his request to write transactions to the network, meaning that if a malicious actor ever wants to modify the transaction, they will be unable to do so.
Blockchain significantly reduces business costs because it eliminates the need for intermediaries and third parties. You don’t require anyone else to create the terms and conditions of exchange because you may put your trust in the trading partner. Allowing everyone to read a single, immutable version of the ledger reduces the time and money spent on documentation and its changes.
Free from censorship
The concept of trustworthy nodes for validation and consensus procedures that authorize transactions using smart contracts makes blockchain technology free from censorship since it is not under the authority of any one party.
It takes a lot of labor to complete a transaction using conventional paper-based systems since they require third parties to mediate and are prone to human mistakes.
Blockchain can speed up and discipline these antiquated processes, minimize error-proneness, and increase trading’s efficiency. Parties don’t need to maintain various records because there is only one ledger, which results in significantly less clutter.
Additionally, building trust is simpler when everyone has access to the same information. Settlements can be made simple and easy without the need for middlemen.
The transaction speed rose significantly once the blockchain technology was developed compared to how long it took the traditional banking organization to process and started the transaction.
Before the advent of blockchain, the banking process took about three days to settle. However, after its implementation, the time decreased to only a few minutes or even seconds.
Tracking things back to their sources in complex supply chains is challenging. However, with blockchain, the trades of items are tracked, giving you an audit trail to determine where a specific object was acquired.
Additionally, you learn about every stop the product made along the way. This level of product tracking can assist customers in confirming the product’s legitimacy and stopping fraud.
The above-mentioned point’s auditability also has another component. You can already observe and verify the legitimacy of your asset thanks to the audit trail that exists because each transaction is logged in the blockchain for the entirety of its lifetime.
Advantages of blockchain technology by sectors
The use of blockchain technology has various benefits. The first advantages are that it is secure, open, and difficult to manipulate. It enables widespread distribution and access to digital information without allowing for editing, making it a remarkably trustworthy source of first-hand information. It also enables the existence and trading of cryptocurrencies like Bitcoin.
Check out the best enterprise blockchain examples
Let’s take deep dive into the advantages of blockchain technology by sector.
Advantages of blockchain in accounting
How does blockchain benefit an accountant today? These are the advantages of blockchain in accounting:
- Rarer fraudulent actions
- Gain time for accountants
- Secure the data
- Give opportunity for upskilling
- Knock-on effect to business models
- Attract new customers
Advantages of blockchain in healthcare
What advantages does blockchain have for healthcare today? The benefits of blockchain in healthcare are as follows:
- Patient profile privacy
- Drug traceability
- Improved clinical trials
- Electronic health records (EHRs)
Advantages of blockchain in the energy sector
What benefits does blockchain technology now provide for the energy sector? The following are some advantages of blockchain in the energy sector:
- Environmental sustainability
- Fewer costs
Advantages of blockchain in real estate
What advantages does blockchain technology currently provide the real estate industry? Real estate industry can benefit from blockchain in the following ways:
- Identity of the proper tenant and investor
- Property sale
- Real-time accounting
Advantages of blockchain in trade finance
What benefits can trade finance currently get from blockchain technology? The following are some ways that blockchain can help trade finance:
- Data integrity
- Streamlined process
- Market reactivity
- Cost savings
Advantages of blockchain in government
What advantages does blockchain technology currently offer to a government? The government can benefit from blockchain in the following ways:
- Identity management
- Fair elections
- Finance management
Advantages of blockchain in logistics
Which benefits does blockchain technology now provide for logistics? The logistic sector can benefit from blockchain in the following ways:
- Improved freight tracking
- More suitable carrier onboarding
- Vehicle-to-vehicle communication
- Security for the Internet of Things (IoT) Devices
As you can see, there are a lot of advantages to the blockchain. But unfortunately, like everything else in the world, blockchain technology has some drawbacks too.
Disadvantages of blockchain
Since many blockchain solutions are experiencing early-stage issues, blockchain is not without its drawbacks and troublesome characteristics.
These are some of the most common disadvantages of blockchain:
- Power use
- Private keys
Let’s take a closer look at them.
Blockchain applications are quite well-liked by Bitcoin investors. However, it can only process seven transactions per second, compared to 10,000 for Hyprledger and 24,000 for Visa. With the issue of scalability in mind, it becomes more difficult to envision how blockchain could be used in practice.
Due to mining operations, the Blockchain has a somewhat high power usage. One of the reasons for this usage is that every time a new node is created, it simultaneously connects with every other node to maintain a real-time ledger.
The issue of storage arises because blockchain databases are permanently kept on all network nodes. There is no possibility that personal computers can hold an infinite amount of data that keeps getting added to as the number of transactions rises.
All nodes in the network have access to encrypted and anonymous data on a public blockchain. Therefore, this data is legitimately accessible to everyone on the network. Transactional data may be used to identify a person in the network, just as corporations typically use web trackers, cookies, and other tracking technologies.
As has been noted numerous times, excessive security can also be a weakness in the case of private keys. Once lost, these keys are virtually impossible to retrieve, which presents a challenge, particularly for those who possess cryptographic valuables.
Regulatory frameworks in the banking sector present difficulty in adopting blockchain. Blockchain applications will need to specify how to identify the fraudster if one occurs, which is a difficult task. For blockchain technology to be widely adopted, additional regulatory requirements must first be established.
Over time, it has been clear that the Proof of Work consensus method, which safeguards cryptocurrencies like Bitcoin on the blockchain, is extremely effective. However, there are a few potential ways to attack blockchain networks, with 51% of attacks being one of the most widespread.
Such an attack may occur if one party gains control of over 50% of the network’s hashing power. In that case, they would be able to intentionally exclude or alter the order of transactions, disrupting the network.
Don’t you know what is 51% attack? Check out our blockchain glossary
Which industry can benefit from blockchain?
Despite its disadvantages, the ability of blockchain technology to arrange data efficiently has led to the growing availability of this technology across industries.
Do you know which of the 4 types of blockchain is right for your business?
We have already explained some sectors that are heavily affected by blockchain above. Followings are some of the industries that seriously make benefit from blockchain technology too:
- Law enforcement and security
- Supply chains
- Software security
- Messaging apps
- Travel and mobility
- Product development
- Higher education
- Blockchain in HR
Why is blockchain the future?
At least one cutting-edge blockchain-based company will be valued at $10 billion by 2022. The extra value that blockchain technology brings to business will reach slightly over $360 billion by 2026 and more than $3.1 trillion by 2030.
Check out the best blockchain platforms
Blockchain will establish a reliable, uncensored, and accessible global data and information repository. This quality will guide the development of the third generation of the internet.
The internet is driving everything in the world now, and it will be the same in the future. In this context, what drives the internet also drives the future. That’s why blockchain is the future.
Blockchain is a revolutionary technology that has a big impact on practically every business. Blockchain networks, however, have the potential to be both benefits and drawbacks, operating as a double-edged sword that might operate both in favor of and against the broad adoption and use of this technology.
Understanding the technology will help people succeed in the future, whether they are experienced blockchain developers or are hoping to break into this fascinating sector.
Learning as much as you can about blockchain is the most obvious advice. Blockchain should not be confused with Bitcoin or other cryptocurrencies. A blockchain is used by bitcoin. However, a blockchain is not bitcoin. Blockchain technology has much more utility than just how cryptocurrencies initially implemented it. | <urn:uuid:190c6d6a-f540-4c5e-9015-961f04a03054> | CC-MAIN-2022-40 | https://dataconomy.com/2022/08/disadvantages-and-advantages-of-blockchain/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00798.warc.gz | en | 0.931503 | 2,384 | 2.546875 | 3 |
California Gov. Arnold Schwarzenegger on Wednesday announced that he will propose nearly US$95 million in the state budget to create the Governor’s Research and Innovation Initiative.
The initiative would provide funding for major projects aimed at growing California’s economic strength in key innovation sectors, including cleantech, biotech and nanotech.
“With some of the world’s finest universities and research institutions, the Golden State has more scientists, engineers and researchers and invests more on research and development than any other state,” Schwarzenegger said. “As a leader in developing new technologies, California will reap tremendous rewards for our economy and environment from this investment in our innovation infrastructure.”
There is a growing interest in clean technology. North American and European cleantech investment totaled $1.081 billion in the third quarter of 2006.
More than $934 million was invested in North American cleantech companies, making cleantech the third largest venture capital category for the quarter.
The largest third-quarter cleantech deals in North America were brokered by Cilion, Altra, Ion America, Renewable Energy Group and Newmarket, totaling $572 million, or 61.2 percent of cleantech investment in the quarter.
As a part of his proposed budget that will be unveiled in January, the major component of the Governor’s Research and Innovation Initiative is $30 million in lease revenue bonds for the Helios Project, a project by the University of California’s Lawrence Berkeley National Laboratory to create sustainable, carbon-neutral sources of energy.
The Helios Project’s four goals are to generate clean sustainable alternatives to hydrocarbon fuels, develop new energy sources, improve energy conservation and reduce greenhouse gas emissions. The $30 million will be used to build a new energy/nanotechnology research building for the Helios Project.
A Gliding Path
“Solar is now on a glide path to good parity, being led by the state of California,” according to Jigar Shah, CEO of SunEdison.
“The California Solar Initiative’s design is that, over the next 10 years, solar energy will be cost effective [even] without any state funding and research programs like the one proposed by the governor. [It is] critical to continue on a quick pace of innovation to lead to sufficient cost reductions to hit that target,” Shah told TechNewsWorld.
The Department of Energy offers a program for universities to participate in solar research. Solar companies such as SunEdison are hoping that DOE funding will be married with the governor’s proposed funding, so that innovation can occur across the country.
“You’ll find there will be increased support across the country for solar research now that solar has demonstrated that it’s viable. The question is with this research money how much faster can we make it,” he said. “A lot of solar companies will choose to keep operations in the states where the research is being done.” | <urn:uuid:11e8ae59-ed0c-43e1-9881-90a9ea63ce6f> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/california-may-spend-95-million-on-clean-technologies-54916.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00798.warc.gz | en | 0.936517 | 629 | 2.671875 | 3 |
GDPR (General Data Protection Regulation) is a regulation of the European Union through which the European Parliament, the Council of the European Union and the European Commission strengthen and standardize the protection of personal data of all EU citizens. The Regulation also covers the export of personal data outside the European Union.
GDPR is primarily directed at enabling the citizens to control their own personal data and at the simplification of the regulatory environment of the international economic relationships by harmonizing the regulations within the European Union.
The important point is that GDPR applies both to the entity that processes data and to the entity that collects it. The data controller determines the purposes and means of the processing, while the data processor carries out the processing itself, but both are liable for compliance with GDPR.
Application Programming Interface (API)
What is an application programming interface (API)?
An application programming interface, or API, enables companies to open up their applications’ data and functionality to external third-party developers, business partners, and internal departments within their companies. This allows services and products to communicate with each other and leverage each other’s data and functionality through a documented interface. Developers don't need to know how an API is implemented; they simply use the interface to communicate with other products and services. API use has surged over the past decade, to the degree that many of the most popular web applications today would not be possible without APIs.
How an API works
An API is a set of defined rules that explain how computers or applications communicate with one another. APIs sit between an application and the web server, acting as an intermediary layer that processes data transfer between systems.
Here’s how an API works:
- A client application initiates an API call to retrieve information—also known as a request. This request is processed from an application to the web server via the API’s Uniform Resource Identifier (URI) and includes a request verb, headers, and sometimes, a request body.
- After receiving a valid request, the API makes a call to the external program or web server.
- The server sends a response to the API with the requested information.
- The API transfers the data to the initial requesting application.
While the data transfer will differ depending on the web service being used, this process of requests and responses all happens through an API. Whereas a user interface is designed for use by humans, APIs are designed for use by a computer or application.
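To make that cycle concrete, here is a minimal sketch in Python using the requests library; the endpoint URL, header values and token are placeholders rather than any real service's details.

import requests

# Hypothetical endpoint; substitute a real API's URI.
url = "https://api.example.com/v1/orders/12345"

# Headers commonly carry content negotiation and authorization credentials.
headers = {
    "Accept": "application/json",
    "Authorization": "Bearer YOUR_API_TOKEN",  # placeholder credential
}

# 1. The client application initiates the API call (the request).
response = requests.get(url, headers=headers, timeout=10)

# 2./3. The server processes the call and returns a response.
response.raise_for_status()  # surface HTTP errors rather than ignoring them

# 4. The API hands the requested data back to the calling application.
order = response.json()
print(order)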
APIs offer security by design because their position as middleman facilitates the abstraction of functionality between two systems—the API endpoint decouples the consuming application from the infrastructure providing the service. API calls usually include authorization credentials to reduce the risk of attacks on the server, and an API gateway can limit access to minimize security threats. Also, during the exchange, HTTP headers, cookies, or query string parameters provide additional security layers to the data.
For example, consider an API offered by a payment processing service. Customers can enter their card details on the frontend of an application for an ecommerce store. The payment processor doesn’t require access to the user’s bank account; the API creates a unique token for this transaction and includes it in the API call to the server. This ensures a higher level of security against potential hacking threats.
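A rough sketch of that tokenized flow is shown below; the payments host, paths, field names and token format are illustrative assumptions, not any particular provider's API.

import requests

PAYMENTS_API = "https://payments.example.com/v1"  # hypothetical processor

def tokenize_card(card_number: str, expiry: str, cvc: str) -> str:
    # Exchange raw card details for a single-use token (illustrative only).
    resp = requests.post(
        f"{PAYMENTS_API}/tokens",
        json={"card_number": card_number, "expiry": expiry, "cvc": cvc},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["token"]

def charge(token: str, amount_cents: int) -> dict:
    # The merchant charges against the token; bank details never reach its servers.
    resp = requests.post(
        f"{PAYMENTS_API}/charges",
        json={"token": token, "amount": amount_cents, "currency": "usd"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()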
Why we need APIs
Whether you’re managing existing tools or designing new ones, you can use an application programming interface to simplify the process. Some of the main benefits of APIs include the following:
- Improved collaboration: The average enterprise uses almost 1,200 cloud applications (link resides outside of IBM), many of which are disconnected. APIs enable integration so that these platforms and apps can seamlessly communicate with one another. Through this integration, companies can automate workflows and improve workplace collaboration. Without APIs, many enterprises would lack connectivity and would suffer from informational silos that compromise productivity and performance.
- Easier innovation: APIs offer flexibility, allowing companies to make connections with new business partners, offer new services to their existing market, and, ultimately, access new markets that can generate massive returns and drive digital transformation. For example, the company Stripe began as an API with just seven lines of code. The company has since partnered with many of the biggest enterprises in the world, diversified to offer loans and corporate cards, and was recently valued at USD 36 billion (link resides outside of IBM).
- Data monetization: Many companies choose to offer APIs for free, at least initially, so that they can build an audience of developers around their brand and forge relationships with potential business partners. However, if the API grants access to valuable digital assets, you can monetize it by selling access (this is referred to as the API economy). When AccuWeather (link resides outside of IBM) launched its self-service developer portal to sell a wide range of API packages, it took just 10 months to attract 24,000 developers, selling 11,000 API keys and building a thriving community in the process.
- Added security: As noted above, APIs create an added layer of protection between your data and a server. Developers can further strengthen API security by using tokens, signatures, and Transport Layer Security (TLS) encryption; by implementing API gateways to manage and authenticate traffic; and by practicing effective API management.
Common API examples
Because APIs allow companies to open up access to their resources while maintaining security and control, they have become a valuable aspect of modern business. Here are some popular examples of application programming interfaces you may encounter:
- Universal logins: A popular API example is the function that enables people to log in to websites by using their Facebook, Twitter, or Google profile login details. This convenient feature allows any website to leverage an API from one of the more popular services to quickly authenticate the user, saving them the time and hassle of setting up a new profile for every website service or new membership.
- Third-party payment processing: For example, the now-ubiquitous "Pay with PayPal" function you see on ecommerce websites works through an API. This allows people to pay for products online without exposing any sensitive data or granting access to unauthorized individuals.
- Travel booking comparisons: Travel booking sites aggregate thousands of flights, showcasing the cheapest options for every date and destination. This service is made possible through APIs that provide application users with access to the latest information about availability from hotels and airlines. With an autonomous exchange of data and requests, APIs dramatically reduce the time and effort involved in checking for available flights or accommodation.
- Google Maps: One of the most common examples of a good API is the Google Maps service. In addition to the core APIs that display static or interactive maps, the app utilizes other APIs and features to provide users with directions or points of interest. Through geolocation and multiple data layers, you can communicate with the Maps API when plotting travel routes or tracking items on the move, such as a delivery vehicle.
- Twitter: Each Tweet contains descriptive core attributes, including an author, a unique ID, a message, a timestamp when it was posted, and geolocation metadata. Twitter makes public Tweets and replies available to developers and allows developers to post Tweets via the company's API.
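For instance, a single Tweet can be fetched through the v2 lookup endpoint roughly as follows; the bearer token is a placeholder, and the available fields depend on your API access level.

import requests

TWEET_ID = "20"  # any public Tweet ID
url = f"https://api.twitter.com/2/tweets/{TWEET_ID}"

params = {"tweet.fields": "created_at,author_id,geo"}    # descriptive core attributes
headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}  # placeholder credential

resp = requests.get(url, params=params, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())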
Types of APIs
Nowadays, most application programming interfaces are web APIs that expose an application's data and functionality over the internet. Here are the four main types of web API:
- Open APIs are publicly available application programming interfaces you can access with the HTTP protocol. Also known as public APIs, they have defined API endpoints and request and response formats.
- Partner APIs are application programming interfaces exposed to or by strategic business partners. Typically, developers can access these APIs in self-service mode through a public API developer portal. Still, they will need to complete an onboarding process and get login credentials to access partner APIs.
- Internal APIs are application programming interfaces that remain hidden from external users. These private APIs aren't available for users outside of the company and are instead intended to improve productivity and communication across different internal development teams.
- Composite APIs combine multiple data or service APIs. These services allow developers to access several endpoints in a single call. Composite APIs are useful in microservices architecture where performing a single task may require information from several sources.
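As a sketch of the composite idea, a thin aggregation endpoint can fan out to several internal services and return one combined payload; the service URLs below are hypothetical.

import requests

# Hypothetical internal services behind one composite endpoint.
INTERNAL_SERVICES = {
    "profile": "https://users.internal.example/api/profile/{id}",
    "orders": "https://orders.internal.example/api/orders?user={id}",
    "balance": "https://billing.internal.example/api/balance/{id}",
}

def customer_overview(user_id: str) -> dict:
    # One composite call on the outside, several service calls on the inside.
    overview = {}
    for name, template in INTERNAL_SERVICES.items():
        resp = requests.get(template.format(id=user_id), timeout=5)
        resp.raise_for_status()
        overview[name] = resp.json()
    return overview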
Types of API protocols
As the use of web APIs has increased, certain protocols have been developed to provide users with a set of defined rules that specifies the accepted data types and commands. In effect, these API protocols facilitate standardized information exchange:
- SOAP (Simple Object Access Protocol) is an API protocol built with XML, enabling users to send and receive data through SMTP and HTTP. With SOAP APIs, it is easier to share information between apps or software components that are running in different environments or written in different languages.
- XML-RPC is a protocol that relies on a specific format of XML to transfer data, whereas SOAP uses a proprietary XML format. XML-RPC is older than SOAP, but much simpler, and relatively lightweight in that it uses minimum bandwidth.
- JSON-RPC is a protocol similar to XML-RPC, as they are both remote procedure calls (RPCs), but this one uses JSON instead of XML format to transfer data. Both protocols are simple. While calls may contain multiple parameters, they only expect one result.
- REST (Representational State Transfer) is a set of web API architecture principles, which means there are no official standards (unlike those with a protocol). To be a REST API (also known as a RESTful API), the interface must adhere to certain architectural constraints. It’s possible to build RESTful APIs with SOAP protocols, but the two standards are usually viewed as competing specifications.
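The difference in calling style is easiest to see side by side. The sketch below targets a hypothetical service, so the URLs, method name and parameters are assumptions made for illustration.

import requests

# JSON-RPC: one endpoint, and the operation travels in the request body.
rpc_request = {
    "jsonrpc": "2.0",
    "method": "getUser",      # hypothetical remote procedure
    "params": {"id": 42},
    "id": 1,
}
rpc_resp = requests.post("https://api.example.com/rpc", json=rpc_request, timeout=10)
print(rpc_resp.json())        # expects a single "result" (or "error") member

# REST: the resource lives in the URI, and the operation is the HTTP verb.
rest_resp = requests.get("https://api.example.com/users/42", timeout=10)
print(rest_resp.json())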
APIs, web services, and microservices
A web service is a software component that can be accessed via a web address. Therefore, by definition, web services require a network. As a web service exposes an application’s data and functionality, in effect, every web service is an API. However, not every API is a web service.
When using APIs, there are two common architectural approaches—service-oriented architecture (SOA) and microservices architecture.
- SOA is a software design style where the features are split up and made available as separate services within a network. Typically, SOA is implemented with web services, making the functional building blocks accessible through standard communication protocols. Developers can build these services from scratch, but they usually create them by exposing functions from legacy systems as service interfaces.
- Microservices architecture is an alternative architectural style that divides an application into smaller, independent components. Building the application as a collection of separate services makes it easier to test, maintain, and scale. This methodology has risen to prominence throughout the cloud computing era, enabling developers to work on one component independently of the others.
While SOA was a vital evolutionary step in application development, microservices architecture is built to scale, providing developers and enterprises with the agility and flexibility they need to create, modify, test, and deploy applications at a granular level, with shorter iteration cycles and more efficient use of cloud computing resources.
For a deeper dive on how these architectural approaches relate, see “SOA vs. microservices: What’s the difference?”
APIs and cloud architecture
It’s crucial to develop APIs fit for purpose in today’s world. Cloud native application development relies on connecting a microservices application architecture through your APIs to share data with external users, such as your customers.
The services within a microservices architecture use a common messaging framework, such as RESTful APIs, to communicate with one another without the friction of additional integration layers or data conversion transactions. Furthermore, you can drop, replace, or enhance any service or feature without any impact on the other services. This lightweight dynamic improves cloud resource optimization, paving the way for better API testing, performance and scalability.
APIs and IBM Cloud®
APIs will continue to be just one part of application modernization and transforming your organization as the demand for better customer experiences and more applications impacts business and IT operations.
When it comes to meeting such demands, a move toward greater automation will help. Ideally, it would start with small, measurably successful projects, which you can then scale and optimize for other processes and in other parts of your organization.
Working with IBM, you’ll have access to AI-powered automation capabilities, including prebuilt workflows, to help accelerate innovation by making every process more intelligent.
Take the next step:
- Explore IBM API Connect®, an intuitive and scalable API design platform to create, securely expose, manage and monetize APIs across cloud computing systems.
- Build skills to help you create developer communities to publish and share APIs and engage with them through a self-service portal in the Solution Developer: IBM API Connect curriculum.
- API Connect can also come integrated with other automation capabilities in IBM Cloud Pak® for Integration, a hybrid integration solution that provides an automated and closed-loop lifecycle across multiple styles of enterprise integration.
- For Business to Business API connections explore the IBM Sterling Supply Chain Business Network B2B API Gateway for secure connections between you, your customers, and your partners.
- Take our integration maturity assessment to evaluate your integration maturity level across critical dimensions and discover the actions you can take to get to the next level.
- Download our agile integration guide, which explores the merits of a container-based, decentralized, microservices-aligned approach for integrating solutions.
Get started with an IBM Cloud account today. | <urn:uuid:17833741-4716-4530-9f8f-8340e1ac7334> | CC-MAIN-2022-40 | https://www.ibm.com/cloud/learn/api | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00198.warc.gz | en | 0.916842 | 2,722 | 3.90625 | 4 |
Hot Standby Router Protocol (HSRP) is Cisco's standard for providing first-hop redundancy to IP hosts on a LAN that are configured with a default gateway address. It enables a set of router interfaces to work together to present the appearance of a single virtual router or default gateway to the hosts on a LAN.
A single router that is elected from the group is responsible for the forwarding of the packets that hosts send to the virtual router. This router is known as the active router. Another router is elected as the standby router. If the active router fails, the standby assumes the packet forwarding duties. Although an arbitrary number of routers may run HSRP, only the active router forwards the packets that are sent to the virtual router IP address.
Routers that run HSRP exchange HSRP information through HSRP hello packets. These packets are sent to the destination IP multicast address 224.0.0.2 on User Datagram Protocol (UDP) port 1985. IP multicast address 224.0.0.2 is a reserved multicast address used to reach all routers on a segment. The active router sources hello packets from its configured IP address and the HSRP virtual MAC address. The standby router sources hellos from its configured IP address and the burned-in MAC address (BIA). This use of source addressing is necessary so that HSRP routers can correctly identify each other.
The virtual MAC address is composed of 0000.0c07.ac**, where ** is the HSRP group number in hexadecimal for the respective interface. For example, HSRP group 1 uses the HSRP virtual MAC address 0000.0c07.ac01. Hosts on the adjoining LAN segment use the normal Address Resolution Protocol (ARP) process to resolve the associated MAC addresses.
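If you ever want to sanity-check the group-number-to-MAC mapping, it is just arithmetic on the well-known HSRPv1 prefix; a small Python helper (not a Cisco tool) illustrates it:

def hsrp_v1_virtual_mac(group: int) -> str:
    # HSRPv1 virtual MAC: 0000.0c07.acXX, where XX is the group number in hex.
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group numbers range from 0 to 255")
    return f"0000.0c07.ac{group:02x}"

print(hsrp_v1_virtual_mac(1))   # 0000.0c07.ac01
print(hsrp_v1_virtual_mac(50))  # 0000.0c07.ac32 (the group used later in this post)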
Even though an HSRP group can consist of multiple layer 3 devices, in a typical enterprise environment the distribution block (two aggregation switches) is configured with HSRP to provide gateway redundancy for all access layer VLANs. Below is a typical topology on which we will see how to configure HSRP.
When selecting the HSRP active router, it is always a good idea to make the spanning-tree root for a VLAN the HSRP active router for that VLAN.
DS01

vlan 50
!
interface Vlan50
 ip address 10.10.50.251 255.255.255.0

DS02

vlan 50
!
interface Vlan50
 ip address 10.10.50.252 255.255.255.0

DS02(config)#do sh span vlan 50
VLAN0050
  Spanning tree enabled protocol ieee
  Root ID    Priority    50
             Address     001a.e3a7.ff00
             This bridge is the root
To configure the HSRP parameters on this interface you use the command syntax “standby <HSRP_Group> <HSRP_Parameter>“. All configurable options are shown below (a few commonly configured features are highlighted).
CAT2(config-if)#standby ?
  <0-255>         group number
  authentication  Authentication
  delay           HSRP initialisation delay
  follow          Name of HSRP group to follow
  ip              Enable HSRP IPv4 and set the virtual IP address
  mac-refresh     Refresh MAC cache on switch by periodically sending packet from virtual mac address
  name            Redundancy name string
  preempt         Overthrow lower priority Active routers
  priority        Priority level
  redirect        Configure sending of ICMP Redirect messages with an HSRP virtual IP address as the gateway IP address
  timers          Hello and hold timers
  track           Priority tracking
  version         HSRP version
At a minimum you need to configure “standby <group> ip <virtual-IP>” in order to activate HSRP on an interface. In this example we will configure HSRP group number 50 (a value between 0 and 255). Therefore the virtual MAC address should be 0000.0c07.ac32 (50 in hex is 32). If you do not specify a group number, group 0 is assumed. So we will configure the “standby 50 ip 10.10.50.250” command on the DS01 & DS02 vlan 50 interfaces. You can verify the status of this HSRP group by issuing the “show standby vlan 50” command as shown below.
DS01#show standby vlan 50
Vlan50 - Group 50
  State is Standby
    3 state changes, last state change 00:08:02
  Virtual IP address is 10.10.50.250
  Active virtual MAC address is 0000.0c07.ac32
    Local virtual MAC address is 0000.0c07.ac32 (v1 default)
  Hello time 3 sec, hold time 10 sec
    Next hello sent in 0.192 secs
  Preemption disabled
  Active router is 10.10.50.252, priority 100 (expires in 8.368 sec)
  Standby router is local
  Priority 100 (default 100)
  Group name is "hsrp-Vl50-50" (default)

DS02#show standby vlan 50
Vlan50 - Group 50
  State is Active
    2 state changes, last state change 00:11:40
  Virtual IP address is 10.10.50.250
  Active virtual MAC address is 0000.0c07.ac32
    Local virtual MAC address is 0000.0c07.ac32 (v1 default)
  Hello time 3 sec, hold time 10 sec
    Next hello sent in 1.632 secs
  Preemption disabled
  Active router is local
  Standby router is 10.10.50.251, priority 100 (expires in 8.816 sec)
  Priority 100 (default 100)
  Group name is "hsrp-Vl50-50" (default)
As you can see, DS02 has become the active HSRP router. The HSRP priority value determines which router becomes active. In this case both have the same default priority of 100. If you want to ensure DS02 becomes HSRP active for this vlan, you can configure a higher priority value (between 1 and 255) on DS02. You can do that by using “standby 50 priority 200” on the DS02 vlan 50 interface.
In the event of a DS02 failure, DS01 will assume the HSRP active role. But even after DS02 comes back from the failure, DS01 will still act as the active router. If you want to change this behaviour (i.e. make DS02 active again when it is available) you have to configure “preempt” on DS02, using the “standby 50 preempt” command. When configuring preempt you can specify a delay before preemption occurs. It is good practice to think about your STP/IGP convergence and set a value suitable for your environment; otherwise leave the default settings.
CAT2(config-if)#standby 50 preempt delay ?
  minimum  Delay at least this long
  reload   Delay after reload
  sync     Wait for IP redundancy clients
If you want to secure HSRP, you can configure authentication for the HSRP communication.
DS02(config-if)#standby 50 authentication ?
  md5   Use MD5 authentication
  text  Plain text authentication

DS02(config-if)#standby 50 authentication md5 ?
  key-chain   Set key chain
  key-string  Set key string

*** This is how you do it with a Key String ***
DS02(config-if)#standby 50 authentication md5 key-string 0 MRN-CCIEW

*** This is how you do it with a Key-Chain ***
DS02(config-if)#standby 50 authentication md5 key-chain MRN
DS02(config)#key chain MRN
DS02(config-keychain)#?
Key-chain configuration commands:
  default  Set a command to its defaults
  exit     Exit from key-chain configuration mode
  key      Configure a key
  no       Negate a command or set its defaults

DS02(config-keychain)#key ?
  <0-2147483647>  Key identifier

DS02(config-keychain)#key 1 ?
  <cr>

DS02(config-keychain)#key 1
DS02(config-keychain-key)#?
Key-chain key configuration commands:
  accept-lifetime  Set accept lifetime of key
  default          Set a command to its defaults
  exit             Exit from key-chain key configuration mode
  key-string       Set key string
  no               Negate a command or set its defaults
  send-lifetime    Set send lifetime of key

DS02(config-keychain-key)#key-string ?
  0     Specifies an UNENCRYPTED password will follow
  7     Specifies a HIDDEN password will follow
  LINE  The UNENCRYPTED (cleartext) user password

DS02(config-keychain-key)#key-string 0 MRN
As you can see, the default hello time is 3 s and the default hold time is 10 s. If you want HSRP fail-over to occur more quickly you can change these values. In seconds, the minimum hello time is 1 s, but if you want fail-over to be even faster you can specify the hello time in milliseconds.
DS02(config-if)#standby 50 timers ?
  <1-254>  Hello interval in seconds
  msec     Specify hello interval in milliseconds

*** How to set Hello Time 333 ms & Hold Time 1s (or 1000 ms) ***
DS02(config-if)#standby 50 timers msec 333 msec 1000
Make sure you change these timer values on all routers in the same HSRP group. There are two versions of HSRP: version 1 and version 2. If you do not specify the version, it defaults to version 1. In our example you can configure version 2 with the “standby 50 version 2” command. What are the differences between v1 and v2? Here is the full list of differences.
1. In HSRP version 1, millisecond timer values are not advertised or learned. HSRP version 2 advertises and learns millisecond timer values. This change ensures stability of the HSRP groups in all cases.
2. The group numbers in version 1 are restricted to the range from 0 to 255. HSRP version 2 expands the group number range from 0 to 4095. For example, new MAC address range will be used, 0000.0C9F.Fyyy, where yyy = 000-FFF (0-4095).
3. HSRP version 2 uses the new IP multicast address 224.0.0.102 to send hello packets instead of the multicast address 224.0.0.2, which is used by version 1.
4. HSRP version 2 packet format includes a 6-byte identifier field that is used to uniquely identify the sender of the message. Typically, this field is populated with the interface MAC address. This improves troubleshooting network loops and configuration errors.
5. HSRP version 2 allows for future support of IPv6.
6. HSRP version 2 has a different packet format than HSRP version 1. The packet format uses a type-length-value (TLV) format. HSRP version 2 packets received by an HSRP version 1 router will have the type field mapped to the version field by HSRP version 1, and subsequently ignored.
7. Note that HSRP version 2 will not interoperate with HSRP version 1. However, the different versions can be run on different physical interfaces of the same router.
It looks like the 3750 switch does not support the HSRPv2 configuration:
CAT2(config-if)#do sh standby vlan 50
Vlan50 - Group 50 (version 2)
  State is Init (virtual MAC reservation failed)
    3 state changes, last state change 00:06:25
  Virtual IP address is 10.10.50.250
  Active virtual MAC address is unknown
    Local virtual MAC address is 0000.0c9f.f032 (v2 default)
  Hello time 333 msec, hold time 1 sec
  Authentication MD5, key-chain "MRN"
  Preemption enabled
  Active router is unknown
  Standby router is unknown
  Priority 200 (configured 200)
  Group name is "hsrp-Vl50-50" (default)
So here is my final configuration of the two switches, using HSRPv1:
IN DS01

key chain MRN
 key 1
  key-string MRN-CCIEW
!
interface Vlan50
 ip address 10.10.50.251 255.255.255.0
 standby 50 ip 10.10.50.250
 standby 50 timers msec 333 1
 standby 50 authentication md5 key-chain MRN

IN DS02

key chain MRN
 key 1
  key-string MRN-CCIEW
!
interface Vlan50
 ip address 10.10.50.252 255.255.255.0
 standby 50 ip 10.10.50.250
 standby 50 timers msec 333 1
 standby 50 priority 200
 standby 50 preempt
 standby 50 authentication md5 key-chain MRN
You can find more useful information from this HSRP-FAQ document from Cisco. | <urn:uuid:dd67fe52-058a-4f27-8f42-add684e13cd9> | CC-MAIN-2022-40 | https://mrncciew.com/2013/04/25/configuring-hsrp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00198.warc.gz | en | 0.763897 | 2,714 | 3.390625 | 3 |
The Navy is deploying a shipboard metal-producing 3D printer to test during RIMPAC exercises.
The advantages of installing 3D printing across the federal government could be huge, although the technology has been slow to reach that potential. That could finally be changing as the Navy has deployed the first 3D printer onboard a warship that is capable of printing reliable metal parts while underway at sea.
The government has been interested in 3D printing for a very long time. Back in 2015, I wrote an explainer-type story for Nextgov where experts talked about the many advantages that government would eventually gain from investing in 3D printing technology. But while those early printers were extremely interesting, they had limited use because of the substrate they used to create physical objects.
Early 3D printers only used a plastic-based substrate, which was generally fed into them on long spools, which would be melted and then repurposed into whatever object the creator wanted. The early printers were capable of producing some amazingly advanced projects, with some of them able to accept computer-aided design plan files for extreme precision. However, because of the substrate used, the final product was made of plastic, so it was of limited use. Yes, you could print a small combustion engine, a gear for a machine or a work of art, but trying to use the finished product in any practical way would probably cause it to melt or break.
The killer application for 3D printing in government came not from finding something useful that the government could print in plastic, but from improving the printers to be able to handle more durable raw materials, especially metals. Called additive manufacturing, certain 3D printers now allow the government to print products using everything from metals to composite fibers to concrete.
I recently hosted a roundtable discussion with experts in the field of additive manufacturing. Experts working in the field explained the many advances that 3D printing has made over the years, and how that is opening up new possibilities for government service.
“We have a system that prints stainless steel, metal tools and copper,” said Tony Higgins, Federal Leader for Markforged, one of the new leaders in additive manufacturing. “A lot of our customers are using this type of technology to create functional tools, custom parts, work holdings and fixtures.”
So far, the Army has been one of the biggest proponents of 3D printing for complex construction jobs. “We have a number of different systems that can print everything from concrete, to foams, to other types of materials,” said Megan Kreider, Mechanical Engineer for the U.S. Army Engineer Research and Development Center at the Construction Engineering Research Laboratory. Kreider recently worked on an Army project where an entire bridge was constructed of 3D parts that were printed using concrete and other heavy materials.
“You have to go through a structural engineer, and they outline what the reinforcement needs to be, how it’s going to be printed, and it’s highly interdisciplinary,” Kreider said. After that, the parts are printed, normally right on the job site, and then fitted together to form the structure.
On land, 3D printing structures in the military can save time and money for big projects. But at sea, having the ability to print a critical part on demand might be the difference between having a ship able to continue its mission and requiring it to return to port for repairs. That is why the Navy has been so interested in additive manufacturing as it evolved from more simple 3D printing.
Last year, the Navy installed a liquid metal printer manufactured by Xerox at the Naval Postgraduate School in Monterey, California. Called the ElemX Liquid Metal Additive Manufacturing machine, it is being used to test out manufacturing during deployments, and to reduce the long supply chains needed to support ships at sea.
Apparently, the testing on land went well, as the Navy announced that an additive manufacturing 3D printer is now installed on the Wasp-class amphibious assault ship USS Essex. The printer is being tested during the massive Rim of the Pacific—or RIMPAC—2022 combat exercises taking place over the summer. The Essex is the first ship to participate in the initial testing and evaluation of an additive manufacturing 3D printer during underway conditions at sea.
During RIMPAC, the 3D printer on the Essex will be tasked with printing many of the parts that Navy ships routinely require while on maneuvers. This includes training sailors how to quickly manufacture heat sinks, housings, bleed air valves, fuel adapters, valve covers and much more. The printer on the Essex can manufacture metal parts as large as 10-by-10 inches.
According to Lt. Cmdr. Nicolas Batista, the Aircraft Intermediate Maintenance Department (AIMD) officer aboard Essex, “Additive manufacturing has become a priority and it’s evident that it will provide a greater posture in warfighting efforts across the fleet, and will enhance expeditionary maintenance that contributes to our surface competitive edge.”
If the printer performs well during RIMPAC, the Navy could expand the role of those devices. Batista said in a Navy press release that the “Commander Naval Air Force, U.S. Pacific Fleet and Commander, Naval Air Systems Command have also initiated efforts to establish an AIMD work center, solely designed for the additive manufacturing concept, and are striving towards the capability of fabricating needed aircraft parts with a 3D printer.”
So it seems like if all goes well for the 3D printer during RIMPAC, that we may soon see more heavy metal manufacturing on the high seas, and more complex and larger parts being constructed by sailors, without any assistance or materials from back on land required.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys | <urn:uuid:c1357b79-3a6d-4730-9392-d063fa719046> | CC-MAIN-2022-40 | https://www.nextgov.com/emerging-tech/2022/07/heavy-metal-high-seas/374189/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00198.warc.gz | en | 0.961678 | 1,232 | 3.21875 | 3 |
OpenDNS released statistics about which websites were commonly blocked — and which websites users were frequently given access to — in 2010. The report additionally details the companies online scammers targeted in 2010, as well as where the majority of phishing websites were hosted.
“Overall, 2010 was all about social, and this trend is reflected in the data we’re seeing at OpenDNS. Facebook is both one of the most blocked and the most allowed websites, reflecting the push/pull of allowing social sites in schools and the workplace,” said OpenDNS Founder and CEO David Ulevitch.
“This trend was also apparent in the phishing data we analyzed, where Facebook and other websites focusing on social gaming were frequently the targets of online scammers.”
Key statistics from 2010 include:
- Facebook is both the #1 most frequently blocked website, and the #2 most frequently whitelisted website. More than 14 percent of all users who block websites on their networks choose to block Facebook.
- The most frequently blocked categories of content were related to online pornography. The proxy/anonymizers category, which contains sites users will use to try and circumvent Web content filtering settings, was the next most popular category of content to block.
- The top three most commonly blocked websites for business users in specific are Facebook, MySpace and YouTube.
- PayPal continues to be the most frequent target of phishing websites; it was targeted nine times more frequently than the next most frequent target, Facebook; 45 percent of all phishing attempts made in 2010 were targeting PayPal.
- Five of the top ten most-phished brands — Facebook, World of Warcraft, Sulake Corporation (makers of Habbo), Steam and Tibia (online games) — are associated with online and social games.
A PDF of the report is available here. | <urn:uuid:5cfa4095-5d5d-4da9-b443-48d32d8541ce> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2011/01/25/paypal-most-phished-facebook-most-blocked/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00198.warc.gz | en | 0.963961 | 379 | 2.609375 | 3 |